[jira] [Work logged] (HADOOP-16649) Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16649?focusedWorklogId=503560&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503560
 ]

ASF GitHub Bot logged work on HADOOP-16649:
---

Author: ASF GitHub Bot
Created on: 22/Oct/20 05:58
Start Date: 22/Oct/20 05:58
Worklog Time Spent: 10m 
  Work Description: aw-was-here edited a comment on pull request #2385:
URL: https://github.com/apache/hadoop/pull/2385#issuecomment-714249153


   Just to save everyone a lot of time and suffering:
   
   This approach will break a lot of things in very unexpected ways (a search for every place hadoop_add_param is called should make this clear). hadoop_add_param was specifically built for partial matches because the HADOOP_OPTS command line can't really do exact matches, and partial matching was a quick way to prevent duplicate options. The unit test failure in hadoop_finalize_hadoop_heap was intended as a hint that "yar, there be dragons here." I should have written better tests, but given that it took something like two years just to get most of this code in over the total @#$@#$ that was in Hadoop 2.x ...
   
   When I wrote the code originally, we didn't need exact matches anywhere (HADOOP_OPTIONAL_TOOLS hadn't been written yet). The code was written and committed to 3.x. Then the HADOOP_OPTIONAL_TOOLS code was written, but it would have been the only place where an exact match was useful and we didn't have one, sooo... I just re-used hadoop_add_param with the (clearly faulty) assumption that people would test their code on Hadoop 3.x. But the Azure team didn't bother to test with Hadoop 3.x until it was too late... At that point I was getting tired of the Hadoop politics and bailed, leaving this furball hanging around.
   
   Anyway, the *real* fix for this is to convert HADOOP_OPTIONAL_TOOLS to an array and then do an exact match by looping over the array. I think there is code to do that now. It might need some new helper code to turn the comma-delimited string into an array, but that shouldn't be hard.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503560)
Time Spent: 1h 20m  (was: 1h 10m)

> Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will 
> ignore hadoop-azure
> -
>
> Key: HADOOP-16649
> URL: https://issues.apache.org/jira/browse/HADOOP-16649
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 3.2.1
> Environment: Shell, but it also trickles down into all code using 
> `FileSystem` 
>Reporter: Tom Lous
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
> HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get 
> ignored.
> E.g. setting this:
> HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"
>  
>  with debug on:
>  
> DEBUG: Profiles: importing 
> /opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES 
> DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure hadoop-azure
>  
> whereas:
>  
> HADOOP_OPTIONAL_TOOLS="hadoop-azure"
>  
>  with debug on:
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure
>  
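
A minimal shell illustration of the failure mode shown in the debug output above (this is not the actual hadoop_add_param source; the variable and test are simplified):

```bash
# Illustration only, not the actual hadoop_add_param code: a substring-style
# membership test treats "hadoop-azure" as already present because it matches
# inside "hadoop-azure-datalake", so the second profile gets "declined".
HADOOP_SHELL_PROFILES="hadoop-azure-datalake"

if [[ "${HADOOP_SHELL_PROFILES}" =~ hadoop-azure ]]; then
  echo "declined hadoop-azure"
else
  echo "accepted hadoop-azure"
fi
```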



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16649) Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16649?focusedWorklogId=503559&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503559
 ]

ASF GitHub Bot logged work on HADOOP-16649:
---

Author: ASF GitHub Bot
Created on: 22/Oct/20 05:57
Start Date: 22/Oct/20 05:57
Worklog Time Spent: 10m 
  Work Description: aw-was-here commented on pull request #2385:
URL: https://github.com/apache/hadoop/pull/2385#issuecomment-714249153


   Just to save everyone a lot of time and suffering:
   
   This approach will break a lot of things in very unexpected ways (a search for every place hadoop_add_param is called should make this clear). hadoop_add_param was specifically built for partial matches because the HADOOP_OPTS command line can't really do exact matches, and partial matching was a quick way to prevent duplicate options. The unit test failure in hadoop_finalize_hadoop_heap was intended as a hint that "yar, there be dragons here." I should have written better tests, but given that it took something like two years just to get most of this code in over the total @#$@#$ that was in Hadoop 2.x ...
   
   When I wrote the code originally, we didn't need exact matches anywhere (HADOOP_OPTIONAL_TOOLS hadn't been written yet). The code was written and committed to 3.x. Then the HADOOP_OPTIONAL_TOOLS code was written, but it would have been the only place where an exact match was useful and we didn't have one, sooo... I just re-used hadoop_add_param with the (clearly faulty) assumption that people would test their code on Hadoop 3.x. But the Azure team didn't bother to test with Hadoop 3.x until it was too late... At that point I was getting tired of the Hadoop politics and bailed, leaving this furball hanging around.
   
   Anyway, the *real* fix for this is to create a new function that converts HADOOP_OPTIONAL_TOOLS to an array and then does an exact match by looping over the array. I think there is code to do that now.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503559)
Time Spent: 1h 10m  (was: 1h)

> Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will 
> ignore hadoop-azure
> -
>
> Key: HADOOP-16649
> URL: https://issues.apache.org/jira/browse/HADOOP-16649
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 3.2.1
> Environment: Shell, but it also trickles down into all code using 
> `FileSystem` 
>Reporter: Tom Lous
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
> HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get 
> ignored.
> E.g. setting this:
> HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"
>  
>  with debug on:
>  
> DEBUG: Profiles: importing 
> /opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES 
> DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure hadoop-azure
>  
> whereas:
>  
> HADOOP_OPTIONAL_TOOLS="hadoop-azure"
>  
>  with debug on:
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aw-was-here commented on pull request #2385: HADOOP-16649. hadoop_add_param function : change regexp test by iterative equality test

2020-10-21 Thread GitBox


aw-was-here commented on pull request #2385:
URL: https://github.com/apache/hadoop/pull/2385#issuecomment-714249153


   Just to save everyone a lot of time and suffering:
   
   This approach will break a lot of things in very unexpected ways (a search for every place hadoop_add_param is called should make this clear). hadoop_add_param was specifically built for partial matches because the HADOOP_OPTS command line can't really do exact matches, and partial matching was a quick way to prevent duplicate options. The unit test failure in hadoop_finalize_hadoop_heap was intended as a hint that "yar, there be dragons here." I should have written better tests, but given that it took something like two years just to get most of this code in over the total @#$@#$ that was in Hadoop 2.x ...
   
   When I wrote the code originally, we didn't need exact matches anywhere (HADOOP_OPTIONAL_TOOLS hadn't been written yet). The code was written and committed to 3.x. Then the HADOOP_OPTIONAL_TOOLS code was written, but it would have been the only place where an exact match was useful and we didn't have one, sooo... I just re-used hadoop_add_param with the (clearly faulty) assumption that people would test their code on Hadoop 3.x. But the Azure team didn't bother to test with Hadoop 3.x until it was too late... At that point I was getting tired of the Hadoop politics and bailed, leaving this furball hanging around.
   
   Anyway, the *real* fix for this is to create a new function that converts HADOOP_OPTIONAL_TOOLS to an array and then does an exact match by looping over the array. I think there is code to do that now.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aw-was-here edited a comment on pull request #2385: HADOOP-16649. hadoop_add_param function : change regexp test by iterative equality test

2020-10-21 Thread GitBox


aw-was-here edited a comment on pull request #2385:
URL: https://github.com/apache/hadoop/pull/2385#issuecomment-714249153


   Just to save everyone a lot of time and suffering:
   
   This approach will break a lot of things in very unexpected ways (a search for every place hadoop_add_param is called should make this clear). hadoop_add_param was specifically built for partial matches because the HADOOP_OPTS command line can't really do exact matches, and partial matching was a quick way to prevent duplicate options. The unit test failure in hadoop_finalize_hadoop_heap was intended as a hint that "yar, there be dragons here." I should have written better tests, but given that it took something like two years just to get most of this code in over the total @#$@#$ that was in Hadoop 2.x ...
   
   When I wrote the code originally, we didn't need exact matches anywhere (HADOOP_OPTIONAL_TOOLS hadn't been written yet). The code was written and committed to 3.x. Then the HADOOP_OPTIONAL_TOOLS code was written, but it would have been the only place where an exact match was useful and we didn't have one, sooo... I just re-used hadoop_add_param with the (clearly faulty) assumption that people would test their code on Hadoop 3.x. But the Azure team didn't bother to test with Hadoop 3.x until it was too late... At that point I was getting tired of the Hadoop politics and bailed, leaving this furball hanging around.
   
   Anyway, the *real* fix for this is to convert HADOOP_OPTIONAL_TOOLS to an array and then do an exact match by looping over the array. I think there is code to do that now. It might need some new helper code to turn the comma-delimited string into an array, but that shouldn't be hard.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2373: HDFS-15616. [SBN] Disable Observers to trigger edit Log roll

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2373:
URL: https://github.com/apache/hadoop/pull/2373#issuecomment-714207392


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  3s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  1s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  2s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 116m 25s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2373/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 202m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestDFSStorageStateRecovery |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
   |   | hadoop.hdfs.server.datanode.TestBlockReplacement |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestErasureCodeBenchmarkThroughput |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.server.mover.TestMover |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2373/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2373 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f989886f904a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7f8ef76c483 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[GitHub] [hadoop] huangtianhua commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-21 Thread GitBox


huangtianhua commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-714146440


   @liuml07 Hi, sorry to disturb you. Would you help review this PR? It changes the ordinal of NVDIMM so that it becomes the last storage type, and we will propose another PR to bump up the NamenodeLayoutVersion. Thanks very much.
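
For context on why the ordinal matters, here is a simplified, hypothetical sketch (not the real org.apache.hadoop.fs.StorageType source): Java enum ordinals follow declaration order, so moving NVDIMM to the end changes its ordinal, which is presumably why a NamenodeLayoutVersion bump is mentioned as a follow-up.

```java
// Simplified, hypothetical sketch -- not the real org.apache.hadoop.fs.StorageType.
public enum StorageType {
  RAM_DISK,
  SSD,
  DISK,
  ARCHIVE,
  NVDIMM;   // declared last, so it now has the highest ordinal

  public static void main(String[] args) {
    // Anything persisted or encoded by ordinal must account for the reordering.
    System.out.println(NVDIMM.ordinal()); // prints 4 with this declaration order
  }
}
```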



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2350: HADOOP-17292. Using lz4-java in Lz4Codec

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-714127360


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 50s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  2s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 47s |  |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  19m 14s | 
[/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/10/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 36 new + 136 unchanged - 
36 fixed = 172 total (was 172)  |
   | +1 :green_heart: |  golang  |  19m 14s |  |  the patch passed  |
   | -1 :x: |  javac  |  19m 14s | 
[/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/10/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 2051 unchanged - 
1 fixed = 2053 total (was 2052)  |
   | +1 :green_heart: |  compile  |  17m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  17m 28s | 
[/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/10/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 11 new + 161 
unchanged - 11 fixed = 172 total (was 172)  |
   | +1 :green_heart: |  golang  |  17m 28s |  |  the patch passed  |
   | -1 :x: |  javac  |  17m 28s | 
[/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/10/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 1947 
unchanged - 0 fixed = 1948 total (was 1947)  |
   | -0 :warning: |  checkstyle  |   3m 17s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/10/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 2 new + 132 unchanged - 1 fixed = 134 total (was 
133)  |
   | +1 :green_heart: |  mvnsite  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 46s |  |  There were no new 
shelldocs issues.  |
   | -1 :x: |  whitespace  |   0m  0s | 

[jira] [Work logged] (HADOOP-17292) Using lz4-java in Lz4Codec

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17292?focusedWorklogId=503468&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503468
 ]

ASF GitHub Bot logged work on HADOOP-17292:
---

Author: ASF GitHub Bot
Created on: 22/Oct/20 01:18
Start Date: 22/Oct/20 01:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-714127360


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 50s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  2s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 47s |  |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  19m 14s | 
[/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/10/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 36 new + 136 unchanged - 
36 fixed = 172 total (was 172)  |
   | +1 :green_heart: |  golang  |  19m 14s |  |  the patch passed  |
   | -1 :x: |  javac  |  19m 14s | 
[/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/10/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 2051 unchanged - 
1 fixed = 2053 total (was 2052)  |
   | +1 :green_heart: |  compile  |  17m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  17m 28s | 
[/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/10/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 11 new + 161 
unchanged - 11 fixed = 172 total (was 172)  |
   | +1 :green_heart: |  golang  |  17m 28s |  |  the patch passed  |
   | -1 :x: |  javac  |  17m 28s | 
[/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/10/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 1947 
unchanged - 0 fixed = 1948 total (was 1947)  |
   | -0 :warning: |  checkstyle  |   3m 17s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/10/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 2 new + 132 unchanged - 1 fixed = 

[jira] [Work logged] (HADOOP-17292) Using lz4-java in Lz4Codec

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17292?focusedWorklogId=503467&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503467
 ]

ASF GitHub Bot logged work on HADOOP-17292:
---

Author: ASF GitHub Bot
Created on: 22/Oct/20 01:17
Start Date: 22/Oct/20 01:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-714125530


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 13s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 37s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 52s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 36s |  |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  20m 45s | 
[/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/9/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 24 new + 148 unchanged - 
24 fixed = 172 total (was 172)  |
   | +1 :green_heart: |  golang  |  20m 45s |  |  the patch passed  |
   | -1 :x: |  javac  |  20m 45s | 
[/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/9/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 2051 unchanged - 
1 fixed = 2053 total (was 2052)  |
   | +1 :green_heart: |  compile  |  18m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  18m 23s | 
[/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/9/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 29 new + 143 
unchanged - 29 fixed = 172 total (was 172)  |
   | +1 :green_heart: |  golang  |  18m 23s |  |  the patch passed  |
   | -1 :x: |  javac  |  18m 23s | 
[/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/9/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 1947 
unchanged - 0 fixed = 1948 total (was 1947)  |
   | -0 :warning: |  checkstyle  |   3m 21s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/9/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 2 new + 132 unchanged - 1 fixed = 134 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2350: HADOOP-17292. Using lz4-java in Lz4Codec

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-714125530


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 13s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 37s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 52s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 36s |  |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  20m 45s | 
[/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/9/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 24 new + 148 unchanged - 
24 fixed = 172 total (was 172)  |
   | +1 :green_heart: |  golang  |  20m 45s |  |  the patch passed  |
   | -1 :x: |  javac  |  20m 45s | 
[/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/9/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 2051 unchanged - 
1 fixed = 2053 total (was 2052)  |
   | +1 :green_heart: |  compile  |  18m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  18m 23s | 
[/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/9/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 29 new + 143 
unchanged - 29 fixed = 172 total (was 172)  |
   | +1 :green_heart: |  golang  |  18m 23s |  |  the patch passed  |
   | -1 :x: |  javac  |  18m 23s | 
[/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/9/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 1947 
unchanged - 0 fixed = 1948 total (was 1947)  |
   | -0 :warning: |  checkstyle  |   3m 21s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/9/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 2 new + 132 unchanged - 1 fixed = 134 total (was 
133)  |
   | +1 :green_heart: |  mvnsite  |   2m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 34s |  |  There were no new 
shelldocs issues.  |
   | -1 :x: |  whitespace  |   0m  0s | 

[GitHub] [hadoop] Jing9 commented on pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-10-21 Thread GitBox


Jing9 commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-713921549


   Thanks for working on this, Leon! The patch looks good to me in general. I still have a couple of questions for discussion:
   1. Adding the percentage and mount directly to FsVolumeImpl may not be very clear. How about adding a new wrapper class for the group of volumes that indicates their common mount and capacity distribution? (A rough sketch of this idea follows below.)
   2. "reserved" is shared by the FsVolumeImpl instances on the same mount, so we need to verify that the reserved space of the mount is not counted twice.
   3. A later update of the DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE configuration may affect the capacity usage percentage calculation, although this scenario is rare in practice.
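
A rough, hypothetical sketch of such a per-mount wrapper (the class and field names are illustrative, not taken from the HDFS-15548 patch):

```java
// Hypothetical illustration only: names are not from the HDFS-15548 patch.
// One wrapper object per mount owns the DISK/ARCHIVE capacity split, so the
// per-mount reserved space is subtracted exactly once.
import java.util.ArrayList;
import java.util.List;

class MountVolumeGroup {
  private final String mount;              // common mount point shared by the volumes
  private final double archivePercentage;  // fraction of usable capacity given to ARCHIVE
  private final List<String> volumeDirs = new ArrayList<>();

  MountVolumeGroup(String mount, double archivePercentage) {
    this.mount = mount;
    this.archivePercentage = archivePercentage;
  }

  void addVolume(String volumeDir) {
    volumeDirs.add(volumeDir);
  }

  /** Capacity for one storage type; reserved space is subtracted once per mount. */
  long capacityFor(boolean archive, long mountCapacity, long reservedPerMount) {
    long usable = Math.max(0L, mountCapacity - reservedPerMount);
    double share = archive ? archivePercentage : 1.0 - archivePercentage;
    return (long) (usable * share);
  }
}
```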



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16202) Stabilize openFile() and adopt internally

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=503424&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503424
 ]

ASF GitHub Bot logged work on HADOOP-16202:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 22:19
Start Date: 21/Oct/20 22:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2168:
URL: https://github.com/apache/hadoop/pull/2168#issuecomment-713909641


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  28m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 41s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 15s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 24s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 37s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 17s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 46s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/12/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 5 new + 441 unchanged - 1 fixed = 446 total (was 
442)  |
   | +1 :green_heart: |  mvnsite  |   4m  9s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/12/artifact/out/whitespace-eol.txt)
 |  The patch has 9 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  14m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   0m 45s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/12/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 
88 unchanged - 0 fixed = 89 total (was 88)  |
   | +1 :green_heart: |  findbugs  |   6m 52s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 41s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   4m 22s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  13m 36s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 44s |  |  hadoop-aws in the patch passed. 
 |
  

[GitHub] [hadoop] hadoop-yetus commented on pull request #2168: HADOOP-16202. Enhance/Stabilize openFile()

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2168:
URL: https://github.com/apache/hadoop/pull/2168#issuecomment-713909641


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  28m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 41s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 15s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 24s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 37s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 17s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 46s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/12/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 5 new + 441 unchanged - 1 fixed = 446 total (was 
442)  |
   | +1 :green_heart: |  mvnsite  |   4m  9s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/12/artifact/out/whitespace-eol.txt)
 |  The patch has 9 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  14m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   0m 45s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/12/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 
88 unchanged - 0 fixed = 89 total (was 88)  |
   | +1 :green_heart: |  findbugs  |   6m 52s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 41s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   4m 22s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  13m 36s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 44s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 244m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2168 |
   | Optional 

[jira] [Work logged] (HADOOP-17292) Using lz4-java in Lz4Codec

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17292?focusedWorklogId=503420&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503420
 ]

ASF GitHub Bot logged work on HADOOP-17292:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 21:56
Start Date: 21/Oct/20 21:56
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-713900131


   kindly ping @steveloughran 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503420)
Time Spent: 6h 20m  (was: 6h 10m)

> Using lz4-java in Lz4Codec
> --
>
> Key: HADOOP-17292
> URL: https://issues.apache.org/jira/browse/HADOOP-17292
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the lz4 codec, which has several 
> disadvantages:
> It requires the native libhadoop to be installed in the system 
> LD_LIBRARY_PATH, and it has to be installed separately on each node of the 
> cluster, in container images, or in local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
> It requires extra configuration of java.library.path to load the natives, and 
> it results in higher application deployment and maintenance cost for users.
> Projects such as Spark use [lz4-java|https://github.com/lz4/lz4-java], which 
> is a JNI-based implementation. It ships the native binaries inside the jar 
> file and can automatically load them into the JVM from the jar without any 
> setup. If a native implementation cannot be found for a platform, it can fall 
> back to a pure-Java implementation of lz4.
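
A rough sketch of the lz4-java usage model described above (illustrative code, not from the Hadoop patch):

```java
// Rough sketch of the lz4-java usage model, not code from the Hadoop patch.
// LZ4Factory.fastestInstance() prefers the JNI bindings bundled in the jar and
// falls back to a pure-Java implementation when no native binding is available.
import java.nio.charset.StandardCharsets;
import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4FastDecompressor;

public class Lz4JavaExample {
  public static void main(String[] args) {
    byte[] data = "hello lz4-java".getBytes(StandardCharsets.UTF_8);

    LZ4Factory factory = LZ4Factory.fastestInstance();
    LZ4Compressor compressor = factory.fastCompressor();
    byte[] compressed = compressor.compress(data);

    LZ4FastDecompressor decompressor = factory.fastDecompressor();
    byte[] restored = decompressor.decompress(compressed, data.length);

    System.out.println(new String(restored, StandardCharsets.UTF_8));
  }
}
```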



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on pull request #2350: HADOOP-17292. Using lz4-java in Lz4Codec

2020-10-21 Thread GitBox


sunchao commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-713900131


   kindly ping @steveloughran 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17292) Using lz4-java in Lz4Codec

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17292?focusedWorklogId=503417&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503417
 ]

ASF GitHub Bot logged work on HADOOP-17292:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 21:52
Start Date: 21/Oct/20 21:52
Worklog Time Spent: 10m 
  Work Description: viirya commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-713898244


   @dbtsai Resolved. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503417)
Time Spent: 6h 10m  (was: 6h)

> Using lz4-java in Lz4Codec
> --
>
> Key: HADOOP-17292
> URL: https://issues.apache.org/jira/browse/HADOOP-17292
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the lz4 codec, which has several 
> disadvantages:
> It requires the native libhadoop to be installed in the system 
> LD_LIBRARY_PATH, and it has to be installed separately on each node of the 
> cluster, in container images, or in local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
> It requires extra configuration of java.library.path to load the natives, and 
> it results in higher application deployment and maintenance cost for users.
> Projects such as Spark use [lz4-java|https://github.com/lz4/lz4-java], which 
> is a JNI-based implementation. It ships the native binaries inside the jar 
> file and can automatically load them into the JVM from the jar without any 
> setup. If a native implementation cannot be found for a platform, it can fall 
> back to a pure-Java implementation of lz4.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on pull request #2350: HADOOP-17292. Using lz4-java in Lz4Codec

2020-10-21 Thread GitBox


viirya commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-713898244


   @dbtsai Resolved. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17244) HADOOP-17244. S3A directory delete tombstones dir markers prematurely.

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17244?focusedWorklogId=503409&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503409
 ]

ASF GitHub Bot logged work on HADOOP-17244:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 21:43
Start Date: 21/Oct/20 21:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2310:
URL: https://github.com/apache/hadoop/pull/2310#issuecomment-713894510


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 18s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  7s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  27m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  27m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  23m 18s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m  3s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/8/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 9 new + 63 unchanged - 0 fixed = 72 total (was 
63)  |
   | +1 :green_heart: |  mvnsite  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  19m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m  3s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  11m 22s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 37s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 211m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2310 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 22593d2bfa84 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7f8ef76c483 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2310: HADOOP-17244. S3A directory delete tombstones dir markers prematurely

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2310:
URL: https://github.com/apache/hadoop/pull/2310#issuecomment-713894510


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 18s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  7s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  27m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  27m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  23m 18s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m  3s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/8/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 9 new + 63 unchanged - 0 fixed = 72 total (was 
63)  |
   | +1 :green_heart: |  mvnsite  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  19m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m  3s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  11m 22s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 37s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 211m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2310 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 22593d2bfa84 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7f8ef76c483 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/8/testReport/ |
   | Max. process+thread count | 1355 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 


[jira] [Work logged] (HADOOP-17292) Using lz4-java in Lz4Codec

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17292?focusedWorklogId=503388=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503388
 ]

ASF GitHub Bot logged work on HADOOP-17292:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 20:39
Start Date: 21/Oct/20 20:39
Worklog Time Spent: 10m 
  Work Description: dbtsai commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-713864391


   @viirya can you resolve the conflict? Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503388)
Time Spent: 6h  (was: 5h 50m)

> Using lz4-java in Lz4Codec
> --
>
> Key: HADOOP-17292
> URL: https://issues.apache.org/jira/browse/HADOOP-17292
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the lz4 codec, which has several 
> disadvantages:
> It requires the native libhadoop to be installed in the system 
> LD_LIBRARY_PATH, and it has to be installed separately on each node of the 
> clusters, in container images, or in local test environments, which adds 
> huge complexity from a deployment point of view. In some environments, it 
> requires compiling the natives from source, which is non-trivial. Also, this 
> approach is platform dependent; the binary may not work on a different 
> platform, so it requires recompilation.
> It requires extra configuration of java.library.path to load the natives, 
> and it results in higher application deployment and maintenance cost for 
> users.
> Projects such as Spark use [lz4-java|https://github.com/lz4/lz4-java], which 
> is a JNI-based implementation. It bundles the native binaries in the jar 
> file and can automatically load them into the JVM from the jar without any 
> setup. If a native implementation cannot be found for a platform, it falls 
> back to the pure-Java implementation of lz4.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17321) Improve job submitter framework path handling

2020-10-21 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218562#comment-17218562
 ] 

Ahmed Hussein commented on HADOOP-17321:


My bad, I did not see HADOOP-17119.

> Improve job submitter framework path handling
> -
>
> Key: HADOOP-17321
> URL: https://issues.apache.org/jira/browse/HADOOP-17321
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: refactor
> Attachments: HADOOP-17321.001.patch
>
>
> [~daryn] pointed out that {{HttpServer2#bindForPortRange()}} handles IOE and 
> then checks whether the exception is an instance of {{BindException}}.
> This Jira improves the handling of that exception.
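> A hypothetical sketch of the kind of handling being discussed: catching 
> {{BindException}} directly instead of catching {{IOException}} and checking 
> with {{instanceof}}. The class and method below are illustrative only, not 
> the actual {{HttpServer2}} code.
> {code:java}
> import java.io.IOException;
> import java.net.BindException;
> import java.net.InetSocketAddress;
> import java.net.ServerSocket;
> 
> public class PortRangeBinder {
>   /** Try every port in [low, high]; rethrow the last BindException if none is free. */
>   static ServerSocket bindForPortRange(String host, int low, int high) throws IOException {
>     BindException lastBindFailure = null;
>     for (int port = low; port <= high; port++) {
>       ServerSocket socket = new ServerSocket();
>       try {
>         socket.bind(new InetSocketAddress(host, port));
>         return socket;
>       } catch (BindException e) {
>         // The specific catch replaces "catch (IOException) + instanceof BindException".
>         socket.close();
>         lastBindFailure = e;
>       }
>     }
>     throw lastBindFailure != null
>         ? lastBindFailure
>         : new BindException("Empty port range " + low + "-" + high);
>   }
> }
> {code}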



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17321) Improve job submitter framework path handling

2020-10-21 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17321:
---
Release Note: I did not see HADOOP-17119
  Resolution: Invalid
  Status: Resolved  (was: Patch Available)

> Improve job submitter framework path handling
> -
>
> Key: HADOOP-17321
> URL: https://issues.apache.org/jira/browse/HADOOP-17321
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: refactor
> Attachments: HADOOP-17321.001.patch
>
>
> [~daryn] pointed out that {{HttpServer2#bindForPortRange()}} handles IOE and 
> then checks whether the exception is an instance of {{BindException}}.
> This Jira improves the handling of that exception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17126) implement non-guava Precondition checkNotNull

2020-10-21 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218506#comment-17218506
 ] 

Ahmed Hussein commented on HADOOP-17126:


{quote}I don't want the overhead of either a lambda or a string when testing 
for a condition that should rarely if ever happen.
{quote}

The patch is not introducing that overhead. The overhead of constructing the 
arguments is the responsibility of the caller, right?
If the {{Vararg}} is passed by the caller expecting the string to be 
evaluated by {{Preconditions}}, then it should be handled by the callee.
Of course, unless we go through the entire code base and change the arguments 
sent to {{Preconditions.checkNotNull}}.

{quote}Preconditions is under the Apache License so why not copy it to 
org.apache.hadoop.util.Preconditions and just remove the unnecessary annotation 
imports?{quote}

I checked that option: {{org.apache.hadoop.util.Preconditions}} has different 
behavior/signatures compared to the Guava Preconditions, so it would break the 
code.

> implement non-guava Precondition checkNotNull
> -
>
> Key: HADOOP-17126
> URL: https://issues.apache.org/jira/browse/HADOOP-17126
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17126.001.patch, HADOOP-17126.002.patch
>
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to create {{checkNotNull}} in a new package dubbed {{unguava}}.
>  +The plan is as follows+
>  * create a new {{package org.apache.hadoop.util.unguava;}}
>  * create a class {{Validate}}
>  * implement {{org.apache.hadoop.util.unguava.Validate}} with the 
> following interface:
>  ** {{checkNotNull(final T obj)}}
>  ** {{checkNotNull(final T reference, final Object errorMessage)}}
>  ** {{checkNotNull(final T obj, final String message, final Object... 
> values)}}
>  ** {{checkNotNull(final T obj, final Supplier msgSupplier)}}
>  * Guava Preconditions used {{Strings.lenientFormat}}, which suppressed 
> exceptions caused by string formatting of the exception message. So, in 
> order to avoid changing the behavior, the implementation catches exceptions 
> triggered by building the message (IllegalFormat, InsufficientArg, 
> NullPointer, etc.)
>  * After merging the new class, we can replace 
> {{guava.Preconditions.checkNotNull}} with {{unguava.Validate.checkNotNull}}
>  * We need the change to go into trunk, 3.1, 3.2, and 3.3
>  
> Similar Jiras will be created to implement checkState, checkArgument, and 
> checkIndex



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17321) Improve job submitter framework path handling

2020-10-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218503#comment-17218503
 ] 

Hadoop QA commented on HADOOP-17321:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
26s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 1 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 25m 
57s{color} | 
[/branch-mvninstall-root.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/102/artifact/out/branch-mvninstall-root.txt]
 | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
30s{color} | 
[/branch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/102/artifact/out/branch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt]
 | {color:red} root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
29s{color} | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/102/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt]
 | {color:red} root in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | 
[/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/102/artifact/out/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt]
 | {color:orange} The patch fails to run checkstyle in hadoop-common {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/102/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt]
 | {color:red} hadoop-common in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
1m 35s{color} |  | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/102/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt]
 | {color:red} hadoop-common in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/102/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt]
 | {color:red} hadoop-common in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
8s{color} |  | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | 
[/branch-findbugs-hadoop-common-project_hadoop-common.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/102/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common.txt]
 | {color:red} hadoop-common in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/102/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt]
 | {color:red} hadoop-common in the patch failed. {color} |
| 

[jira] [Commented] (HADOOP-17126) implement non-guava Precondition checkNotNull

2020-10-21 Thread Daryn Sharp (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218496#comment-17218496
 ] 

Daryn Sharp commented on HADOOP-17126:
--

I don't want the overhead of either a lambda or a string when testing for a 
condition that should rarely if ever happen.

Preconditions is under the Apache License so why not copy it to 
org.apache.hadoop.util.Preconditions and just remove the unnecessary annotation 
imports?  Then conversion is straightforward with minimal hassle/risk of merge 
conflicts in the various branches:

fgrep -rl "com.google.common.base.Preconditions" $HADOOP_TREE | perl -pi -e 
's/import\s+com.google.common.base.Preconditions/import 
org.apache.hadoop.util.Preconditions/'

> implement non-guava Precondition checkNotNull
> -
>
> Key: HADOOP-17126
> URL: https://issues.apache.org/jira/browse/HADOOP-17126
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17126.001.patch, HADOOP-17126.002.patch
>
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to create {{checkNotNull}} in a new package dubbed {{unguava}}.
>  +The plan is as follows+
>  * create a new {{package org.apache.hadoop.util.unguava;}}
>  * create a class {{Validate}}
>  * implement {{org.apache.hadoop.util.unguava.Validate}} with the 
> following interface:
>  ** {{checkNotNull(final T obj)}}
>  ** {{checkNotNull(final T reference, final Object errorMessage)}}
>  ** {{checkNotNull(final T obj, final String message, final Object... 
> values)}}
>  ** {{checkNotNull(final T obj, final Supplier msgSupplier)}}
>  * Guava Preconditions used {{Strings.lenientFormat}}, which suppressed 
> exceptions caused by string formatting of the exception message. So, in 
> order to avoid changing the behavior, the implementation catches exceptions 
> triggered by building the message (IllegalFormat, InsufficientArg, 
> NullPointer, etc.)
>  * After merging the new class, we can replace 
> {{guava.Preconditions.checkNotNull}} with {{unguava.Validate.checkNotNull}}
>  * We need the change to go into trunk, 3.1, 3.2, and 3.3
>  
> Similar Jiras will be created to implement checkState, checkArgument, and 
> checkIndex



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17126) implement non-guava Precondition checkNotNull

2020-10-21 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218489#comment-17218489
 ] 

Ahmed Hussein commented on HADOOP-17126:


[~daryn], Thanks for the feedback

The code inside hadoop uses calls such as:
{code:java}
  public static <T> T checkNotNull(final T obj, final String message,
      final Object... values)
{code}

The above signature is not offered by Objects.requireNonNull().
The problem is that I could not use {{String.format()}} directly because that 
would change the semantics of the code. Therefore, exceptions from 
{{String.format()}} have to be suppressed.

The question is: deferring the evaluation of the message is a trade-off between 
the cost of constructing a lambda vs. constructing a string object. Which one 
is more expensive?
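
To make the trade-off concrete, here is a minimal sketch (not the attached 
patch) of a varargs {{checkNotNull}} that only builds the message when the 
check fails, while keeping Guava's lenient formatting behaviour. Class and 
package names are illustrative.
{code:java}
import java.util.Arrays;

public final class Validate {
  private Validate() {
  }

  /** Null check whose message is only built when the check actually fails. */
  public static <T> T checkNotNull(final T obj, final String message,
      final Object... values) {
    if (obj == null) {
      String msg;
      try {
        msg = String.format(message, values);
      } catch (RuntimeException e) {
        // Mirror Guava's lenient formatting: never let a bad format string
        // (IllegalFormatException etc.) hide the original null failure.
        msg = message + " " + Arrays.toString(values);
      }
      throw new NullPointerException(msg);
    }
    return obj;
  }
}
{code}
On the non-failing path the only allocation is the varargs array created at 
the call site, which is exactly the caller-side cost discussed above.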



> implement non-guava Precondition checkNotNull
> -
>
> Key: HADOOP-17126
> URL: https://issues.apache.org/jira/browse/HADOOP-17126
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17126.001.patch, HADOOP-17126.002.patch
>
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to create {{checkNotNull}} in a new package dubbed {{unguava}}.
>  +The plan is as follows+
>  * create a new {{package org.apache.hadoop.util.unguava;}}
>  * create a class {{Validate}}
>  * implement {{org.apache.hadoop.util.unguava.Validate}} with the 
> following interface:
>  ** {{checkNotNull(final T obj)}}
>  ** {{checkNotNull(final T reference, final Object errorMessage)}}
>  ** {{checkNotNull(final T obj, final String message, final Object... 
> values)}}
>  ** {{checkNotNull(final T obj, final Supplier msgSupplier)}}
>  * Guava Preconditions used {{Strings.lenientFormat}}, which suppressed 
> exceptions caused by string formatting of the exception message. So, in 
> order to avoid changing the behavior, the implementation catches exceptions 
> triggered by building the message (IllegalFormat, InsufficientArg, 
> NullPointer, etc.)
>  * After merging the new class, we can replace 
> {{guava.Preconditions.checkNotNull}} with {{unguava.Validate.checkNotNull}}
>  * We need the change to go into trunk, 3.1, 3.2, and 3.3
>  
> Similar Jiras will be created to implement checkState, checkArgument, and 
> checkIndex



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2373: HDFS-15616. [SBN] Disable Observers to trigger edit Log roll

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2373:
URL: https://github.com/apache/hadoop/pull/2373#issuecomment-713775945


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  28m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 57s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m 25s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 116m 42s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2373/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 241m  7s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestMultipleNNPortQOP |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestDecommission |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestDFSOutputStream |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.TestViewDistributedFileSystem |
   |   | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.TestSetrepDecreasing |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2373/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2373 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1b7194d6cd5f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7f8ef76c483 |
   | Default Java | Private 

[jira] [Commented] (HADOOP-17126) implement non-guava Precondition checkNotNull

2020-10-21 Thread Daryn Sharp (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218472#comment-17218472
 ] 

Daryn Sharp commented on HADOOP-17126:
--

This is a bit of overkill. Objects.requireNonNull should be sufficient. A 
lambda that acts as a closure is an object instantiation. It's not acceptable, 
from a performance perspective, to add object allocation overhead for a null 
check that should rarely, if ever, fail.

> implement non-guava Precondition checkNotNull
> -
>
> Key: HADOOP-17126
> URL: https://issues.apache.org/jira/browse/HADOOP-17126
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17126.001.patch, HADOOP-17126.002.patch
>
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to create {{checkNotNull}} in a new package dubbed {{unguava}}.
>  +The plan is as follows+
>  * create a new {{package org.apache.hadoop.util.unguava;}}
>  * create a class {{Validate}}
>  * implement {{org.apache.hadoop.util.unguava.Validate}} with the 
> following interface:
>  ** {{checkNotNull(final T obj)}}
>  ** {{checkNotNull(final T reference, final Object errorMessage)}}
>  ** {{checkNotNull(final T obj, final String message, final Object... 
> values)}}
>  ** {{checkNotNull(final T obj, final Supplier msgSupplier)}}
>  * Guava Preconditions used {{Strings.lenientFormat}}, which suppressed 
> exceptions caused by string formatting of the exception message. So, in 
> order to avoid changing the behavior, the implementation catches exceptions 
> triggered by building the message (IllegalFormat, InsufficientArg, 
> NullPointer, etc.)
>  * After merging the new class, we can replace 
> {{guava.Preconditions.checkNotNull}} with {{unguava.Validate.checkNotNull}}
>  * We need the change to go into trunk, 3.1, 3.2, and 3.3
>  
> Similar Jiras will be created to implement checkState, checkArgument, and 
> checkIndex



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17321) Improve job submitter framework path handling

2020-10-21 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17321:
---
Attachment: HADOOP-17321.001.patch
Status: Patch Available  (was: In Progress)

> Improve job submitter framework path handling
> -
>
> Key: HADOOP-17321
> URL: https://issues.apache.org/jira/browse/HADOOP-17321
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: refactor
> Attachments: HADOOP-17321.001.patch
>
>
> [~daryn] pointed out that {{HttpServer2#bindForPortRange()}} handles IOE and 
> then checks whether the exception is an instance of {{BindException}}.
> This Jira improves the handling of that exception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17313) FileSystem.get to support slow-to-instantiate FS clients

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17313?focusedWorklogId=503322=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503322
 ]

ASF GitHub Bot logged work on HADOOP-17313:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 17:45
Start Date: 21/Oct/20 17:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-713744063


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 28s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 26s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  21m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  18m 40s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 56s | 
[/diff-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/2/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 3 new + 197 
unchanged - 0 fixed = 200 total (was 197)  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 26s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  8s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 174m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2396 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ecc70d53f09d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7f8ef76c483 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/2/testReport/ |
   | Max. process+thread count | 1878 (vs. ulimit of 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2396: HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients.

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-713744063


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 28s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 26s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  21m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  18m 40s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 56s | 
[/diff-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/2/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 3 new + 197 
unchanged - 0 fixed = 200 total (was 197)  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 26s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  8s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 174m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2396 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ecc70d53f09d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7f8ef76c483 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/2/testReport/ |
   | Max. process+thread count | 1878 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2396/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.1.3 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-17320) Update apache/hadoop:3 to 3.3.0 release

2020-10-21 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218403#comment-17218403
 ] 

Attila Doroszlai commented on HADOOP-17320:
---

I'd guess Infra has set up Docker Hub rules for automatic build on the 
{{docker-hadoop-2}} and {{docker-hadoop-3}} branches.

> Update apache/hadoop:3 to 3.3.0 release
> ---
>
> Key: HADOOP-17320
> URL: https://issues.apache.org/jira/browse/HADOOP-17320
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> {{apache/hadoop:3}} docker image should be updated to the [Hadoop 3.3.0 
> release|https://hadoop.apache.org/release/3.3.0.html].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17321) Improve job submitter framework path handling

2020-10-21 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17321 started by Ahmed Hussein.
--
> Improve job submitter framework path handling
> -
>
> Key: HADOOP-17321
> URL: https://issues.apache.org/jira/browse/HADOOP-17321
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> [~daryn] pointed out that {{HttpServer2#bindForPortRange()}} handles IOE and 
> then checks whether the exception is an instance of {{BindException}}.
> This Jira improves the handling of that exception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17321) Improve job submitter framework path handling

2020-10-21 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17321:
--

 Summary: Improve job submitter framework path handling
 Key: HADOOP-17321
 URL: https://issues.apache.org/jira/browse/HADOOP-17321
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


[~daryn] pointed out that {{HttpServer2#bindForPortRange()}} handles IOE and 
then checks whether the exception is an instance of {{BindException}}.

This Jira improves the handling of that exception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17320) Update apache/hadoop:3 to 3.3.0 release

2020-10-21 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218380#comment-17218380
 ] 

Wei-Chiu Chuang commented on HADOOP-17320:
--

Just curious: who has the privilege to push an update to the docker image? I 
searched around but didn't see it documented (wiki or doc).

> Update apache/hadoop:3 to 3.3.0 release
> ---
>
> Key: HADOOP-17320
> URL: https://issues.apache.org/jira/browse/HADOOP-17320
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> {{apache/hadoop:3}} docker image should be updated to the [Hadoop 3.3.0 
> release|https://hadoop.apache.org/release/3.3.0.html].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17319) Update the checkstyle config to ban some guava functions

2020-10-21 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218373#comment-17218373
 ] 

Ahmed Hussein commented on HADOOP-17319:


Thanks [~aajisaka] for following up and filing that jira.
I am not a big fan of {{VisibleForTesting}}. It is an empty annotation that 
could be replaced with an annotation class implemented inside the hadoop code; 
for example, we could call it {{TestingScope}}.
Otherwise, {{VisibleForTesting}} does not bring anything to the table except 
loading a bunch of irrelevant classes such as {{GwtCompatible}}, 
{{Documented}}, etc.
Did anyone consider doing that for the hadoop repository?
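
For reference, a minimal sketch of what such a home-grown annotation could 
look like; the name {{TestingScope}}, the package, and the retention policy 
are hypothetical, not an agreed Hadoop API.
{code:java}
package org.apache.hadoop.classification;  // hypothetical location

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Marks program elements whose visibility was relaxed only so that tests can
 * reach them. A dependency-free stand-in for Guava's @VisibleForTesting.
 */
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.TYPE, ElementType.METHOD,
    ElementType.FIELD, ElementType.CONSTRUCTOR})
public @interface TestingScope {
}
{code}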


> Update the checkstyle config to ban some guava functions
> 
>
> Key: HADOOP-17319
> URL: https://issues.apache.org/jira/browse/HADOOP-17319
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> Some guava functions are banned in HADOOP-17111, HADOOP-17099, and 
> HADOOP-17101 via checkstyle; however, the checkstyle configuration does not 
> work after HADOOP-17288 because the package names have been changed.
> Originally reported by [~ahussein] in HADOOP-17315.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17244) HADOOP-17244. S3A directory delete tombstones dir markers prematurely.

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17244?focusedWorklogId=503259=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503259
 ]

ASF GitHub Bot logged work on HADOOP-17244:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 15:31
Start Date: 21/Oct/20 15:31
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2310:
URL: https://github.com/apache/hadoop/pull/2310#issuecomment-713661838


Java 11 javadoc and one line of whitespace; will fix.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503259)
Time Spent: 3h 40m  (was: 3.5h)

> HADOOP-17244. S3A directory delete tombstones dir markers prematurely.
> --
>
> Key: HADOOP-17244
> URL: https://issues.apache.org/jira/browse/HADOOP-17244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Test failure: 
> {{ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory}}
> This is repeatable on -Dauth runs (we haven't been running them, have we?)
> Either it's from the recent dir marker changes (initial hypothesis) or it's 
> been lurking a while and not been picked up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org





[jira] [Work logged] (HADOOP-17244) HADOOP-17244. S3A directory delete tombstones dir markers prematurely.

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17244?focusedWorklogId=503255=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503255
 ]

ASF GitHub Bot logged work on HADOOP-17244:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 15:26
Start Date: 21/Oct/20 15:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2310:
URL: https://github.com/apache/hadoop/pull/2310#issuecomment-713658457


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 10s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 23s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 58s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 43s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/7/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 18 new + 63 unchanged - 0 fixed = 81 total (was 
63)  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/7/artifact/out/whitespace-eol.txt)
 |  The patch has 1 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  16m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   0m 34s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/7/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 
88 unchanged - 0 fixed = 89 total (was 88)  |
   | +1 :green_heart: |  findbugs  |   3m 43s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 52s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 34s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 188m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/7/artifact/out/Dockerfile
 |
   | GITHUB PR | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2310: HADOOP-17244. S3A directory delete tombstones dir markers prematurely

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2310:
URL: https://github.com/apache/hadoop/pull/2310#issuecomment-713658457


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 10s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 23s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 58s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 43s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/7/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 18 new + 63 unchanged - 0 fixed = 81 total (was 
63)  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/7/artifact/out/whitespace-eol.txt)
 |  The patch has 1 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  16m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   0m 34s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/7/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 
88 unchanged - 0 fixed = 89 total (was 88)  |
   | +1 :green_heart: |  findbugs  |   3m 43s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 52s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 34s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 188m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2310/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2310 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 6bcb7e49801a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7f8ef76c483 |
   

[jira] [Work logged] (HADOOP-17244) HADOOP-17244. S3A directory delete tombstones dir markers prematurely.

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17244?focusedWorklogId=503251=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503251
 ]

ASF GitHub Bot logged work on HADOOP-17244:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 15:19
Start Date: 21/Oct/20 15:19
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2310:
URL: https://github.com/apache/hadoop/pull/2310#issuecomment-713653337


   Note: did a retest with `-Dparallel-tests -DtestsThreadCount=4 -Dmarkers=keep -Ds3guard -Ddynamo -Dfs.s3a.directory.marker.audit=true -Dscale`. All good; didn't even get read() ops with underfull buckets. Home network changes there...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503251)
Time Spent: 3h 20m  (was: 3h 10m)

> HADOOP-17244. S3A directory delete tombstones dir markers prematurely.
> --
>
> Key: HADOOP-17244
> URL: https://issues.apache.org/jira/browse/HADOOP-17244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Test failure: 
> {{ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory}}
> This is repeatable on -Dauth runs (we haven't been running them, have we?)
> Either it's from the recent dir marker changes (initial hypothesis) or it's 
> been lurking a while and not been picked up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2310: HADOOP-17244. S3A directory delete tombstones dir markers prematurely

2020-10-21 Thread GitBox


steveloughran commented on pull request #2310:
URL: https://github.com/apache/hadoop/pull/2310#issuecomment-713653337


   Note: did a retest with `-Dparallel-tests -DtestsThreadCount=4 -Dmarkers=keep -Ds3guard -Ddynamo -Dfs.s3a.directory.marker.audit=true -Dscale`. All good; didn't even get read() ops with underfull buckets. Home network changes there...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-713650687


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  10m 13s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 45s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m 47s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  26m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  26m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  22m 21s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   3m 36s |  |  root: The patch generated 
0 new + 734 unchanged - 1 fixed = 734 total (was 735)  |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   9m 10s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  11m  9s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/7/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 124m 29s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  11m  2s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 370m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestRaceWhenRelogin |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2377 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cfee67e2edb0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 

[jira] [Created] (HADOOP-17320) Update apache/hadoop:3 to 3.3.0 release

2020-10-21 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HADOOP-17320:
-

 Summary: Update apache/hadoop:3 to 3.3.0 release
 Key: HADOOP-17320
 URL: https://issues.apache.org/jira/browse/HADOOP-17320
 Project: Hadoop Common
  Issue Type: Task
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{apache/hadoop:3}} docker image should be updated to the [Hadoop 3.3.0 
release|https://hadoop.apache.org/release/3.3.0.html].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17313) FileSystem.get to support slow-to-instantiate FS clients

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17313?focusedWorklogId=503192=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503192
 ]

ASF GitHub Bot logged work on HADOOP-17313:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 13:36
Start Date: 21/Oct/20 13:36
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-713581331


   Also, I plan to move the close of a discarded instance out of the locked area. It's not needed there, and if an FS is doing anything in close(), we don't want to block the other threads.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503192)
Time Spent: 1h  (was: 50m)

> FileSystem.get to support slow-to-instantiate FS clients
> 
>
> Key: HADOOP-17313
> URL: https://issues.apache.org/jira/browse/HADOOP-17313
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A recurrent problem in processes with many worker threads (hive, spark etc) 
> is that calling `FileSystem.get(URI-to-object-store)` triggers the creation 
> and then discard of many FS clients -all but one for the same URL. As well as 
> the direct performance hit, this can exacerbate locking problems and make 
> instantiation a lot slower than it would otherwise be.
> This has been observed with the S3A and ABFS connectors.
> The ultimate solution here would probably be something more complicated to 
> ensure that only one thread was ever creating a connector for a given URL 
> -the rest would wait for it to be initialized. This would (a) reduce 
> contention & CPU, IO network load, and (b) reduce the time for all but the 
> first thread to resume processing to that of the remaining time in 
> .initialize(). This would also benefit the S3A connector.
> We'd need something like
> # A (per-user) map of filesystems being created 
> # split createFileSystem into two: instantiateFileSystem and 
> initializeFileSystem
> # each thread to instantiate the FS, put() it into the new map
> # If there was one already, discard the old one and wait for the new one to 
> be ready via a call to Object.wait()
> # If there wasn't an entry, call initializeFileSystem() and then, finally, 
> call Object.notifyAll(), and move it from the map of filesystems being 
> initialized to the map of created filesystems
> This sounds too straightforward to be that simple; the troublespots are 
> probably related to race conditions moving entries between the two maps and 
> making sure that no thread will block on the FS being initialized while it 
> has already been initialized (and so wait() will block forever).
> Rather than seek perfection, it may be safest to go for a best-effort 
> optimisation of the number of FS instances created/initialized. That is: it's better 
> to maybe create a few more FS instances than needed than it is to block 
> forever.
> Something is doable here, it's just not quick-and-dirty. Testing will be 
> "fun"; probably best to isolate this new logic somewhere where we can 
> simulate slow starts on one thread with many other threads waiting for it.
> A simpler option would be to have a lock on the construction process: only 
> one FS can be instantiated per user at a time.
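
A minimal sketch of the two-map, wait()/notifyAll() hand-off outlined in the numbered steps above. All names here (FsCacheSketch, pending, created, FsFactory) are hypothetical and this is not the actual FileSystem.get() cache code; it only illustrates the best-effort approach the description proposes:

```java
import java.net.URI;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class FsCacheSketch {

  /** Stand-in for a filesystem client that is slow to initialize. */
  public interface Fs {
    void initialize(URI uri) throws Exception;
    void close();
  }

  /** Cheap constructor-only step; the expensive work happens in initialize(). */
  public interface FsFactory {
    Fs instantiate(URI uri);
  }

  private final Map<URI, Fs> created = new ConcurrentHashMap<>();   // ready to use
  private final Map<URI, Fs> pending = new ConcurrentHashMap<>();   // being initialized

  public Fs get(URI uri, FsFactory factory) throws Exception {
    Fs ready = created.get(uri);
    if (ready != null) {
      return ready;                         // fast path: already initialized
    }
    Fs candidate = factory.instantiate(uri);
    Fs winner = pending.putIfAbsent(uri, candidate);
    if (winner == null) {
      // This thread won the race: initialize, publish, then wake any waiters.
      candidate.initialize(uri);
      created.put(uri, candidate);
      pending.remove(uri);
      synchronized (candidate) {
        candidate.notifyAll();
      }
      return candidate;
    }
    // Another thread is initializing this URI: discard ours and wait for theirs.
    candidate.close();                      // close outside any shared lock
    synchronized (winner) {
      while (!created.containsKey(uri)) {
        // Bounded wait: best effort, never block forever. Real code would also
        // handle the case where the winning thread's initialize() failed.
        winner.wait(1000);
      }
    }
    return created.get(uri);
  }
}
```

Callers would invoke get(uri, factory) from many threads; at most one thread pays the full initialize() cost per URI, while late arrivals may still instantiate and then discard a duplicate, which matches the best-effort goal stated above.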



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2396: HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients.

2020-10-21 Thread GitBox


steveloughran commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-713581331


   Also, I plan to move the close of a discarded instance out of the locked area. It's not needed there, and if an FS is doing anything in close(), we don't want to block the other threads.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17313) FileSystem.get to support slow-to-instantiate FS clients

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17313?focusedWorklogId=503181=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503181
 ]

ASF GitHub Bot logged work on HADOOP-17313:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 13:15
Start Date: 21/Oct/20 13:15
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-713560550


   Yeah, let me get that out of the way before I forget about that cache. I'll use a counter of discarded instances and test off that. I'll plan some tricks to avoid the test being fussy about timing: have a semaphore inside the fake FS I'll be instantiating to block its construction, as sleep() is brittle as well as slowing down test runs.
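
A small sketch of the semaphore-gated fake FS idea mentioned above (class and field names are hypothetical, not the actual test code): the test grabs the permit before spawning the worker threads, so every instance blocks in initialize() until the test releases it, and assertions run against a discard counter rather than wall-clock timing.

```java
import java.net.URI;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Fake slow-to-initialize filesystem whose startup is gated by a semaphore the
// test controls; closing an uninitialized instance counts as a "discard".
// Hypothetical sketch only, not the Hadoop test class itself.
public class BlockingFakeFs {

  /** Tests acquire this permit up front to stall every initialize() call. */
  public static final Semaphore GATE = new Semaphore(1);

  /** Number of instances created and then thrown away before initialization. */
  public static final AtomicInteger DISCARDED = new AtomicInteger();

  private volatile boolean initialized;

  public void initialize(URI uri) throws InterruptedException {
    GATE.acquire();          // blocks until the test releases the permit
    try {
      initialized = true;    // the pretend "slow client setup" would go here
    } finally {
      GATE.release();
    }
  }

  public void close() {
    if (!initialized) {
      DISCARDED.incrementAndGet();   // closed before use, i.e. a duplicate
    }
  }
}
```

A test could then call GATE.acquire(), start N threads that all request the same URI, release the gate once they are queued, join the threads, and assert on DISCARDED.get(), keeping the test deterministic without sleep().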



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503181)
Time Spent: 50m  (was: 40m)

> FileSystem.get to support slow-to-instantiate FS clients
> 
>
> Key: HADOOP-17313
> URL: https://issues.apache.org/jira/browse/HADOOP-17313
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> A recurrent problem in processes with many worker threads (hive, spark etc) 
> is that calling `FileSystem.get(URI-to-object-store)` triggers the creation 
> and then discard of many FS clients -all but one for the same URL. As well as 
> the direct performance hit, this can exacerbate locking problems and make 
> instantiation a lot slower than it would otherwise be.
> This has been observed with the S3A and ABFS connectors.
> The ultimate solution here would probably be something more complicated to 
> ensure that only one thread was ever creating a connector for a given URL 
> -the rest would wait for it to be initialized. This would (a) reduce 
> contention & CPU, IO network load, and (b) reduce the time for all but the 
> first thread to resume processing to that of the remaining time in 
> .initialize(). This would also benefit the S3A connector.
> We'd need something like
> # A (per-user) map of filesystems being created 
> # split createFileSystem into two: instantiateFileSystem and 
> initializeFileSystem
> # each thread to instantiate the FS, put() it into the new map
> # If there was one already, discard the old one and wait for the new one to 
> be ready via a call to Object.wait()
> # If there wasn't an entry, call initializeFileSystem() and then, finally, 
> call Object.notifyAll(), and move it from the map of filesystems being 
> initialized to the map of created filesystems
> This sounds too straightforward to be that simple; the troublespots are 
> probably related to race conditions moving entries between the two maps and 
> making sure that no thread will block on the FS being initialized while it 
> has already been initialized (and so wait() will block forever).
> Rather than seek perfection, it may be safest to go for a best-effort 
> optimisation of the number of FS instances created/initialized. That is: it's better 
> to maybe create a few more FS instances than needed than it is to block 
> forever.
> Something is doable here, it's just not quick-and-dirty. Testing will be 
> "fun"; probably best to isolate this new logic somewhere where we can 
> simulate slow starts on one thread with many other threads waiting for it.
> A simpler option would be to have a lock on the construction process: only 
> one FS can be instantiated per user at a time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2396: HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients.

2020-10-21 Thread GitBox


steveloughran commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-713560550


   Yeah, let me get that out of the way before I forget about that cache. I'll use a counter of discarded instances and test off that. I'll plan some tricks to avoid the test being fussy about timing: have a semaphore inside the fake FS I'll be instantiating to block its construction, as sleep() is brittle as well as slowing down test runs.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17244) HADOOP-17244. S3A directory delete tombstones dir markers prematurely.

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17244?focusedWorklogId=503179=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503179
 ]

ASF GitHub Bot logged work on HADOOP-17244:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 13:06
Start Date: 21/Oct/20 13:06
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2310:
URL: https://github.com/apache/hadoop/pull/2310#issuecomment-713555144


   Thanks. Resubmitting the final rebased PR to Yetus; conflicts with the guava shading imports are going to be our future.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503179)
Time Spent: 3h 10m  (was: 3h)

> HADOOP-17244. S3A directory delete tombstones dir markers prematurely.
> --
>
> Key: HADOOP-17244
> URL: https://issues.apache.org/jira/browse/HADOOP-17244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Test failure: 
> {{ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory}}
> This is repeatable on -Dauth runs (we haven't been running them, have we?)
> Either it's from the recent dir marker changes (initial hypothesis) or it's 
> been lurking a while and not been picked up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2310: HADOOP-17244. S3A directory delete tombstones dir markers prematurely

2020-10-21 Thread GitBox


steveloughran commented on pull request #2310:
URL: https://github.com/apache/hadoop/pull/2310#issuecomment-713555144


   Thanks. Resubmitting the final rebased PR to Yetus; conflicts with the guava shading imports are going to be our future.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17305) ITestCustomSigner fails with gcs s3 compatible endpoint.

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17305?focusedWorklogId=503162=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503162
 ]

ASF GitHub Bot logged work on HADOOP-17305:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 12:02
Start Date: 21/Oct/20 12:02
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2395:
URL: https://github.com/apache/hadoop/pull/2395#issuecomment-713518932


   Oops, merged before Yetus. Let's see what happens and whether I should revert. Yetus isn't going to run this test anyway.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503162)
Time Spent: 50m  (was: 40m)

> ITestCustomSigner fails with gcs s3 compatible endpoint. 
> -
>
> Key: HADOOP-17305
> URL: https://issues.apache.org/jira/browse/HADOOP-17305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> CC [~sseth] [~ste...@apache.org]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2395: HADOOP-17305 Fix ITestCustomSigner to work with s3 compatible endpoints

2020-10-21 Thread GitBox


steveloughran commented on pull request #2395:
URL: https://github.com/apache/hadoop/pull/2395#issuecomment-713518932


   Oops, merged before Yetus. Let's see what happens and whether I should revert. Yetus isn't going to run this test anyway.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17305) ITestCustomSigner fails with gcs s3 compatible endpoint.

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17305?focusedWorklogId=503161=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503161
 ]

ASF GitHub Bot logged work on HADOOP-17305:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 12:01
Start Date: 21/Oct/20 12:01
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #2395:
URL: https://github.com/apache/hadoop/pull/2395


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503161)
Time Spent: 40m  (was: 0.5h)

> ITestCustomSigner fails with gcs s3 compatible endpoint. 
> -
>
> Key: HADOOP-17305
> URL: https://issues.apache.org/jira/browse/HADOOP-17305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CC [~sseth] [~ste...@apache.org]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #2395: HADOOP-17305 Fix ITestCustomSigner to work with s3 compatible endpoints

2020-10-21 Thread GitBox


steveloughran merged pull request #2395:
URL: https://github.com/apache/hadoop/pull/2395


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17305) ITestCustomSigner fails with gcs s3 compatible endpoint.

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17305?focusedWorklogId=503126=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503126
 ]

ASF GitHub Bot logged work on HADOOP-17305:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 10:15
Start Date: 21/Oct/20 10:15
Worklog Time Spent: 10m 
  Work Description: bgaborg commented on pull request #2395:
URL: https://github.com/apache/hadoop/pull/2395#issuecomment-713464790


   Change makes sense to me; we don't need to have this specific client created here.
   +1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503126)
Time Spent: 0.5h  (was: 20m)

> ITestCustomSigner fails with gcs s3 compatible endpoint. 
> -
>
> Key: HADOOP-17305
> URL: https://issues.apache.org/jira/browse/HADOOP-17305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> CC [~sseth] [~ste...@apache.org]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on pull request #2395: HADOOP-17305 Fix ITestCustomSigner to work with s3 compatible endpoints

2020-10-21 Thread GitBox


bgaborg commented on pull request #2395:
URL: https://github.com/apache/hadoop/pull/2395#issuecomment-713464790


   Change makes sense to me; we don't need to have this specific client created here.
   +1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17305) ITestCustomSigner fails with gcs s3 compatible endpoint.

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17305?focusedWorklogId=503118=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503118
 ]

ASF GitHub Bot logged work on HADOOP-17305:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 10:11
Start Date: 21/Oct/20 10:11
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on pull request #2395:
URL: https://github.com/apache/hadoop/pull/2395#issuecomment-713462459


   CC @steveloughran  @sidseth 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503118)
Time Spent: 20m  (was: 10m)

> ITestCustomSigner fails with gcs s3 compatible endpoint. 
> -
>
> Key: HADOOP-17305
> URL: https://issues.apache.org/jira/browse/HADOOP-17305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CC [~sseth] [~ste...@apache.org]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on pull request #2395: HADOOP-17305 Fix ITestCustomSigner to work with s3 compatible endpoints

2020-10-21 Thread GitBox


mukund-thakur commented on pull request #2395:
URL: https://github.com/apache/hadoop/pull/2395#issuecomment-713462459


   CC @steveloughran  @sidseth 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16649) Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16649?focusedWorklogId=503116=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503116
 ]

ASF GitHub Bot logged work on HADOOP-16649:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 10:10
Start Date: 21/Oct/20 10:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2385:
URL: https://github.com/apache/hadoop/pull/2385#issuecomment-713461538


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  5s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 17s |  |  The patch generated 0 new 
+ 104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 32s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  70m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2385 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux e8f88d772cb1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 88a9f42f320 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/5/testReport/ |
   | Max. process+thread count | 424 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/5/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503116)
Time Spent: 1h  (was: 50m)

> Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will 
> ignore hadoop-azure
> -
>
> Key: HADOOP-16649
> URL: https://issues.apache.org/jira/browse/HADOOP-16649
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 3.2.1
> Environment: Shell, but it also trickles down into all code using 
> `FileSystem` 
>Reporter: Tom Lous
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
> HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get 
> ignored.
> eg setting this:
> HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"
>  
>  with debug on:
>  
> DEBUG: Profiles: importing 
> /opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
> DEBUG: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2385: HADOOP-16649. hadoop_add_param function : change regexp test by iterative equality test

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2385:
URL: https://github.com/apache/hadoop/pull/2385#issuecomment-713461538


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  5s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 17s |  |  The patch generated 0 new 
+ 104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 32s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  70m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2385 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux e8f88d772cb1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 88a9f42f320 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/5/testReport/ |
   | Max. process+thread count | 424 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/5/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17313) FileSystem.get to support slow-to-instantiate FS clients

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17313?focusedWorklogId=503087=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503087
 ]

ASF GitHub Bot logged work on HADOOP-17313:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 09:22
Start Date: 21/Oct/20 09:22
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-713434157


   Looks good. Just the pending test.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503087)
Time Spent: 40m  (was: 0.5h)

> FileSystem.get to support slow-to-instantiate FS clients
> 
>
> Key: HADOOP-17313
> URL: https://issues.apache.org/jira/browse/HADOOP-17313
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> A recurrent problem in processes with many worker threads (hive, spark etc) 
> is that calling `FileSystem.get(URI-to-object-store)` triggers the creation 
> and then discard of many FS clients -all but one for the same URL. As well as 
> the direct performance hit, this can exacerbate locking problems and make 
> instantiation a lot slower than it would otherwise be.
> This has been observed with the S3A and ABFS connectors.
> The ultimate solution here would probably be something more complicated to 
> ensure that only one thread was ever creating a connector for a given URL 
> -the rest would wait for it to be initialized. This would (a) reduce 
> contention & CPU, IO network load, and (b) reduce the time for all but the 
> first thread to resume processing to that of the remaining time in 
> .initialize(). This would also benefit the S3A connector.
> We'd need something like
> # A (per-user) map of filesystems being created 
> # split createFileSystem into two: instantiateFileSystem and 
> initializeFileSystem
> # each thread to instantiate the FS, put() it into the new map
> # If there was one already, discard the old one and wait for the new one to 
> be ready via a call to Object.wait()
> # If there wasn't an entry, call initializeFileSystem() and then, finally, 
> call Object.notifyAll(), and move it from the map of filesystems being 
> initialized to the map of created filesystems
> This sounds too straightforward to be that simple; the troublespots are 
> probably related to race conditions moving entries between the two maps and 
> making sure that no thread will block on the FS being initialized while it 
> has already been initialized (and so wait() will block forever).
> Rather than seek perfection, it may be safest to go for a best-effort 
> optimisation of the number of FS instances created/initialized. That is: it's better 
> to maybe create a few more FS instances than needed than it is to block 
> forever.
> Something is doable here, it's just not quick-and-dirty. Testing will be 
> "fun"; probably best to isolate this new logic somewhere where we can 
> simulate slow starts on one thread with many other threads waiting for it.
> A simpler option would be to have a lock on the construction process: only 
> one FS can be instantiated per user at a time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on pull request #2396: HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients.

2020-10-21 Thread GitBox


mukund-thakur commented on pull request #2396:
URL: https://github.com/apache/hadoop/pull/2396#issuecomment-713434157


   Looks good. Just the pending test.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16649) Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16649?focusedWorklogId=503055=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503055
 ]

ASF GitHub Bot logged work on HADOOP-16649:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 08:01
Start Date: 21/Oct/20 08:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2385:
URL: https://github.com/apache/hadoop/pull/2385#issuecomment-713386317


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | -1 :x: |  shellcheck  |   0m  6s | 
[/diff-patch-shellcheck.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/4/artifact/out/diff-patch-shellcheck.txt)
 |  The patch generated 1 new + 20 unchanged - 0 fixed = 21 total (was 20)  |
   | +1 :green_heart: |  shelldocs  |   0m 15s |  |  The patch generated 0 new 
+ 104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 15s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  71m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2385 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux b7d5a4f95d92 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 88a9f42f320 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/4/testReport/ |
   | Max. process+thread count | 439 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/4/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503055)
Time Spent: 50m  (was: 40m)

> Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will 
> ignore hadoop-azure
> -
>
> Key: HADOOP-16649
> URL: https://issues.apache.org/jira/browse/HADOOP-16649
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 3.2.1
> Environment: Shell, but it also trickles down into all code using 
> `FileSystem` 
>Reporter: Tom Lous
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
> HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get 
> ignored.
> eg setting this:
> HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"
>  
>  with debug on:
>  
> DEBUG: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2385: HADOOP-16649. hadoop_add_param function : change regexp test by iterative equality test

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2385:
URL: https://github.com/apache/hadoop/pull/2385#issuecomment-713386317


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | -1 :x: |  shellcheck  |   0m  6s | 
[/diff-patch-shellcheck.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/4/artifact/out/diff-patch-shellcheck.txt)
 |  The patch generated 1 new + 20 unchanged - 0 fixed = 21 total (was 20)  |
   | +1 :green_heart: |  shelldocs  |   0m 15s |  |  The patch generated 0 new 
+ 104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 15s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  71m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2385 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux b7d5a4f95d92 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 88a9f42f320 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/4/testReport/ |
   | Max. process+thread count | 439 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2385/4/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17308) WASB : PageBlobOutputStream succeeding hflush even when underlying flush to storage failed

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17308?focusedWorklogId=503054=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503054
 ]

ASF GitHub Bot logged work on HADOOP-17308:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 07:54
Start Date: 21/Oct/20 07:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2392:
URL: https://github.com/apache/hadoop/pull/2392#issuecomment-713382428


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  42m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 57s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 59s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 16s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2392/2/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 126m  6s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2392/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2392 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f4e900b82633 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 88a9f42f320 |
   | Default Java | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2392: HADOOP-17308. WASB : PageBlobOutputStream succeeding flush even when …

2020-10-21 Thread GitBox


hadoop-yetus commented on pull request #2392:
URL: https://github.com/apache/hadoop/pull/2392#issuecomment-713382428


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  42m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 57s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 59s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 16s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2392/2/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 126m  6s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2392/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2392 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f4e900b82633 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 88a9f42f320 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2392/2/testReport/ |
   | Max. process+thread count | 308 (vs. ulimit of 5500) |
   | modules | 

[jira] [Commented] (HADOOP-17315) Use shaded guava in ClientCache.java

2020-10-21 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218157#comment-17218157
 ] 

Akira Ajisaka commented on HADOOP-17315:


Filed HADOOP-17319 to update the checkstyle config.

> Use shaded guava in ClientCache.java
> 
>
> Key: HADOOP-17315
> URL: https://issues.apache.org/jira/browse/HADOOP-17315
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> After HADOOP-17288, we should use shaded guava.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17319) Update the checkstyle config to ban some guava functions

2020-10-21 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17319:
--

 Summary: Update the checkstyle config to ban some guava functions
 Key: HADOOP-17319
 URL: https://issues.apache.org/jira/browse/HADOOP-17319
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


Some guava functions were banned via checkstyle in HADOOP-17111, HADOOP-17099, 
and HADOOP-17101; however, that checkstyle configuration no longer works after 
HADOOP-17288 because the package names have changed.

Originally reported by [~ahussein] in HADOOP-17315.
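
An illustrative Java fragment of the before/after shapes the updated rule has 
to cover; the relocation prefix is assumed from the hadoop-thirdparty 
shaded-guava artifact, and the snippet is not taken from the Hadoop codebase:

```java
/**
 * Illustrative only: the same Guava Optional usage before and after the
 * HADOOP-17288 shading. A checkstyle ban keyed on the com.google.common
 * package no longer matches the relocated form, which is why the config
 * needs updating.
 */
public class ShadedGuavaBanExample {
  public static void main(String[] args) {
    // Pre-shading form the existing bans were written against:
    //   com.google.common.base.Optional<String> name =
    //       com.google.common.base.Optional.of("x");
    // Relocated form (assumed prefix org.apache.hadoop.thirdparty), which
    // the current rule does not flag:
    org.apache.hadoop.thirdparty.com.google.common.base.Optional<String> name =
        org.apache.hadoop.thirdparty.com.google.common.base.Optional.of("x");
    System.out.println(name.get());
  }
}
```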



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17315) Use shaded guava in ClientCache.java

2020-10-21 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218153#comment-17218153
 ] 

Akira Ajisaka edited comment on HADOOP-17315 at 10/21/20, 7:26 AM:
---

Thanks [~ahussein] for your comment. I'll create an additional PR.


was (Author: ajisakaa):
Thanks [~ahussein] for your comment. I'll revert this and update the PR.

> Use shaded guava in ClientCache.java
> 
>
> Key: HADOOP-17315
> URL: https://issues.apache.org/jira/browse/HADOOP-17315
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> After HADOOP-17288, we should use shaded guava.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17315) Use shaded guava in ClientCache.java

2020-10-21 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218153#comment-17218153
 ] 

Akira Ajisaka commented on HADOOP-17315:


Thanks [~ahussein] for your comment. I'll revert this and update the PR.

> Use shaded guava in ClientCache.java
> 
>
> Key: HADOOP-17315
> URL: https://issues.apache.org/jira/browse/HADOOP-17315
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> After HADOOP-17288, we should use shaded guava.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17201) Spark job with s3acommitter stuck at the last stage

2020-10-21 Thread James Yu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218140#comment-17218140
 ] 

James Yu commented on HADOOP-17201:
---

[~ste...@apache.org] Sorry, it may not be easy for us to collect logs because 
the issue doesn't always happen. We are still trying to find a way to trigger 
it reliably. Also, to your question, we don't know whether the last stuck task 
would eventually finish successfully or fail, because we don't have the 
patience or money to let it run to completion.

Regarding the bulk delete call failure, I do have a suggestion (learned from 
our AWS representative): when the bulk delete fails and throws a 
MultiObjectDeleteException, collect all the failed keys and delete them 
individually as a second chance. This pattern is really useful and has been 
working for us. I think S3AFileSystem should implement the same logic as a 
configurable option, in addition to the current behavior, which fails the 
whole multi-object delete if any object fails to delete. That should make the 
deleteObjects() call much more resilient to intermittent S3 delete issues.
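
A minimal sketch of that second-chance pattern, assuming the AWS SDK v1 client 
that S3A builds on; the helper class and method names are hypothetical and are 
not part of S3AFileSystem:

```java
import java.util.List;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.MultiObjectDeleteException;

/**
 * Hypothetical helper illustrating the suggested pattern: try the bulk
 * delete first, and if it partially fails, retry only the failed keys
 * one by one instead of failing the whole operation.
 */
public class BulkDeleteWithFallback {

  private final AmazonS3 s3;
  private final String bucket;

  public BulkDeleteWithFallback(AmazonS3 s3, String bucket) {
    this.s3 = s3;
    this.bucket = bucket;
  }

  /** Bulk delete with a per-key second chance for partial failures. */
  public void delete(DeleteObjectsRequest request) {
    try {
      s3.deleteObjects(request);
    } catch (MultiObjectDeleteException e) {
      // Only the keys reported as failed get the individual retry.
      List<MultiObjectDeleteException.DeleteError> errors = e.getErrors();
      for (MultiObjectDeleteException.DeleteError error : errors) {
        // A single-object delete may still fail; the caller decides
        // whether that is fatal.
        s3.deleteObject(bucket, error.getKey());
      }
    }
  }
}
```

In S3A this would presumably sit behind a configuration switch so the current 
fail-fast behavior stays the default.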

> Spark job with s3acommitter stuck at the last stage
> ---
>
> Key: HADOOP-17201
> URL: https://issues.apache.org/jira/browse/HADOOP-17201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.1
> Environment: we are on spark 2.4.5/hadoop 3.2.1 with s3a committer.
> spark.hadoop.fs.s3a.committer.magic.enabled: 'true'
> spark.hadoop.fs.s3a.committer.name: magic
>Reporter: Dyno
>Priority: Major
> Attachments: exec-120.log, exec-125.log, exec-25.log, exec-31.log, 
> exec-36.log, exec-44.log, exec-5.log, exec-64.log, exec-7.log
>
>
> usually our spark job took 1 hour or 2 to finish, occasionally it runs for 
> more than 3 hour and then we know it's stuck and usually the executor has 
> stack like this
> {{
> "Executor task launch worker for task 78620" #265 daemon prio=5 os_prio=0 
> tid=0x7f73e0005000 nid=0x12d waiting on condition [0x7f74cb291000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:349)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:285)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:1457)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:1717)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:2785)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2751)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$finalizeMultipartUpload$1(WriteOperationHelper.java:238)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper$$Lambda$210/1059071691.execute(Unknown
>  Source)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
>   at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/586859139.execute(Unknown Source)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.finalizeMultipartUpload(WriteOperationHelper.java:226)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.completeMPUwithRetries(WriteOperationHelper.java:271)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.complete(S3ABlockOutputStream.java:660)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$200(S3ABlockOutputStream.java:521)
>   at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:385)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
>   at 
> org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685)
>   at 
> org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
>   at 
> org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
>   at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
>   at 
> 

[jira] [Work logged] (HADOOP-17289) ABFS: Testcase failure ITestAbfsNetworkStatistics#testAbfsHttpResponseStatistics

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17289?focusedWorklogId=503025=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503025
 ]

ASF GitHub Bot logged work on HADOOP-17289:
---

Author: ASF GitHub Bot
Created on: 21/Oct/20 06:00
Start Date: 21/Oct/20 06:00
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2343:
URL: https://github.com/apache/hadoop/pull/2343#discussion_r509008689



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
##
@@ -223,13 +223,7 @@ public void testAbfsHttpResponseStatistics() throws 
IOException {
*
* bytes_received - This should be equal to bytes sent earlier.
*/
-  long extraCalls = 0;

Review comment:
   getResponsesBeforeTest is the total number of calls made before the open 
and read operations. The removed part was buggy: the calculation counted the 
extra calls a second time, even though they are already included in 
getResponsesBeforeTest.
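
A tiny illustration of the corrected accounting, with hypothetical names and 
values; it only shows that the extra calls must not be added on top of a 
baseline that already contains them:

```java
/**
 * Hypothetical illustration of the double-counting bug described above:
 * the baseline captured before the test body already includes any extra
 * setup calls, so adding them again inflates the expected total.
 */
final class ResponseCountExample {

  /** Correct expectation: baseline plus the calls issued by the test body. */
  static long expectedResponses(long responsesBeforeTest, long callsMadeByTest) {
    return responsesBeforeTest + callsMadeByTest;
  }

  public static void main(String[] args) {
    long baseline = 7;   // already contains any "extra" setup calls
    long testCalls = 2;  // e.g. one open() and one read()
    // Prints 9; adding the extra calls again on top of the baseline
    // would overshoot the real count.
    System.out.println(expectedResponses(baseline, testCalls));
  }
}
```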





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 503025)
Time Spent: 40m  (was: 0.5h)

> ABFS: Testcase failure 
> ITestAbfsNetworkStatistics#testAbfsHttpResponseStatistics
> 
>
> Key: HADOOP-17289
> URL: https://issues.apache.org/jira/browse/HADOOP-17289
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The test case fails when fs.azure.test.appendblob.enabled is set to true.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #2343: HADOOP-17289 ABFS: Fixing the test case ITestAbfsNetworkStatistics#testAbfsHttpResponseStatistics

2020-10-21 Thread GitBox


bilaharith commented on a change in pull request #2343:
URL: https://github.com/apache/hadoop/pull/2343#discussion_r509008689



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
##
@@ -223,13 +223,7 @@ public void testAbfsHttpResponseStatistics() throws 
IOException {
*
* bytes_received - This should be equal to bytes sent earlier.
*/
-  long extraCalls = 0;

Review comment:
   getResponsesBeforeTest is the total number of calls made before the open 
and read operations. The removed part was buggy: the calculation counted the 
extra calls a second time, even though they are already included in 
getResponsesBeforeTest.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org