[GitHub] [hadoop] jojochuang merged pull request #3255: HDFS-16149. Improve the parameter annotation in FairCallQueue#priorityLevels.

2021-08-03 Thread GitBox


jojochuang merged pull request #3255:
URL: https://github.com/apache/hadoop/pull/3255


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3167: HADOOP-17787. Refactor fetching of credentials in Jenkins

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3167:
URL: https://github.com/apache/hadoop/pull/3167#issuecomment-891670441


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   2m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  cc  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  cc  |   2m 44s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  91m 38s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 190m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3167 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs compile 
cc mvnsite javac unit golang |
   | uname | Linux d59d1f49858d 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / dfc353a568a0eb9c9f25cd8a91b13a9c1ac4061e |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/5/testReport/ |
   | Max. process+thread count | 519 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/5/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Work logged] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17787?focusedWorklogId=632766&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632766
 ]

ASF GitHub Bot logged work on HADOOP-17787:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 09:03
Start Date: 03/Aug/21 09:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3167:
URL: https://github.com/apache/hadoop/pull/3167#issuecomment-891670441



[GitHub] [hadoop] hadoop-yetus commented on pull request #3238: HADOOP-17816. Run optional CI for changes in C

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3238:
URL: https://github.com/apache/hadoop/pull/3238#issuecomment-891683671


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   2m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  cc  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  cc  |   2m 48s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 48s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 105m 10s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 203m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3238 |
   | Optional Tests | dupname asflicense mvnsite unit codespell shellcheck 
shelldocs compile cc javac golang |
   | uname | Linux dd906117a1fb 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b9b39bd33d17b6edd61ecfbfb362c26cb1368a4a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/3/testReport/ |
   | Max. process+thread count | 553 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/3/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Work logged] (HADOOP-17816) Run optional CI for changes in C

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17816?focusedWorklogId=632778&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632778
 ]

ASF GitHub Bot logged work on HADOOP-17816:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 09:20
Start Date: 03/Aug/21 09:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3238:
URL: https://github.com/apache/hadoop/pull/3238#issuecomment-891683671



[GitHub] [hadoop] hadoop-yetus commented on pull request #3166: HADOOP-17786. Parallelize stages in Jenkins

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3166:
URL: https://github.com/apache/hadoop/pull/3166#issuecomment-891697427


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  46m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  66m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  2s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m  2s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m  2s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  84m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 224m 50s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed CTEST tests | test_test_libhdfs_ops_hdfs_static |
   |   | test_test_libhdfs_threaded_hdfs_static |
   |   | test_test_libhdfs_zerocopy_hdfs_static |
   |   | test_test_native_mini_dfs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3166 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs compile 
cc mvnsite javac unit golang |
   | uname | Linux c9fc6c9774f1 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 26529d07f8139a392e246f7068dea98ad13a93fc |
   | Default Java | Red Hat, Inc.-1.8.0_302-b08 |
   | CTEST | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/6/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/6/testReport/ |
   | Max. process+thread count | 601 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/6/console |
   | versions | git=2.27.0 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Work logged] (HADOOP-17786) Parallelize stages in Jenkins

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17786?focusedWorklogId=632803&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632803
 ]

ASF GitHub Bot logged work on HADOOP-17786:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 09:40
Start Date: 03/Aug/21 09:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3166:
URL: https://github.com/apache/hadoop/pull/3166#issuecomment-891697427




Issue Time Tracking
---

Worklog Id: (was: 632803)
Time Spent: 1h 10m  (was: 1h)

> Parallelize stages in Jenkins
> --

[GitHub] [hadoop] hadoop-yetus commented on pull request #3209: HDFS-16129. Fixing the signature secret file misusage in HttpFS.

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3209:
URL: https://github.com/apache/hadoop/pull/3209#issuecomment-891708421


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 19s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 48s |  |  root: The patch generated 
0 new + 97 unchanged - 2 fixed = 97 total (was 99)  |
   | +1 :green_heart: |  mvnsite  |   3m  1s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   1m 12s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/6/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  15m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 50s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 38s |  |  hadoop-kms in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   9m 14s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 216m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-httpfs |
   |  |  
org.apache.hadoop.fs.http.server.HttpFSAuthenticationFilter.CONF_PREFIXES 
should be package protected  At HttpFSAuthenticationFilter.java: At 
HttpFSAuthenticationFilter.java:[line 54] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3209 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 5703e8d2d241 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 56a259a893114a7ae8ef8fce4de45c239f605387 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/j
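
For context on the SpotBugs warning flagged in the report above, here is a
minimal sketch of the visibility change it asks for. The class body is
abbreviated and the prefix value is a placeholder, not the real HttpFS
configuration key.

```java
// Sketch of the suggested SpotBugs fix; the array value is a placeholder.
public class HttpFSAuthenticationFilter {

    // Flagged pattern: a public static final array is not truly constant,
    // since code in any package can still overwrite its elements.
    // public static final String[] CONF_PREFIXES = { "httpfs.authentication." };

    // Suggested fix: package-private visibility keeps the array reachable
    // for same-package tests while hiding it from external callers.
    static final String[] CONF_PREFIXES = { "httpfs.authentication." };
}
```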

[GitHub] [hadoop] steveloughran commented on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-08-03 Thread GitBox


steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-891730763


   have I just broken everything?





[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=632837&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632837
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 10:28
Start Date: 03/Aug/21 10:28
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-891730763


   have I just broken everything?




Issue Time Tracking
---

Worklog Id: (was: 632837)
Time Spent: 2h 40m  (was: 2.5h)

> fs.s3a.acl.default not working after S3A Audit feature added
> 
>
> Key: HADOOP-17822
> URL: https://issues.apache.org/jira/browse/HADOOP-17822
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.2
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> After HADOOP-17511 the fs.s3a.acl.default property isn't being passed 
> through to S3 PUT/COPY requests.
> The new RequestFactory is given the ACL values from the S3A FS instance, 
> but the factory is created before the ACL settings are loaded from the 
> configuration.
> Fix, and ideally test (if getXAttr lets us see this now).
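
A minimal, self-contained sketch of the ordering bug described above, using
illustrative names rather than the real S3A classes: a factory that snapshots
the ACL in its constructor permanently holds null when it is built before the
configuration value is read.

```java
import java.util.HashMap;
import java.util.Map;

public class AclOrderingSketch {

    /** Stand-in for a request factory that captures the ACL at construction. */
    static class RequestFactory {
        private final String cannedAcl;
        RequestFactory(String cannedAcl) { this.cannedAcl = cannedAcl; }
        String aclForPutOrCopy() { return cannedAcl; }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.s3a.acl.default", "bucket-owner-full-control");

        // Buggy ordering: the factory is built before the ACL is loaded,
        // so PUT/COPY requests are issued with no ACL at all.
        String acl = null;
        RequestFactory broken = new RequestFactory(acl);
        acl = conf.get("fs.s3a.acl.default"); // too late: the factory never sees this
        System.out.println("broken factory ACL = " + broken.aclForPutOrCopy());

        // Fixed ordering: load the configuration first, then build the factory.
        RequestFactory fixed = new RequestFactory(conf.get("fs.s3a.acl.default"));
        System.out.println("fixed factory ACL = " + fixed.aclForPutOrCopy());
    }
}
```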



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17832) Replacing native lib with their Java wrappers

2021-08-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392229#comment-17392229
 ] 

Steve Loughran commented on HADOOP-17832:
-

We'd love this for the shell stuff/winutils, which is only needed to give the 
illusion of POSIX filesystem permissions on Windows NTFS. Now that YARN no 
longer runs on Windows, that *is not needed*.

> Replacing native lib with their Java wrappers
> -
>
> Key: HADOOP-17832
> URL: https://issues.apache.org/jira/browse/HADOOP-17832
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>
> This is an umbrella ticket covering all work for replacing native libs with 
> their Java wrappers.






[GitHub] [hadoop] hadoop-yetus commented on pull request #3217: HADOOP-17542. IPV6 support in Netutils#createSocketAddress

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3217:
URL: https://github.com/apache/hadoop/pull/3217#issuecomment-891737810


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ HADOOP-17800 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  5s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  compile  |  22m  2s |  |  HADOOP-17800 passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  20m 21s |  |  HADOOP-17800 passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  HADOOP-17800 passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  HADOOP-17800 passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  shadedclient  |  15m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  7s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/5/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 6 new + 90 
unchanged - 0 fixed = 96 total (was 90)  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  9s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 183m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3217 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 072d6edd0152 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HADOOP-17800 / 419684cc162f2dfbf4045457263b1e91f6a068f8 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/5/testReport/ |
   | Max. process+thread count | 2184 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Work logged] (HADOOP-17542) IPV6 support in Netutils#createSocketAddress

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17542?focusedWorklogId=632838&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632838
 ]

ASF GitHub Bot logged work on HADOOP-17542:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 10:39
Start Date: 03/Aug/21 10:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3217:
URL: https://github.com/apache/hadoop/pull/3217#issuecomment-891737810



[jira] [Created] (HADOOP-17833) createFile() under a magic path to skip all probes for file/dir at end of path

2021-08-03 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17833:
---

 Summary: createFile() under a magic path to skip all probes for 
file/dir at end of path
 Key: HADOOP-17833
 URL: https://issues.apache.org/jira/browse/HADOOP-17833
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.3.1
Reporter: Steve Loughran


Magic committer tasks can be slow because every file created with 
overwrite=false triggers a HEAD (verify there's no file) and a LIST (verify 
there's no dir). And because of delayed manifestation, it may not behave as 
expected anyway.

ParquetOutputFormat is one example of a library which does this.

We could fix Parquet to use overwrite=true, but (a) there may be surprises in 
other uses, (b) it'd still leave the LIST, and (c) it would do nothing for 
other formats which call createFile() the same way.

Proposed: createFile() under a magic path skips all probes for a file/dir at 
the end of the path.

Only a single task attempt will be writing to that directory, and it should 
know what it is doing. Conflicting file names and parts across tasks wouldn't 
be picked up at this point anyway, and none of the committers ever check for 
this: you get the last file manifested (s3a) or renamed (file).

If we skip the checks, we save two HTTP requests per file.
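
A sketch of the proposed probe skipping, under the assumption that eligible
paths can be recognised by the magic committer's "__magic" directory marker;
the helper names here are hypothetical, not the actual S3A createFile()
internals.

```java
public class MagicCreateSketch {

    static void createFile(String path, boolean overwrite) {
        // Task attempts of the magic committer write under a "__magic" directory.
        boolean underMagicPath = path.contains("/__magic/");
        if (!overwrite && !underMagicPath) {
            headObject(path);  // HTTP request 1: is there already a file here?
            listPrefix(path);  // HTTP request 2: is there already a directory?
        }
        // Under a magic path both probes are skipped: a single task attempt
        // owns the directory, so the two HTTP requests per file go away.
        System.out.println("opened " + path);
    }

    static void headObject(String path) { System.out.println("HEAD " + path); }
    static void listPrefix(String path) { System.out.println("LIST " + path + "/"); }

    public static void main(String[] args) {
        createFile("s3a://bucket/out/__magic/job_1/task_1/part-0000", false); // no probes
        createFile("s3a://bucket/out/part-0000", false);                      // HEAD + LIST
    }
}
```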







[GitHub] [hadoop] brumi1024 commented on pull request #3248: YARN-10874. Refactor NM ContainerLaunch#getEnvDependencies's unit tests

2021-08-03 Thread GitBox


brumi1024 commented on pull request #3248:
URL: https://github.com/apache/hadoop/pull/3248#issuecomment-891751132


   Thanks @tomicooler for the patch. Re-adding my non-binding +1 from 
YARN-10355, and pinging @shuzirra and @szilard-nemeth to discuss whether 
including JUnit Jupiter is OK or not.





[GitHub] [hadoop] hadoop-yetus commented on pull request #3217: HADOOP-17542. IPV6 support in Netutils#createSocketAddress

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3217:
URL: https://github.com/apache/hadoop/pull/3217#issuecomment-891752500


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ HADOOP-17800 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m  5s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  compile  |  27m 37s |  |  HADOOP-17800 passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  22m 12s |  |  HADOOP-17800 passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  HADOOP-17800 passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  HADOOP-17800 passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 51s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  shadedclient  |  19m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  26m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  26m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  22m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/4/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 7 new + 90 
unchanged - 0 fixed = 97 total (was 90)  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 28s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 211m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3217 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux bae767e7cb81 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HADOOP-17800 / 1d62ab2842745bb81a806bd6fefecbe7361ab8c3 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/4/testReport/ |
   | Max. process+thread count | 1736 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Work logged] (HADOOP-17542) IPV6 support in Netutils#createSocketAddress

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17542?focusedWorklogId=632843&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632843
 ]

ASF GitHub Bot logged work on HADOOP-17542:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 11:03
Start Date: 03/Aug/21 11:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3217:
URL: https://github.com/apache/hadoop/pull/3217#issuecomment-891752500


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ HADOOP-17800 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m  5s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  compile  |  27m 37s |  |  HADOOP-17800 passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  22m 12s |  |  HADOOP-17800 passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  HADOOP-17800 passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  HADOOP-17800 passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 51s |  |  HADOOP-17800 passed  |
   | +1 :green_heart: |  shadedclient  |  19m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  26m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  26m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  22m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/4/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 7 new + 90 
unchanged - 0 fixed = 97 total (was 90)  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 28s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 211m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3217 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux bae767e7cb81 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HADOOP-17800 / 1d62ab2842745bb81a806bd6fefecbe7361ab8c3 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/4/testReport/ |
   | Max. process+thread count | 1736 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3217/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#issuecomment-891783059


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 345m 29s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/24/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 439m 47s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/24/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3235 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux be6f0ffbf5bd 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d5d2db768dae0f44972b249942e71030bd6dfc6f |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/24/testReport/ |
   | Max. process+thread count | 1992 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/24/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

[GitHub] [hadoop] steveloughran closed pull request #3251: HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store

2021-08-03 Thread GitBox


steveloughran closed pull request #3251:
URL: https://github.com/apache/hadoop/pull/3251


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3251: HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store

2021-08-03 Thread GitBox


steveloughran commented on pull request #3251:
URL: https://github.com/apache/hadoop/pull/3251#issuecomment-891786277


   thanks; I did the same locally; good to see it worked on your setup too.
   
   Closing this as your original patch has already been cherrypicked in. 
   
   thank you for finding this and fixing it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=632858&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632858
 ]

ASF GitHub Bot logged work on HADOOP-17812:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 11:57
Start Date: 03/Aug/21 11:57
Worklog Time Spent: 10m 
  Work Description: steveloughran closed pull request #3251:
URL: https://github.com/apache/hadoop/pull/3251


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 632858)
Time Spent: 4.5h  (was: 4h 20m)

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Assignee: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 3.3-branch-failsafe-report.html.gz, 
> failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> when [reading from S3a 
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
> an SSLException (which extends IOException) can occur, which triggers 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen", which first closes the original 
> *wrappedStream* and sets *wrappedStream = null*, and then tries to 
> [re-get 
> *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> But if the preceding code that [obtains the 
> S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183]
> throws an exception, *wrappedStream* is left null.
> The 
> [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446]
> mechanism may then re-execute 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450]
> and cause an NPE.
>  
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]
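
To make the failure sequence concrete, here is a minimal Java sketch of the
mechanism described above. It is not the real S3AInputStream code:
LazyReopeningStream and fetchObjectStream() are hypothetical stand-ins for the
stream wrapper and the SDK getObject()/getObjectContent() call chain.

{code:java}
import java.io.IOException;
import java.io.InputStream;

// Hedged sketch of the failure mode only; NOT the actual S3AInputStream.
// fetchObjectStream() stands in for the SDK call that obtains the object
// content stream.
abstract class LazyReopeningStream {
  private InputStream wrappedStream;

  /** Hypothetical stand-in for getObject().getObjectContent(). */
  protected abstract InputStream fetchObjectStream() throws IOException;

  private void reopen() throws IOException {
    if (wrappedStream != null) {
      wrappedStream.close();
      wrappedStream = null;              // the reference is cleared first...
    }
    // ...so if this call throws, wrappedStream is left null.
    wrappedStream = fetchObjectStream();
  }

  public int read() throws IOException {
    try {
      return wrappedStream.read();
    } catch (IOException e) {
      reopen();                          // can fail after nulling the stream
      // A retry layer that re-executes the read after a failed reopen()
      // dereferences a null wrappedStream here: the NPE in question.
      return wrappedStream.read();
    }
  }
}
{code}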



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=632859&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632859
 ]

ASF GitHub Bot logged work on HADOOP-17812:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 11:57
Start Date: 03/Aug/21 11:57
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3251:
URL: https://github.com/apache/hadoop/pull/3251#issuecomment-891786277


   thanks; I did the same locally; good to see it worked on your setup too.
   
   Closing this as your original patch has already been cherrypicked in. 
   
   thank you for finding this and fixing it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 632859)
Time Spent: 4h 40m  (was: 4.5h)

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Assignee: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 3.3-branch-failsafe-report.html.gz, 
> failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> when [reading from S3a 
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
> an SSLException (which extends IOException) can occur, which triggers 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen", which first closes the original 
> *wrappedStream* and sets *wrappedStream = null*, and then tries to 
> [re-get 
> *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> But if the preceding code that [obtains the 
> S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183]
> throws an exception, *wrappedStream* is left null.
> The 
> [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446]
> mechanism may then re-execute 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450]
> and cause an NPE.
>  
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-08-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17812:

Fix Version/s: 3.3.2

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Assignee: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
> Attachments: 3.3-branch-failsafe-report.html.gz, 
> failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> when [reading from S3a 
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
> an SSLException (which extends IOException) can occur, which triggers 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen", which first closes the original 
> *wrappedStream* and sets *wrappedStream = null*, and then tries to 
> [re-get 
> *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> But if the preceding code that [obtains the 
> S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183]
> throws an exception, *wrappedStream* is left null.
> The 
> [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446]
> mechanism may then re-execute 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450]
> and cause an NPE.
>  
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-08-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392259#comment-17392259
 ] 

Steve Loughran commented on HADOOP-17812:
-

As the unbuffer test says, "this may be transient"; if the read buffer isn't 
full, it will underreport. Intermittent; don't worry.

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Assignee: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
> Attachments: 3.3-branch-failsafe-report.html.gz, 
> failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> when [reading from S3a 
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
> an SSLException (which extends IOException) can occur, which triggers 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen", which first closes the original 
> *wrappedStream* and sets *wrappedStream = null*, and then tries to 
> [re-get 
> *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> But if the preceding code that [obtains the 
> S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183]
> throws an exception, *wrappedStream* is left null.
> The 
> [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446]
> mechanism may then re-execute 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450]
> and cause an NPE.
>  
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3209: HDFS-16129. Fixing the signature secret file misusage in HttpFS.

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3209:
URL: https://github.com/apache/hadoop/pull/3209#issuecomment-891795900


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 31s |  |  root: The patch generated 
0 new + 97 unchanged - 2 fixed = 97 total (was 99)  |
   | +1 :green_heart: |  mvnsite  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   1m 16s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/7/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  15m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 19s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 42s |  |  hadoop-kms in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   9m 43s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 219m 39s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-httpfs |
   |  |  
org.apache.hadoop.fs.http.server.HttpFSAuthenticationFilter.CONF_PREFIXES 
should be package protected  At HttpFSAuthenticationFilter.java: At 
HttpFSAuthenticationFilter.java:[line 54] |
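
   For context, the finding above is SpotBugs recommending that a mutable 
static field not be publicly visible. A hedged before/after sketch follows; 
the field's real type and contents in HttpFSAuthenticationFilter may differ, 
and the values shown are placeholders:

```java
// Placeholder values; only the visibility change is the point here.
public class HttpFSAuthenticationFilter {
  // Before: public static array, mutable by any caller, which SpotBugs flags.
  // public static final String[] CONF_PREFIXES = {"example.prefix."};

  // After: package protected, as the finding suggests.
  static final String[] CONF_PREFIXES = {"example.prefix."};
}
```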
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3209 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 69341df2e869 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5f58bbd332c6d259e971700979bc164adc8952f3 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |

[GitHub] [hadoop] jianghuazhu commented on pull request #3256: HDFS-16151.Improve the parameter comments related to ProtobufRpcEngine2#Server().

2021-08-03 Thread GitBox


jianghuazhu commented on pull request #3256:
URL: https://github.com/apache/hadoop/pull/3256#issuecomment-891804326


   It seems that some exceptions occurred during the UT run, which prevented 
the UTs from being executed.
   Consider re-running the UTs; then we can look at the final execution result.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #766: YARN-9509: Added a configuration for admins to be able to capped per-container cpu usage based on a multiplier

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #766:
URL: https://github.com/apache/hadoop/pull/766#issuecomment-891872604


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 21s |  |  
https://github.com/apache/hadoop/pull/766 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/766 |
   | JIRA Issue | YARN-9509 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-766/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szilard-nemeth opened a new pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2021-08-03 Thread GitBox


szilard-nemeth opened a new pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-15327:

Labels: pull-request-available  (was: )

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This way, we can remove the dependency on netty3 (jboss.netty).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=632941&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632941
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 14:06
Start Date: 03/Aug/21 14:06
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth opened a new pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 632941)
Remaining Estimate: 0h
Time Spent: 10m

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This way, we can remove the dependency on netty3 (jboss.netty).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szilard-nemeth commented on pull request #3248: YARN-10874. Refactor NM ContainerLaunch#getEnvDependencies's unit tests

2021-08-03 Thread GitBox


szilard-nemeth commented on pull request #3248:
URL: https://github.com/apache/hadoop/pull/3248#issuecomment-891886674


   Thanks @tomicooler for working on this.
   Nice patch; it made the tests much easier to read.
   I think it's okay to add the jupiter dependency, as it was already imported 
into the hadoop-yarn-api project as well.
   +1, committed to trunk.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szilard-nemeth merged pull request #3248: YARN-10874. Refactor NM ContainerLaunch#getEnvDependencies's unit tests

2021-08-03 Thread GitBox


szilard-nemeth merged pull request #3248:
URL: https://github.com/apache/hadoop/pull/3248


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #3247: HDFS-16146. All three replicas are lost due to not adding a new DataN…

2021-08-03 Thread GitBox


jojochuang commented on pull request #3247:
URL: https://github.com/apache/hadoop/pull/3247#issuecomment-891897722


   looks good. @Hexiaoqiao please go ahead and merge it. thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brumi1024 commented on pull request #3220: YARN-10355. Refactor NM ContainerLaunch.java#orderEnvByDependencies

2021-08-03 Thread GitBox


brumi1024 commented on pull request #3220:
URL: https://github.com/apache/hadoop/pull/3220#issuecomment-891914615


   @tomicooler I checked the latest state; other than the two small checkstyle 
issues, it's +1 from my side.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3209: HDFS-16129. Fixing the signature secret file misusage in HttpFS.

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3209:
URL: https://github.com/apache/hadoop/pull/3209#issuecomment-891916492


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 38s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 32s |  |  root: The patch generated 
0 new + 97 unchanged - 2 fixed = 97 total (was 99)  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 52s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 11s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 44s |  |  hadoop-kms in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   8m 51s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 216m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3209 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux b21da6338d4e 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2c497078f45b93822a9b7e2e0c28278d1b585370 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/8/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms hadoop-hdfs-project/hadoop-hdfs-httpfs U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] Hexiaoqiao merged pull request #3247: HDFS-16146. All three replicas are lost due to not adding a new DataN…

2021-08-03 Thread GitBox


Hexiaoqiao merged pull request #3247:
URL: https://github.com/apache/hadoop/pull/3247


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #3247: HDFS-16146. All three replicas are lost due to not adding a new DataN…

2021-08-03 Thread GitBox


Hexiaoqiao commented on pull request #3247:
URL: https://github.com/apache/hadoop/pull/3247#issuecomment-891986382


   Committed to trunk. Thanks @zhangshuyan0 for your work, and thanks 
@jojochuang for your reviews.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer

2021-08-03 Thread GitBox


hadoop-yetus removed a comment on pull request #2971:
URL: https://github.com/apache/hadoop/pull/2971#issuecomment-888535151


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 23 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 10s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  14m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  20m 34s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/26/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 4 new + 1912 unchanged - 1 
fixed = 1916 total (was 1913)  |
   | +1 :green_heart: |  compile  |  18m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  18m 47s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/26/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 4 new + 1788 
unchanged - 1 fixed = 1792 total (was 1789)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/26/artifact/out/blanks-eol.txt)
 |  The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 45s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/26/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 44 new + 0 unchanged - 0 fixed = 44 total (was 0) 
 |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  9s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  hadoop-project has no data from 
spotbugs  |
   | -1 :x: |  spotbugs  |   1m 45s | 
[/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/26/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 36s |  |  hadoop-project in the patch 
passed.  |

[jira] [Assigned] (HADOOP-17715) ABFS: Append blob tests with non HNS accounts fail

2021-08-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-17715:
---

Assignee: Sneha Varma

> ABFS: Append blob tests with non HNS accounts fail
> --
>
> Key: HADOOP-17715
> URL: https://issues.apache.org/jira/browse/HADOOP-17715
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Append blob tests with non-HNS accounts fail.
>  # The script that runs the tests should ensure that append blob tests with 
> non-HNS accounts don't execute
>  # There should be proper documentation mentioning that append blob is 
> allowed only for HNS accounts



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bogthe opened a new pull request #3260: HADOOP-17198 Support S3 AccessPoint

2021-08-03 Thread GitBox


bogthe opened a new pull request #3260:
URL: https://github.com/apache/hadoop/pull/3260


   [HADOOP-17198](https://issues.apache.org/jira/browse/HADOOP-17198)
   
   This change aims to add support for S3 AccessPoints. To use S3 object level
   APIs for an AccessPoint, one has to use the AccessPoint (AP) ARN.
   
   Hence the following have been added:
   - a new property to set the AccessPoint ARN;
   - S3a parsing and using the new property with appropriate exceptions;
   - initial documentation update for AccessPoints;
   
   What this PR enables:
   - If `apname` is the name of an AccessPoint you have created for a bucket, 
then S3a now allows you to use paths like `s3a://apname/`, provided the new 
`s3a.accesspoint.arn` property is set to the AccessPoint ARN, e.g. 
`arn:aws:s3:eu-west-1:123456789101:accesspoint/apname` (see the sketch below);
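
   A minimal, hedged Java sketch of that usage. The `s3a.accesspoint.arn` key 
and the example ARN are taken verbatim from the description above; whether the 
final patch keeps the key global or scopes it per bucket is an open assumption.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: the property name comes from the PR description above and
// may change (e.g. become per-bucket) before the patch is merged.
public class AccessPointExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("s3a.accesspoint.arn",
        "arn:aws:s3:eu-west-1:123456789101:accesspoint/apname");
    // With the ARN configured, the AccessPoint name is usable as the
    // "bucket" in an s3a:// path.
    FileSystem fs = FileSystem.get(new URI("s3a://apname/"), conf);
    System.out.println(fs.exists(new Path("s3a://apname/data/")));
  }
}
```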
   
   There's one thing I'm not sure about with this initial implementation, so I 
am looking for feedback on whether and how I should tackle it:
   
   An `S3a` bucket now has two "meanings": it can be a bucket name or an Access 
Point ARN. From the point of view of interacting with the SDK they are 
interchangeable, and internal string-parsing logic is used to create the 
request for the right endpoint. However, I think it would be nicer to have a 
clearer abstraction for bucket names and access point ARNs that S3a operations 
can work with. That abstraction comes at the cost of a refactor, which I'm not 
sure is worth it right now. Even a quick search on `.getHost()` shows quite a 
few places where the bucket name is deduced from the `host`.
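
   One possible shape for that abstraction, sketched purely for illustration 
(`BucketRef` is a made-up name, and the "starts with arn:" detection rule is 
an assumption, not anything taken from the patch):

```java
// Illustrative only: a single type that is either a plain bucket name or an
// Access Point ARN, so S3a operations stop deducing the meaning of
// URI.getHost() in many separate places.
final class BucketRef {
  private final String raw;
  private final boolean accessPoint;

  private BucketRef(String raw, boolean accessPoint) {
    this.raw = raw;
    this.accessPoint = accessPoint;
  }

  static BucketRef of(String hostOrArn) {
    // Access Point ARNs start with "arn:"; anything else is treated as a
    // plain bucket name.
    return new BucketRef(hostOrArn, hostOrArn.startsWith("arn:"));
  }

  boolean isAccessPoint() {
    return accessPoint;
  }

  /** Value handed to the SDK; it accepts both forms interchangeably. */
  String asSdkBucket() {
    return raw;
  }
}
```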
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17198) Support S3 Access Points

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17198?focusedWorklogId=633057&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633057
 ]

ASF GitHub Bot logged work on HADOOP-17198:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 17:07
Start Date: 03/Aug/21 17:07
Worklog Time Spent: 10m 
  Work Description: bogthe opened a new pull request #3260:
URL: https://github.com/apache/hadoop/pull/3260


   [HADOOP-17198](https://issues.apache.org/jira/browse/HADOOP-17198)
   
   This change aims to add support for S3 AccessPoints. To use S3 object level
   APIs for an AccessPoint, one has to use the AccessPoint (AP) ARN.
   
   Hence the following have been added:
   - a new property to set the AccessPoint ARN;
   - S3a parsing and using the new property with appropriate exceptions;
   - initial documentation update for AccessPoints;
   
   What this PR enables:
   - If `apname` is the name of an AccessPoint you have created for a bucket, 
then S3a now allows you to use paths like `s3a://apname/`, provided the new 
`s3a.accesspoint.arn` property is set to the AccessPoint ARN, e.g. 
`arn:aws:s3:eu-west-1:123456789101:accesspoint/apname`;
   
   There's one thing I'm not sure about with this initial implementation, so I 
am looking for feedback on whether and how I should tackle it:
   
   An `S3a` bucket now has two "meanings": it can be a bucket name or an Access 
Point ARN. From the point of view of interacting with the SDK they are 
interchangeable, and internal string-parsing logic is used to create the 
request for the right endpoint. However, I think it would be nicer to have a 
clearer abstraction for bucket names and access point ARNs that S3a operations 
can work with. That abstraction comes at the cost of a refactor, which I'm not 
sure is worth it right now. Even a quick search on `.getHost()` shows quite a 
few places where the bucket name is deduced from the `host`.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633057)
Remaining Estimate: 0h
Time Spent: 10m

> Support S3 Access Points
> 
>
> Key: HADOOP-17198
> URL: https://issues.apache.org/jira/browse/HADOOP-17198
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Bogdan Stolojan
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Improve VPC integration by supporting access points for buckets
> https://docs.aws.amazon.com/AmazonS3/latest/dev/access-points.html
> Not sure how to do this *at all*; 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17198) Support S3 Access Points

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17198:

Labels: pull-request-available  (was: )

> Support S3 Access Points
> 
>
> Key: HADOOP-17198
> URL: https://issues.apache.org/jira/browse/HADOOP-17198
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Bogdan Stolojan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Improve VPC integration by supporting access points for buckets
> https://docs.aws.amazon.com/AmazonS3/latest/dev/access-points.html
> Not sure how to do this *at all*; 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bogthe commented on pull request #3260: HADOOP-17198 Support S3 AccessPoint

2021-08-03 Thread GitBox


bogthe commented on pull request #3260:
URL: https://github.com/apache/hadoop/pull/3260#issuecomment-892035048


   Turns out the merge with `trunk` broke some tests. Fixing.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17198) Support S3 Access Points

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17198?focusedWorklogId=633073&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633073
 ]

ASF GitHub Bot logged work on HADOOP-17198:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 17:39
Start Date: 03/Aug/21 17:39
Worklog Time Spent: 10m 
  Work Description: bogthe commented on pull request #3260:
URL: https://github.com/apache/hadoop/pull/3260#issuecomment-892035048


   Turns out the merge with `trunk` broke some tests. Fixing.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633073)
Time Spent: 20m  (was: 10m)

> Support S3 Access Points
> 
>
> Key: HADOOP-17198
> URL: https://issues.apache.org/jira/browse/HADOOP-17198
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Bogdan Stolojan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Improve VPC integration by supporting access points for buckets
> https://docs.aws.amazon.com/AmazonS3/latest/dev/access-points.html
> Not sure how to do this *at all*; 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bogthe commented on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-08-03 Thread GitBox


bogthe commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-892035817


   It is not you, it is we! So did anything break? I re-ran the tests on a 
PR up to date with trunk and nothing suspicious broke.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=633074&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633074
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 17:40
Start Date: 03/Aug/21 17:40
Worklog Time Spent: 10m 
  Work Description: bogthe commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-892035817


   It is not you, it is we! So did anything break? I re-ran the tests on a 
PR up to date with trunk and nothing suspicious broke.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633074)
Time Spent: 2h 50m  (was: 2h 40m)

> fs.s3a.acl.default not working after S3A Audit feature added
> 
>
> Key: HADOOP-17822
> URL: https://issues.apache.org/jira/browse/HADOOP-17822
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.2
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> After HADOOP-17511 the fs.s3a.acl.default property isn't being passed 
> through to S3 PUT/COPY requests.
> The new RequestFactory is being given the acl values from the S3A FS 
> instance, but the factory is being created before the acl settings are loaded 
> from the configuration (an illustrative sketch of this ordering issue follows 
> below).
> Fix, and ideally, test (if the getXAttr lets us see this now)
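
A simplified, hypothetical sketch of the ordering bug described above; the 
class and field names here are placeholders, not the actual 
S3AFileSystem/RequestFactory code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration: a factory that captures a field during
// initialize() before that field has been read from configuration,
// so it permanently holds null.
public class InitOrderBugSketch {
  static class RequestFactorySketch {
    final String cannedAcl;  // captured once, at construction time
    RequestFactorySketch(String acl) {
      this.cannedAcl = acl;
    }
  }

  private String cannedAcl;  // meant to be loaded from configuration
  private RequestFactorySketch factory;

  void initialize(Map<String, String> conf) {
    // Bug: the factory is built while cannedAcl is still null...
    factory = new RequestFactorySketch(cannedAcl);
    // ...because the ACL setting is only loaded afterwards.
    cannedAcl = conf.get("fs.s3a.acl.default");
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put("fs.s3a.acl.default", "bucket-owner-full-control");
    InitOrderBugSketch fs = new InitOrderBugSketch();
    fs.initialize(conf);
    // Prints "null": the factory never saw the configured ACL value.
    System.out.println(fs.factory.cannedAcl);
  }
}
```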



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17834) Bump aliyun-sdk-oss to 2.0.6

2021-08-03 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-17834:
---

 Summary: Bump aliyun-sdk-oss to 2.0.6
 Key: HADOOP-17834
 URL: https://issues.apache.org/jira/browse/HADOOP-17834
 Project: Hadoop Common
  Issue Type: Task
Reporter: Siyao Meng
Assignee: Siyao Meng


Bump aliyun-sdk-oss to 2.0.6 in order to remove jdom 1.1 dependency.

Ref: 
https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17820) Remove dependency on jdom

2021-08-03 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392460#comment-17392460
 ] 

Siyao Meng commented on HADOOP-17820:
-

Thanks [~aajisaka]. I have filed HADOOP-17834.

> Remove dependency on jdom
> -
>
> Key: HADOOP-17820
> URL: https://issues.apache.org/jira/browse/HADOOP-17820
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> It doesn't seem that jdom is referenced anywhere in the code base now, yet it 
> exists in the distribution.
> {code}
> $ find . -name "*jdom*.jar"
> ./hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/jdom-1.1.jar
> {code}
> There is recently 
> [CVE-2021-33813|https://github.com/advisories/GHSA-2363-cqg2-863c] issued for 
> jdom. Let's remove the binary from the dist if not useful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17834:

Description: 
Bump aliyun-sdk-oss to 3.13.0 in order to remove transient dependency on jdom 
1.1.

Ref: 
https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.

  was:
Bump aliyun-sdk-oss to 2.0.6 in order to remove jdom 1.1 dependency.

Ref: 
https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.


> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove transient dependency on jdom 
> 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17834:

Summary: Bump aliyun-sdk-oss to 3.13.0  (was: Bump aliyun-sdk-oss to 2.0.6)

> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Bump aliyun-sdk-oss to 2.0.6 in order to remove jdom 1.1 dependency.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17834:

Description: 
Bump aliyun-sdk-oss to 3.13.0 in order to remove transitive dependency on jdom 
1.1.

Ref: 
https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.

  was:
Bump aliyun-sdk-oss to 3.13.0 in order to remove transient dependency on jdom 
1.1.

Ref: 
https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.


> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove transitive dependency on 
> jdom 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3260: HADOOP-17198 Support S3 AccessPoint

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3260:
URL: https://github.com/apache/hadoop/pull/3260#issuecomment-892075370


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  17m 49s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3260/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 11 new + 19 unchanged - 0 
fixed = 30 total (was 19)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javadoc  |   0m 24s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3260/1/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 
with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 6 new + 
63 unchanged - 0 fixed = 69 total (was 63)  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 55s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  84m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3260/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3260 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux c83ad5e48ac2 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3a645a82bd2051b93bc2295d8be351bc6f775c78 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0

[jira] [Work logged] (HADOOP-17198) Support S3 Access Points

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17198?focusedWorklogId=633097&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633097
 ]

ASF GitHub Bot logged work on HADOOP-17198:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 18:39
Start Date: 03/Aug/21 18:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3260:
URL: https://github.com/apache/hadoop/pull/3260#issuecomment-892075370


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  17m 49s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3260/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 11 new + 19 unchanged - 0 
fixed = 30 total (was 19)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javadoc  |   0m 24s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3260/1/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 
with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 6 new + 
63 unchanged - 0 fixed = 69 total (was 63)  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 55s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  84m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3260/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3260 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdown

[GitHub] [hadoop] sunchao commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


sunchao commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r681974106



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipDecompressor.java
##
@@ -34,13 +34,13 @@
  */
 @DoNotPool
 public class BuiltInGzipDecompressor implements Decompressor {
-  private static final int GZIP_MAGIC_ID = 0x8b1f;  // if read as LE short int
-  private static final int GZIP_DEFLATE_METHOD = 8;
-  private static final int GZIP_FLAGBIT_HEADER_CRC  = 0x02;
-  private static final int GZIP_FLAGBIT_EXTRA_FIELD = 0x04;
-  private static final int GZIP_FLAGBIT_FILENAME= 0x08;
-  private static final int GZIP_FLAGBIT_COMMENT = 0x10;
-  private static final int GZIP_FLAGBITS_RESERVED   = 0xe0;
+  public static final int GZIP_MAGIC_ID = 0x8b1f;  // if read as LE short int

Review comment:
   do we need to make these public?

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();

Review comment:
   make this final?

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+privat

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=633140&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633140
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 19:50
Start Date: 03/Aug/21 19:50
Worklog Time Spent: 10m 
  Work Description: sunchao commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r681974106



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipDecompressor.java
##
@@ -34,13 +34,13 @@
  */
 @DoNotPool
 public class BuiltInGzipDecompressor implements Decompressor {
-  private static final int GZIP_MAGIC_ID = 0x8b1f;  // if read as LE short int
-  private static final int GZIP_DEFLATE_METHOD = 8;
-  private static final int GZIP_FLAGBIT_HEADER_CRC  = 0x02;
-  private static final int GZIP_FLAGBIT_EXTRA_FIELD = 0x04;
-  private static final int GZIP_FLAGBIT_FILENAME= 0x08;
-  private static final int GZIP_FLAGBIT_COMMENT = 0x10;
-  private static final int GZIP_FLAGBITS_RESERVED   = 0xe0;
+  public static final int GZIP_MAGIC_ID = 0x8b1f;  // if read as LE short int

Review comment:
   do we need to make these public?

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();

Review comment:
   make this final?

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzi

[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682057640



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipDecompressor.java
##
@@ -34,13 +34,13 @@
  */
 @DoNotPool
 public class BuiltInGzipDecompressor implements Decompressor {
-  private static final int GZIP_MAGIC_ID = 0x8b1f;  // if read as LE short int
-  private static final int GZIP_DEFLATE_METHOD = 8;
-  private static final int GZIP_FLAGBIT_HEADER_CRC  = 0x02;
-  private static final int GZIP_FLAGBIT_EXTRA_FIELD = 0x04;
-  private static final int GZIP_FLAGBIT_FILENAME= 0x08;
-  private static final int GZIP_FLAGBIT_COMMENT = 0x10;
-  private static final int GZIP_FLAGBITS_RESERVED   = 0xe0;
+  public static final int GZIP_MAGIC_ID = 0x8b1f;  // if read as LE short int

Review comment:
   Some are not, probably I did it during development but forgot to revert.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=633142&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633142
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 19:54
Start Date: 03/Aug/21 19:54
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682057640



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipDecompressor.java
##
@@ -34,13 +34,13 @@
  */
 @DoNotPool
 public class BuiltInGzipDecompressor implements Decompressor {
-  private static final int GZIP_MAGIC_ID = 0x8b1f;  // if read as LE short int
-  private static final int GZIP_DEFLATE_METHOD = 8;
-  private static final int GZIP_FLAGBIT_HEADER_CRC  = 0x02;
-  private static final int GZIP_FLAGBIT_EXTRA_FIELD = 0x04;
-  private static final int GZIP_FLAGBIT_FILENAME= 0x08;
-  private static final int GZIP_FLAGBIT_COMMENT = 0x10;
-  private static final int GZIP_FLAGBITS_RESERVED   = 0xe0;
+  public static final int GZIP_MAGIC_ID = 0x8b1f;  // if read as LE short int

Review comment:
   Some are not, probably I did it during development but forgot to revert.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633142)
Time Spent: 3h  (was: 2h 50m)

> Add BuiltInGzipCompressor
> -
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Liang-Chi Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Currently, GzipCodec only supports BuiltInGzipDecompressor if native zlib is 
> not loaded. So, without the Hadoop native codec installed, saving a 
> SequenceFile using GzipCodec will throw an exception like "SequenceFile 
> doesn't work with GzipCodec without native-hadoop code!"
> As with the other codecs that we migrated to prepared packages (lz4, 
> snappy), it would be better if we supported GzipCodec generally without the 
> Hadoop native codec installed. Similar to BuiltInGzipDecompressor, we can use 
> the Java Deflater to support BuiltInGzipCompressor (a minimal sketch follows 
> below).
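
A minimal sketch of that approach, using only `java.util.zip` (illustrative, 
not the PR's actual code; the real BuiltInGzipCompressor implements the 
streaming Compressor interface instead of buffering whole byte arrays):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.CRC32;
import java.util.zip.Deflater;

// Illustrative sketch: gzip = fixed 10-byte header + raw deflate stream
// + 8-byte trailer (CRC-32 and uncompressed length, both little-endian).
public class GzipViaDeflaterSketch {
  public static byte[] gzip(byte[] data) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // Fixed gzip header: magic 0x1f 0x8b, deflate method 8, no flags,
    // zero mtime/XFL/OS.
    out.write(0x1f);
    out.write(0x8b);
    out.write(8);
    for (int i = 0; i < 7; i++) {
      out.write(0);
    }

    // nowrap=true: raw deflate output with no zlib wrapper, as gzip requires.
    Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
    deflater.setInput(data);
    deflater.finish();
    byte[] buf = new byte[4096];
    while (!deflater.finished()) {
      int n = deflater.deflate(buf, 0, buf.length);
      out.write(buf, 0, n);
    }
    deflater.end();

    // Trailer: CRC-32 of the *uncompressed* data, then its length mod 2^32.
    CRC32 crc = new CRC32();
    crc.update(data, 0, data.length);
    writeIntLE(out, (int) crc.getValue());
    writeIntLE(out, data.length);
    return out.toByteArray();
  }

  private static void writeIntLE(ByteArrayOutputStream out, int v) {
    out.write(v & 0xff);
    out.write((v >>> 8) & 0xff);
    out.write((v >>> 16) & 0xff);
    out.write((v >>> 24) & 0xff);
  }
}
```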



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682059830



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we are not within the compressed data stream yet, output the header.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on 
uncompressed data
+
+currentInputLen = userBufLen;
+userBufOff += userBufLen;
+userBufLen = 0;
+}
+
+
+// now compress it into b[]
+int deflated = deflater.deflate(b, off, len - 8, 
Deflater.FULL_FLUSH);

Review comment:
   Need to reserve for the trailer. Otherwise if deflater reported finished 
status, we have not cha

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=633143&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633143
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 19:57
Start Date: 03/Aug/21 19:57
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682059830



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we are not within the compressed data stream yet, output the header.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBuf

[GitHub] [hadoop] smengcl opened a new pull request #3261: HADOOP-17834. Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread GitBox


smengcl opened a new pull request #3261:
URL: https://github.com/apache/hadoop/pull/3261


   See description of https://issues.apache.org/jira/browse/HADOOP-17834.
   
   Confirmed that with this change, jdom 1.1 is gone from the dependency tree:
   
   ```
   $ mvn dependency:tree | grep jdom  
   [INFO] |  +- org.jdom:jdom2:jar:2.0.6:provided
   [INFO] |  +- org.jdom:jdom2:jar:2.0.6:compile
   [INFO] | +- org.jdom:jdom2:jar:2.0.6:compile
   [INFO] | +- org.jdom:jdom2:jar:2.0.6:compile
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?focusedWorklogId=633145&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633145
 ]

ASF GitHub Bot logged work on HADOOP-17834:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 20:10
Start Date: 03/Aug/21 20:10
Worklog Time Spent: 10m 
  Work Description: smengcl opened a new pull request #3261:
URL: https://github.com/apache/hadoop/pull/3261


   See description of https://issues.apache.org/jira/browse/HADOOP-17834.
   
   Confirmed that with this change, jdom 1.1 is gone from the dependency tree:
   
   ```
   $ mvn dependency:tree | grep jdom  
   [INFO] |  +- org.jdom:jdom2:jar:2.0.6:provided
   [INFO] |  +- org.jdom:jdom2:jar:2.0.6:compile
   [INFO] | +- org.jdom:jdom2:jar:2.0.6:compile
   [INFO] | +- org.jdom:jdom2:jar:2.0.6:compile
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633145)
Remaining Estimate: 0h
Time Spent: 10m

> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove transitive dependency on 
> jdom 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17834:

Labels: pull-request-available  (was: )

> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove transitive dependency on 
> jdom 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-03 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17834:

Status: Patch Available  (was: Open)

> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove transitive dependency on 
> jdom 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-08-03 Thread GitBox


steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-892139643


   ok. If it was a full breakage someone would have noticed, rolled back, 
re-opened, emailed me etc. It does happen from time to time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=633149&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633149
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 20:22
Start Date: 03/Aug/21 20:22
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-892139643


   ok. If it was a full breakage someone would have noticed, rolled back, 
re-opened, emailed me etc. It does happen from time to time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633149)
Time Spent: 3h  (was: 2h 50m)

> fs.s3a.acl.default not working after S3A Audit feature added
> 
>
> Key: HADOOP-17822
> URL: https://issues.apache.org/jira/browse/HADOOP-17822
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.2
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> After HADOOP-17511 the fs.s3a.acl.default property isn't being passed 
> through to S3 PUT/COPY requests.
> The new RequestFactory is being given the acl values from the S3A FS 
> instance, but the factory is being created before the acl settings are loaded 
> from the configuration.
> Fix, and ideally, test (if the getXAttr lets us see this now)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3238: HADOOP-17816. Run optional CI for changes in C

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3238:
URL: https://github.com/apache/hadoop/pull/3238#issuecomment-892179375


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  39m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  48m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 113m 33s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 220m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3238 |
   | Optional Tests | dupname asflicense mvnsite unit codespell shellcheck 
shelldocs compile cc javac golang |
   | uname | Linux 049441a0949b 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 912909ec5fa0cd77bcb3e5b285117e7c40d9f9ae |
   | Default Java | Red Hat, Inc.-1.8.0_292-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/testReport/ |
   | Max. process+thread count | 714 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/console |
   | versions | git=2.9.5 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17816) Run optional CI for changes in C

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17816?focusedWorklogId=633174&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633174
 ]

ASF GitHub Bot logged work on HADOOP-17816:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 21:30
Start Date: 03/Aug/21 21:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3238:
URL: https://github.com/apache/hadoop/pull/3238#issuecomment-892179375


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  39m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  48m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 113m 33s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 220m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3238 |
   | Optional Tests | dupname asflicense mvnsite unit codespell shellcheck 
shelldocs compile cc javac golang |
   | uname | Linux 049441a0949b 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 912909ec5fa0cd77bcb3e5b285117e7c40d9f9ae |
   | Default Java | Red Hat, Inc.-1.8.0_292-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/testReport/ |
   | Max. process+thread count | 714 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/console |
   | versions | git=2.9.5 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633174)
Time Spent: 1h 10m  (was: 1h)

> Run optional CI for changes in C
> 
>
> Key: HADOOP-17816
> URL: https://issues.apache.org/jira/browse/HADOOP-17816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We need to ensure that we run the CI for all the platforms when there are 
> changes in C files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[GitHub] [hadoop] hadoop-yetus commented on pull request #2845: HADOOP-17618. ABFS: Partially obfuscate SAS object IDs in Logs

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #2845:
URL: https://github.com/apache/hadoop/pull/2845#issuecomment-892181247


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 21s |  |  
https://github.com/apache/hadoop/pull/2845 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2845 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=633177&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633177
 ]

ASF GitHub Bot logged work on HADOOP-17618:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 21:33
Start Date: 03/Aug/21 21:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2845:
URL: https://github.com/apache/hadoop/pull/2845#issuecomment-892181247


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 21s |  |  
https://github.com/apache/hadoop/pull/2845 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2845 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633177)
Time Spent: 7.5h  (was: 7h 20m)

> ABFS: Partially obfuscate SAS object IDs in Logs
> 
>
> Key: HADOOP-17618
> URL: https://issues.apache.org/jira/browse/HADOOP-17618
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> Delegation SAS tokens are created using various parameters for specifying 
> details such as permissions and validity. The requests are logged, along with 
> values of all the query parameters. This change will partially mask values 
> logged for the following object IDs representing the security principal: 
> skoid, saoid, suoid.
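
For illustration, here is a minimal sketch of what partially masking such an
object ID before logging could look like. The exact rule (keep a
four-character prefix, mask the rest) is an assumption made here, not
necessarily the rule the ABFS patch uses.

    // Minimal sketch of partially obfuscating an object ID before logging.
    // The masking policy is assumed for illustration only.
    public class SasIdMaskSketch {

      static String partiallyMask(String id) {
        if (id == null || id.length() <= 4) {
          return "XXXX";                      // too short to expose anything
        }
        return id.substring(0, 4) + "XXXX";   // keep a prefix, mask the rest
      }

      public static void main(String[] args) {
        String skoid = "a1b2c3d4-5678-90ab-cdef-1234567890ab";
        // prints "skoid=a1b2XXXX" instead of the full object ID
        System.out.println("skoid=" + partiallyMask(skoid));
      }
    }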



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682121842



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we have not started emitting compressed data yet, output the header first.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on 
uncompressed data
+
+currentInputLen = userBufLen;
+userBufOff += userBufLen;
+userBufLen = 0;
+}
+
+
+// now compress it into b[]
+int deflated = deflater.deflate(b, off, len - 8, 
Deflater.FULL_FLUSH);
+
+numAvailBytes += deflated;
+off += deflated;
+len -= deflated;
+
+// A

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=633183&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633183
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 21:42
Start Date: 03/Aug/21 21:42
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682121842



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we have not started emitting compressed data yet, output the header first.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBuf

[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682122354



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);

Review comment:
   Oh, looks like I didn't set the indent well.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682122354



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);

Review comment:
   Oh, looks like I didn't set the 2-space indent for the hadoop project.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=633184&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633184
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 21:43
Start Date: 03/Aug/21 21:43
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682122354



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);

Review comment:
   Oh, looks like I didn't set the indent well.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633184)
Time Spent: 3.5h  (was: 3h 20m)

> Add BuiltInGzipCompressor
> -
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Currently, GzipCodec only supports BuiltInGzipDecompressor if native zlib is
> not loaded. So, without the Hadoop native codec installed, saving a
> SequenceFile using GzipCodec will throw an exception like "SequenceFile
> doesn't work with GzipCodec without native-hadoop code!"
> As with the other codecs that we migrated to prepared packages (lz4,
> snappy), it would be better to support GzipCodec generally without the
> Hadoop native codec installed. Similar to BuiltInGzipDecompressor, we can
> use the Java Deflater to implement a BuiltInGzipCompressor.
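
A self-contained sketch of the underlying idea, using only java.util.zip: a
gzip stream is the fixed 10-byte header, a raw deflate body (a Deflater
constructed with nowrap=true), and an 8-byte little-endian trailer holding
the CRC-32 and the input size. This illustrates the approach, not the actual
BuiltInGzipCompressor code.

    import java.io.ByteArrayOutputStream;
    import java.util.zip.CRC32;
    import java.util.zip.Deflater;

    public class GzipViaDeflaterSketch {

      // Fixed 10-byte gzip header: magic, CM=8 (deflate), no flags, zero
      // mtime, default XFL, unknown OS. See RFC 1952.
      private static final byte[] GZIP_HEADER = {
          0x1f, (byte) 0x8b, 0x08, 0, 0, 0, 0, 0, 0, 0 };

      public static byte[] gzip(byte[] input) {
        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        CRC32 crc = new CRC32();
        crc.update(input, 0, input.length);   // CRC-32 over uncompressed data

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(GZIP_HEADER, 0, GZIP_HEADER.length);

        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
          int n = deflater.deflate(buf, 0, buf.length);
          out.write(buf, 0, n);
        }
        deflater.end();

        writeLittleEndianInt(out, (int) crc.getValue()); // trailer: CRC-32
        writeLittleEndianInt(out, input.length);         // trailer: ISIZE
        return out.toByteArray();
      }

      private static void writeLittleEndianInt(ByteArrayOutputStream out, int v) {
        out.write(v & 0xff);
        out.write((v >>> 8) & 0xff);
        out.write((v >>> 16) & 0xff);
        out.write((v >>> 24) & 0xff);
      }
    }

A stream produced this way should be readable with
java.util.zip.GZIPInputStream, which is a convenient way to sanity-check the
sketch.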



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=633185&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633185
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 21:43
Start Date: 03/Aug/21 21:43
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682122354



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);

Review comment:
   Oh, looks like I didn't set the 2-space indent for the hadoop project.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633185)
Time Spent: 3h 40m  (was: 3.5h)

> Add BuiltInGzipCompressor
> -
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Currently, GzipCodec only supports BuiltInGzipDecompressor if native zlib is
> not loaded. So, without the Hadoop native codec installed, saving a
> SequenceFile using GzipCodec will throw an exception like "SequenceFile
> doesn't work with GzipCodec without native-hadoop code!"
> As with the other codecs that we migrated to prepared packages (lz4,
> snappy), it would be better to support GzipCodec generally without the
> Hadoop native codec installed. Similar to BuiltInGzipDecompressor, we can
> use the Java Deflater to implement a BuiltInGzipCompressor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


sunchao commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682128146



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we have not started emitting compressed data yet, output the header first.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on 
uncompressed data
+
+currentInputLen = userBufLen;
+userBufOff += userBufLen;
+userBufLen = 0;
+}
+
+
+// now compress it into b[]
+int deflated = deflater.deflate(b, off, len - 8, 
Deflater.FULL_FLUSH);

Review comment:
   hmm what if `len < 8`, will it run into error?
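
For context, a quick standalone check of this concern:
java.util.zip.Deflater.deflate rejects a negative length, so a
caller-supplied len below 8 would make the (len - 8) argument fail.

    import java.util.zip.Deflater;

    // With len < 8 the (len - 8) argument goes negative, and Deflater.deflate
    // rejects a negative length with an ArrayIndexOutOfBoundsException.
    public class DeflateNegativeLen {
      public static void main(String[] args) {
        Deflater d = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        d.setInput("hello".getBytes());
        byte[] out = new byte[16];
        int len = 4;                       // caller-supplied buffer length
        try {
          d.deflate(out, 0, len - 8, Deflater.FULL_FLUSH);
        } catch (ArrayIndexOutOfBoundsException e) {
          System.out.println("negative length rejected: " + e);
        }
      }
    }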




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=633188&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633188
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 21:55
Start Date: 03/Aug/21 21:55
Worklog Time Spent: 10m 
  Work Description: sunchao commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682128146



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we have not started emitting compressed data yet, output the header first.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBu

[jira] [Commented] (HADOOP-16761) KMSClientProvider does not work with client using ticket logged in externally

2021-08-03 Thread Vipin Rathor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392563#comment-17392563
 ] 

Vipin Rathor commented on HADOOP-16761:
---

Hello,

Thank you [~xyao] for filing this. I faced a similar issue while StreamSets was
trying to write to an HDFS encrypted zone. For now, the workaround in
StreamSets was to use CDH 5.16.0 as a stage library. Is there any outlook on
when a permanent fix might land?

Thanks.

CC: [~arpaga]

> KMSClientProvider does not work with client using ticket logged in externally 
> --
>
> Key: HADOOP-16761
> URL: https://issues.apache.org/jira/browse/HADOOP-16761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>
> This is a regression from HDFS-13682, which checks not only the kerberos
> credential but also enforces that the login is non-external. This breaks client
> applications that need to access HDFS encrypted files using a kerberos ticket
> that was logged in externally into the ticket cache.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682130996



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we have not started emitting compressed data yet, output the header first.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on 
uncompressed data
+
+currentInputLen = userBufLen;
+userBufOff += userBufLen;
+userBufLen = 0;
+}
+
+
+// now compress it into b[]
+int deflated = deflater.deflate(b, off, len - 8, 
Deflater.FULL_FLUSH);

Review comment:
   Tried with NO_FLUSH locally and the tests look to be passing. Let's see Hadoop CI.
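
As a side note on the two flush modes being compared: FULL_FLUSH resets the
compressor state at each flush point so that each chunk is independently
decodable, at the cost of extra output bytes, while NO_FLUSH lets the
Deflater buffer input for a better ratio. A rough, self-contained
illustration follows (output sizes vary with input and zlib version; this is
not a benchmark of the PR):

    import java.util.zip.Deflater;

    public class FlushModeSketch {

      // Compress `input` in 16-byte chunks, flushing each chunk with
      // `flushMode`, and return the total number of compressed bytes.
      static int compressedSize(int flushMode, byte[] input) {
        Deflater d = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        byte[] out = new byte[4 * input.length + 64];
        int total = 0;
        for (int i = 0; i < input.length; i += 16) {
          int n = Math.min(16, input.length - i);
          d.setInput(input, i, n);
          total += d.deflate(out, total, out.length - total, flushMode);
        }
        d.finish();
        while (!d.finished()) {
          total += d.deflate(out, total, out.length - total);
        }
        d.end();
        return total;
      }

      public static void main(String[] args) {
        byte[] input = new String(new char[512]).replace('\0', 'a').getBytes();
        System.out.println("NO_FLUSH   bytes: " + compressedSize(Deflater.NO_FLUSH, input));
        System.out.println("FULL_FLUSH bytes: " + compressedSize(Deflater.FULL_FLUSH, input));
      }
    }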




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=633191&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633191
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 03/Aug/21 22:00
Start Date: 03/Aug/21 22:00
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682130996



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we have not started emitting compressed data yet, output the header first.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBuf

[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682132872



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we have not started emitting compressed data yet, output the header first.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on 
uncompressed data
+
+currentInputLen = userBufLen;
+userBufOff += userBufLen;
+userBufLen = 0;
+}
+
+
+// now compress it into b[]
+int deflated = deflater.deflate(b, off, len - 8, 
Deflater.FULL_FLUSH);

Review comment:
   > hmm what if `len < 8`, will it run into error?
   
   If buffer size < 0, `deflate` will throw `ArrayIndexOutOfBoundsException`.
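   
   A minimal, self-contained sketch of the JDK contract in question (the class
   name DeflateBoundsDemo is illustrative, not part of the patch): the
   documented `java.util.zip.Deflater.deflate` bounds check rejects a negative
   length, which is why the patch must keep `len - 8` non-negative.
   
       import java.util.zip.Deflater;
   
       public class DeflateBoundsDemo {
           public static void main(String[] args) {
               // nowrap = true: raw deflate stream, as in the patch
               Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
               deflater.setInput("hello".getBytes());
               deflater.finish();
   
               byte[] out = new byte[4];  // caller buffer smaller than the 8-byte trailer
               try {
                   // out.length - 8 == -4; the negative length is rejected
                   // before any compression happens
                   deflater.deflate(out, 0, out.length - 8, Deflater.FULL_FLUSH);
               } catch (ArrayIndexOutOfBoundsException e) {
                   System.out.println("negative length rejected: " + e);
               } finally {
                   deflater.end();
               }
           }
       }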

[jira] [Updated] (HADOOP-17189) add way for s3a to recognise buckets with "." in name and switch to path access

2021-08-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17189:

Description: 
# AWS has, historically, allowed buckets with '.' in their name (along with 
other non-DNS valid chars)
# none of which work with virtual hostname S3 clients -you have to enable path 
style access
# which we can't do on a per-bucket basis, as the logic there doesn't support 
buckets with '.' in the name (think about it...)
# and we can't blindly say "use path access everywhere", because all buckets 
created on/after 2020-10-01 won't work that way

  was:
# AWS has, historically, allowed buckets with '.' in their name (along with 
other non-DNS valid chars)
# none of which work with virtual hostname S3 clients -you have to enable path 
style access
# which we can't do on a per-bucket basis, as the logic there doesn't support 
buckets with '.' in the name (think about it...)
# and we can't blindly say "use path access everywhere", because all buckets 
created on/after 2020-10-01 won't work that way  



> add way for s3a to recognise buckets with "." in name and switch to path 
> access
> ---
>
> Key: HADOOP-17189
> URL: https://issues.apache.org/jira/browse/HADOOP-17189
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
> Fix For: 3.4.0
>
>
> # AWS has, historically, allowed buckets with '.' in their name (along with 
> other non-DNS valid chars)
> # none of which work with virtual hostname S3 clients -you have to enable 
> path style access
> # which we can't do on a per-bucket basis, as the logic there doesn't support 
> buckets with '.' in the name (think about it...)
> # and we can't blindly say "use path access everywhere", because all buckets 
> created on/after 2020-10-01 won't work that way
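
A sketch of the configuration tension above, assuming only the documented S3A
options (`fs.s3a.path.style.access` and the `fs.s3a.bucket.<name>.<option>`
per-bucket override pattern); the bucket name "my.bucket" is hypothetical:

    import org.apache.hadoop.conf.Configuration;

    public class PathStyleAccessSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Global switch: forces path-style requests for every bucket,
            // which point 4 rules out for buckets created on/after 2020-10-01.
            conf.setBoolean("fs.s3a.path.style.access", true);
            // Per-bucket override: the '.' in "my.bucket" collides with the
            // '.' separators of the option name itself, which is the parsing
            // problem point 3 describes.
            conf.setBoolean("fs.s3a.bucket.my.bucket.path.style.access", true);
        }
    }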



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259#issuecomment-892210439


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 30s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  16m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  21m 59s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 1931 unchanged - 0 
fixed = 1932 total (was 1931)  |
   | +1 :green_heart: |  compile  |  19m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  19m 22s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 1806 
unchanged - 0 fixed = 1807 total (was 1806)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/1/artifact/out/blanks-eol.txt)
 |  The patch has 26 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m  4s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 85 new + 135 unchanged - 9 fixed = 220 total (was 
144)  |
   | +1 :green_heart: |  mvnsite  |   4m 23s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 27s |  |  hadoop-project has no data from 
spotbugs  |
   | -1 :x: |  spotbugs  |   4m 18s | 
[/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/1/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client.html)
 |  hadoop-mapreduce-project/hadoop-mapreduce-client generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  spotbugs  |   1m 41s | 
[/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3259/1/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client

[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682162450



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we are not within uncompression data yet. Output the header.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on 
uncompressed data
+
+currentInputLen = userBufLen;
+userBufOff += userBufLen;
+userBufLen = 0;
+}
+
+
+// now compress it into b[]
+int deflated = deflater.deflate(b, off, len - 8, 
Deflater.FULL_FLUSH);

Review comment:
   Hmm, you remind me that in edge case, e.g. the buffer size is less than 
8, we still need to be able to handle it.
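   
   For context, a hedged sketch of the standard Compressor drain loop this
   class has to serve (assuming the constructor from this patch; the 64-byte
   output buffer is arbitrary but holds the 10-byte header plus the 8-byte
   trailer with room to spare):
   
       import org.apache.hadoop.conf.Configuration;
       import org.apache.hadoop.io.compress.Compressor;
       import org.apache.hadoop.io.compress.zlib.BuiltInGzipCompressor;
   
       public class CompressLoopSketch {
           public static void main(String[] args) throws Exception {
               Compressor compressor = new BuiltInGzipCompressor(new Configuration());
               byte[] input = "hello hadoop".getBytes();
               byte[] out = new byte[64];
   
               compressor.setInput(input, 0, input.length);
               compressor.finish();
               while (!compressor.finished()) {
                   int n = compressor.compress(out, 0, out.length);
                   // write out[0..n) to the destination stream here
               }
               compressor.end();
           }
       }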

[GitHub] [hadoop] sunchao commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


sunchao commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682166815



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we are not within uncompression data yet. Output the header.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on 
uncompressed data
+
+currentInputLen = userBufLen;
+userBufOff += userBufLen;
+userBufLen = 0;
+}
+
+
+// now compress it into b[]
+int deflated = deflater.deflate(b, off, len - 8, 
Deflater.FULL_FLUSH);

Review comment:
   Yea, I think you also need to count the header size, so it could happen 
even when the buffer size >= 8.
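   
   The arithmetic behind that, as a small self-contained sketch (names are
   ours): with up to 10 header bytes still to emit and 8 trailer bytes
   reserved, a call only has room to deflate when `len` exceeds both.
   
       public class GzipRoomMath {
           // gzip trailer: CRC-32 (4 bytes) + ISIZE (4 bytes), little-endian
           private static final int TRAILER_SIZE = 8;
   
           static int roomForDeflate(int len, int headerBytesLeft) {
               int afterHeader = len - Math.min(len, headerBytesLeft);
               return Math.max(0, afterHeader - TRAILER_SIZE);
           }
   
           public static void main(String[] args) {
               System.out.println(roomForDeflate(16, 10)); // 0: no room, even though 16 >= 8
               System.out.println(roomForDeflate(64, 10)); // 46 bytes left for deflate
           }
       }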

[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682171003



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we are not within uncompression data yet. Output the header.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on 
uncompressed data
+
+currentInputLen = userBufLen;
+userBufOff += userBufLen;
+userBufLen = 0;
+}
+
+
+// now compress it into b[]
+int deflated = deflater.deflate(b, off, len - 8, 
Deflater.FULL_FLUSH);

Review comment:
   I updated the way to output the trailer. It incrementally outputs the 
trailer like the header now.
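   
   A hedged sketch of what "incremental like the header" can look like (field
   and method names here are ours, not necessarily the patch's): build the
   8-byte trailer once, then copy out as much as each call's buffer allows,
   resuming from a saved offset.
   
       import java.nio.ByteBuffer;
       import java.nio.ByteOrder;
   
       public class TrailerSketch {
           private byte[] trailer;
           private int trailerOff = 0;
   
           void prepareTrailer(long crcValue, long uncompressedSize) {
               ByteBuffer buf = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
               buf.putInt((int) crcValue);          // CRC-32 of the uncompressed data
               buf.putInt((int) uncompressedSize);  // ISIZE = input size mod 2^32
               trailer = buf.array();
           }
   
           int writeTrailer(byte[] b, int off, int len) {
               int n = Math.min(len, trailer.length - trailerOff);
               System.arraycopy(trailer, trailerOff, b, off, n);
               trailerOff += n;
               return n;  // done once trailerOff == trailer.length
           }
       }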

[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682186231



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we are not within uncompression data yet. Output the header.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note 
that
+// Deflater may not have consumed all of previous bufferload, in 
which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on 
uncompressed data
+
+currentInputLen = userBufLen;
+userBufOff += userBufLen;
+userBufLen = 0;
+}
+
+
+// now compress it into b[]
+int deflated = deflater.deflate(b, off, len - 8, 
Deflater.FULL_FLUSH);
+
+numAvailBytes += deflated;
+off += deflated;
+len -= deflated;
+
+// A

[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682204599



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 
};
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we are not within uncompression data yet. Output the header.

Review comment:
   fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
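
For reference, the ten bytes in the GZIP_HEADER constant quoted above are the
fixed gzip member header from RFC 1952: the magic bytes 0x1f 0x8b, compression
method 8 (deflate), and zeroed FLG, MTIME, XFL, and OS fields. The matching
member trailer is eight bytes: the CRC-32 of the uncompressed data followed by
the uncompressed length modulo 2^32, both little-endian. A minimal sketch of
emitting one trailer word (writeTrailerWord is a hypothetical helper for
illustration, not part of the patch):

    // Writes one 4-byte little-endian trailer word (RFC 1952): the gzip
    // trailer is the CRC-32 of the uncompressed data, then ISIZE
    // (uncompressed length mod 2^32), each written this way.
    private static int writeTrailerWord(byte[] b, int off, long value) {
      b[off]     = (byte) value;
      b[off + 1] = (byte) (value >> 8);
      b[off + 2] = (byte) (value >> 16);
      b[off + 3] = (byte) (value >> 24);
      return 4;
    }

    // Usage once the deflate body has been fully drained:
    //   off += writeTrailerWord(b, off, crc.getValue());
    //   off += writeTrailerWord(b, off, totalBytesRead);  // hypothetical counter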



[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-03 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682204895



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we are not within uncompression data yet. Output the header.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note that
+// Deflater may not have consumed all of previous bufferload, in which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on uncompressed data
+
+currentInputLen = userBufLen;
+userBufOff += userBufLen;
+userBufLen = 0;
+}
+
+
+// now compress it into b[]
+int deflated = deflater.deflate(b, off, len - 8, Deflater.FULL_FLUSH);
+
+numAvailBytes += deflated;
+off += deflated;
+len -= deflated;
+
+// A
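
For context on the `new Deflater(level.compressionLevel(), true)` call in the
quoted diff: nowrap=true makes java.util.zip.Deflater produce a raw deflate
stream, with no zlib header and no Adler-32 checksum. That raw stream is
exactly what belongs between the gzip header and trailer, so the compressor
has to write both itself. A self-contained sketch of the same framing idea
(a one-shot helper under the assumption of small inputs, not the patch's
incremental implementation):

    import java.io.ByteArrayOutputStream;
    import java.util.zip.CRC32;
    import java.util.zip.Deflater;

    static byte[] gzipOneShot(byte[] input) {
      Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // raw deflate
      CRC32 crc = new CRC32();
      crc.update(input, 0, input.length);
      deflater.setInput(input);
      deflater.finish();

      ByteArrayOutputStream out = new ByteArrayOutputStream();
      // Fixed ten-byte gzip header, as in the patch's GZIP_HEADER constant.
      out.write(new byte[] {0x1f, (byte) 0x8b, 0x08, 0, 0, 0, 0, 0, 0, 0}, 0, 10);

      // Drain the raw deflate body.
      byte[] buf = new byte[4096];
      while (!deflater.finished()) {
        out.write(buf, 0, deflater.deflate(buf));
      }
      deflater.end();

      // Trailer: CRC-32 of the input, then ISIZE, both 4 bytes little-endian.
      for (long v : new long[] {crc.getValue(), input.length & 0xffffffffL}) {
        for (int i = 0; i < 4; i++) {
          out.write((int) (v >> (8 * i)) & 0xff);
        }
      }
      return out.toByteArray();
    }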

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=633245&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633245
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 04/Aug/21 00:47
Start Date: 04/Aug/21 00:47
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682204895



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we are not within uncompression data yet. Output the header.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+if (userBufLen <= 0) {
+return numAvailBytes;
+}
+
+int outputHeaderSize = writeHeader(b, off, len);
+
+// Completes header output.
+if (headerOff == 10) {
+state = BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM;
+}
+
+numAvailBytes += outputHeaderSize;
+
+if (outputHeaderSize == len) {
+return numAvailBytes;
+}
+
+off += outputHeaderSize;
+len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+// hand off user data (or what's left of it) to Deflater--but note that
+// Deflater may not have consumed all of previous bufferload, in which case
+// userBufLen will be zero
+if (userBufLen > 0) {
+deflater.setInput(userBuf, userBufOff, userBufLen);
+
+crc.update(userBuf, userBufOff, userBufLen);  // CRC-32 is on uncompressed data
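
For context on the needsInput()/finished() overrides in the quoted diff:
Hadoop's compression streams drive a Compressor with a fixed calling pattern,
so the gzip header, the deflate body, and the trailer all have to come out of
compress(). A rough sketch of that calling pattern (simplified from what a
stream like org.apache.hadoop.io.compress.CompressorStream does; the helper
name and buffer size are illustrative):

    import java.io.IOException;
    import java.io.OutputStream;
    import org.apache.hadoop.io.compress.Compressor;

    // Feeds one buffer through a Compressor and drains everything it
    // produces, including whatever framing the compressor adds.
    static void writeCompressed(Compressor compressor, byte[] data, OutputStream out)
        throws IOException {
      byte[] buf = new byte[64 * 1024];
      compressor.setInput(data, 0, data.length);
      // Drain output until the compressor asks for more input.
      while (!compressor.needsInput()) {
        int n = compressor.compress(buf, 0, buf.length);
        out.write(buf, 0, n);
      }
      // Signal end of input, then drain the remaining output (for gzip,
      // this is where the trailer bytes must still come out).
      compressor.finish();
      while (!compressor.finished()) {
        int n = compressor.compress(buf, 0, buf.length);
        out.write(buf, 0, n);
      }
    }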

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=633244&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633244
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 04/Aug/21 00:47
Start Date: 04/Aug/21 00:47
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r682204599



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+/**
+ * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+ * details.
+ */
+private static final byte[] GZIP_HEADER = new byte[] {
+0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
+
+private Deflater deflater;
+
+private int headerOff = 0;
+
+private byte[] userBuf = null;
+private int userBufOff = 0;
+private int userBufLen = 0;
+
+private int headerBytesWritten = 0;
+private int trailerBytesWritten = 0;
+
+private int currentInputLen = 0;
+
+private Checksum crc = DataChecksum.newCrc32();
+
+private BuiltInGzipDecompressor.GzipStateLabel state;
+
+public BuiltInGzipCompressor(Configuration conf) {
+ZlibCompressor.CompressionLevel level = ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = ZlibFactory.getCompressionStrategy(conf);
+
+// 'true' (nowrap) => Deflater will handle raw deflate stream only
+deflater = new Deflater(level.compressionLevel(), true);
+deflater.setStrategy(strategy.compressionStrategy());
+
+state = BuiltInGzipDecompressor.GzipStateLabel.HEADER_BASIC;
+crc.reset();
+}
+
+@Override
+public boolean finished() {
+return deflater.finished();
+}
+
+@Override
+public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+return deflater.needsInput();
+}
+
+return (state != BuiltInGzipDecompressor.GzipStateLabel.FINISHED);
+}
+
+@Override
+public int compress(byte[] b, int off, int len) throws IOException {
+int numAvailBytes = 0;
+
+// If we are not within uncompression data yet. Output the header.

Review comment:
   fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633244)
Time Spent: 5h  (was: 4h 50m)

> Add BuiltInGzipCompressor
> -
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Currently, GzipCodec only supports BuiltInGzipDecompressor, if native zlib is 
> not loaded. So, without Hadoop native codec installed, saving SequenceFile 
> using GzipCodec will throw exception like "SequenceFile doesn't work with GzipCodec without native-hadoop code!"
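
The description above is the motivation for the new compressor: without native
zlib, GzipCodec previously had no Compressor at all. A cheap correctness check
for a pure-Java gzip stream is a round trip through the JDK's own reader; any
output that is framed correctly must decompress cleanly. A minimal sketch of
such a check (illustration only, not part of the patch):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPInputStream;

    // Decompresses a gzip byte stream with the stock JDK reader; used here
    // as a round-trip check on output from the pure-Java compressor.
    static byte[] gunzip(byte[] gz) throws IOException {
      try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(gz))) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
          out.write(buf, 0, n);
        }
        return out.toByteArray();
      }
    }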

[GitHub] [hadoop] hadoop-yetus commented on pull request #3238: HADOOP-17816. Run optional CI for changes in C

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3238:
URL: https://github.com/apache/hadoop/pull/3238#issuecomment-892275211


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  27m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m  2s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 30s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 118m 54s |  |  hadoop-hdfs-native-client in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 204m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3238 |
   | Optional Tests | dupname asflicense mvnsite unit codespell shellcheck shelldocs compile cc javac golang |
   | uname | Linux 1449f3556c54 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 912909ec5fa0cd77bcb3e5b285117e7c40d9f9ae |
   | Default Java | Red Hat, Inc.-1.8.0_302-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/testReport/ |
   | Max. process+thread count | 638 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/console |
   | versions | git=2.27.0 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17816) Run optional CI for changes in C

2021-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17816?focusedWorklogId=633248&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633248
 ]

ASF GitHub Bot logged work on HADOOP-17816:
---

Author: ASF GitHub Bot
Created on: 04/Aug/21 00:55
Start Date: 04/Aug/21 00:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3238:
URL: https://github.com/apache/hadoop/pull/3238#issuecomment-892275211


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  27m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m  2s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 30s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 118m 54s |  |  hadoop-hdfs-native-client in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 204m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3238 |
   | Optional Tests | dupname asflicense mvnsite unit codespell shellcheck shelldocs compile cc javac golang |
   | uname | Linux 1449f3556c54 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 912909ec5fa0cd77bcb3e5b285117e7c40d9f9ae |
   | Default Java | Red Hat, Inc.-1.8.0_302-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/testReport/ |
   | Max. process+thread count | 638 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/console |
   | versions | git=2.27.0 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 633248)
Time Spent: 1h 20m  (was: 1h 10m)

> Run optional CI for changes in C
> 
>
> Key: HADOOP-17816
> URL: https://issues.apache.org/jira/browse/HADOOP-17816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We need to ensure that we run the CI for all the platforms when there are 
> changes in C files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[GitHub] [hadoop] hadoop-yetus commented on pull request #3238: HADOOP-17816. Run optional CI for changes in C

2021-08-03 Thread GitBox


hadoop-yetus commented on pull request #3238:
URL: https://github.com/apache/hadoop/pull/3238#issuecomment-892279604


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |  10m  0s |  |  Docker failed to build yetus/hadoop:ef5dbc7283a.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/3238 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3238/4/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


