[jira] [Resolved] (HADOOP-17840) Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-17840.
---
Fix Version/s: 3.2.3
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to branch-3.2.3... [~bbeaudreault], thanks for your contribution.

> Backport HADOOP-17837 to branch-3.2
> ---
>
> Key: HADOOP-17840
> URL: https://issues.apache.org/jira/browse/HADOOP-17840
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.2.3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org



[jira] [Work logged] (HADOOP-17840) Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17840?focusedWorklogId=635543&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635543
 ]

ASF GitHub Bot logged work on HADOOP-17840:
---

Author: ASF GitHub Bot
Created on: 07/Aug/21 04:19
Start Date: 07/Aug/21 04:19
Worklog Time Spent: 10m 
  Work Description: brahmareddybattula merged pull request #3275:
URL: https://github.com/apache/hadoop/pull/3275


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635543)
Time Spent: 50m  (was: 40m)

> Backport HADOOP-17837 to branch-3.2
> ---
>
> Key: HADOOP-17840
> URL: https://issues.apache.org/jira/browse/HADOOP-17840
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3271: HDFS-16155: Allow configurable exponential backoff in DFSInputStream refetchLocations

2021-08-06 Thread GitBox


hadoop-yetus commented on pull request #3271:
URL: https://github.com/apache/hadoop/pull/3271#issuecomment-894599908


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |  17m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 40s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 16s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 14s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 337m 15s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3271/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 476m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3271/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3271 |
   | JIRA Issue | HDFS-16155 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 686f6ed0e729 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 400b7995fea10fb926c825e529c6b23142c4619a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3271/4/testReport/ |
   | Max. process+thread count | 1900 (vs. ulimit of 5500) |
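
For context on the change itself: HDFS-16155 makes the retry delay used by DFSInputStream#refetchLocations configurable, growing exponentially between attempts. A minimal standalone sketch of that pattern follows; the method and parameter names here are hypothetical, not the actual HDFS code.

import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

public class BackoffSketch {
  // Retry a fetch with exponential backoff plus jitter, capped at maxDelayMs.
  // All names are illustrative; only the pattern mirrors the PR title.
  static <T> T retryWithBackoff(Supplier<T> fetchOnce, int maxRetries,
                                long baseDelayMs, long maxDelayMs)
      throws InterruptedException {
    long delayMs = baseDelayMs;
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
      T result = fetchOnce.get();
      if (result != null) {
        return result;  // success, no more retries
      }
      // Sleep the current delay plus random jitter, then double up to the cap.
      Thread.sleep(delayMs + ThreadLocalRandom.current().nextLong(delayMs + 1));
      delayMs = Math.min(delayMs * 2, maxDelayMs);
    }
    return null;  // retries exhausted
  }
}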

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=635540&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635540
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 07/Aug/21 02:54
Start Date: 07/Aug/21 02:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-894594938


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 36s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 179m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/21/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3250 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 1a509ee1f77e 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e0bec4a7b2a5df7a1148e8b1023893dd6ff50ec6 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/21/testReport/ |
   | Max. process+thread count | 3150 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/21/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   

[jira] [Work logged] (HADOOP-17835) Use CuratorCache implementation instead of PathChildrenCache / TreeCache

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17835?focusedWorklogId=635539&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635539
 ]

ASF GitHub Bot logged work on HADOOP-17835:
---

Author: ASF GitHub Bot
Created on: 07/Aug/21 02:20
Start Date: 07/Aug/21 02:20
Worklog Time Spent: 10m 
  Work Description: aajisaka merged pull request #3266:
URL: https://github.com/apache/hadoop/pull/3266


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635539)
Time Spent: 3h 20m  (was: 3h 10m)

> Use CuratorCache implementation instead of PathChildrenCache / TreeCache
> 
>
> Key: HADOOP-17835
> URL: https://issues.apache.org/jira/browse/HADOOP-17835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> As we have moved to Curator 5.2.0 for Hadoop 3.4.0, we should start using the
> new CuratorCache service implementation in place of the deprecated
> PathChildrenCache and TreeCache use cases.
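
A minimal sketch of what this migration looks like, assuming Curator 5.x on the classpath; the listener wiring below is illustrative, not the exact Hadoop patch:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.cache.CuratorCache;
import org.apache.curator.framework.recipes.cache.CuratorCacheListener;

public class CuratorCacheSketch {
  // One CuratorCache replaces the deprecated PathChildrenCache/TreeCache pair:
  // a single cache type with a builder-style listener for create/change/delete.
  static CuratorCache watch(CuratorFramework client, String path) {
    CuratorCache cache = CuratorCache.build(client, path);
    cache.listenable().addListener(
        CuratorCacheListener.builder()
            .forCreates(node -> System.out.println("created: " + node.getPath()))
            .forChanges((oldNode, node) -> System.out.println("changed: " + node.getPath()))
            .forDeletes(node -> System.out.println("deleted: " + node.getPath()))
            .build());
    cache.start();  // unlike PathChildrenCache.start(), no StartMode argument
    return cache;
  }
}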



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org



[jira] [Updated] (HADOOP-17835) Use CuratorCache implementation instead of PathChildrenCache / TreeCache

2021-08-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17835:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk.

> Use CuratorCache implementation instead of PathChildrenCache / TreeCache
> 
>
> Key: HADOOP-17835
> URL: https://issues.apache.org/jira/browse/HADOOP-17835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> As we have moved to Curator 5.2.0 for Hadoop 3.4.0, we should start using the
> new CuratorCache service implementation in place of the deprecated
> PathChildrenCache and TreeCache use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org



[jira] [Work logged] (HADOOP-17835) Use CuratorCache implementation instead of PathChildrenCache / TreeCache

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17835?focusedWorklogId=635537&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635537
 ]

ASF GitHub Bot logged work on HADOOP-17835:
---

Author: ASF GitHub Bot
Created on: 07/Aug/21 02:20
Start Date: 07/Aug/21 02:20
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #3266:
URL: https://github.com/apache/hadoop/pull/3266#issuecomment-894591866


   Thank you @virajjasani for your contribution and thanks @eolivelli and
@Randgalt for your review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635537)
Time Spent: 3h 10m  (was: 3h)

> Use CuratorCache implementation instead of PathChildrenCache / TreeCache
> 
>
> Key: HADOOP-17835
> URL: https://issues.apache.org/jira/browse/HADOOP-17835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> As we have moved to Curator 5.2.0 for Hadoop 3.4.0, we should start using the
> new CuratorCache service implementation in place of the deprecated
> PathChildrenCache and TreeCache use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3271: HDFS-16155: Allow configurable exponential backoff in DFSInputStream refetchLocations

2021-08-06 Thread GitBox


hadoop-yetus commented on pull request #3271:
URL: https://github.com/apache/hadoop/pull/3271#issuecomment-894586818


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 48s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   4m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 37s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 21s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 232m  8s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 346m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3271/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3271 |
   | JIRA Issue | HDFS-16155 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 35d96c3aa1d5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 400b7995fea10fb926c825e529c6b23142c4619a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3271/3/testReport/ |
   | Max. process+thread count | 2825 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3271/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=635513&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635513
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 23:34
Start Date: 06/Aug/21 23:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-894566377


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 20s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 180m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3250 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 5f58db499a75 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 939b349d899203203998e0ed6ba8125366ec0a5a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/20/testReport/ |
   | Max. process+thread count | 1262 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/20/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=635477&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635477
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 20:22
Start Date: 06/Aug/21 20:22
Worklog Time Spent: 10m 
  Work Description: viirya commented on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-894499414


   > @viirya there are a few style issues in 
[here](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/19/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 - could you fix them?
   
   Oh, ok. Let me fix them.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635477)
Time Spent: 16h  (was: 15h 50m)

> Add BuiltInGzipCompressor
> -
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 16h
>  Remaining Estimate: 0h
>
> Currently, GzipCodec only supports BuiltInGzipDecompressor if native zlib is
> not loaded. So, without the Hadoop native codec installed, saving a SequenceFile
> using GzipCodec will throw an exception like "SequenceFile doesn't work with
> GzipCodec without native-hadoop code!"
> As with the other codecs that we migrated to prepared packages (lz4,
> snappy), it would be better to support GzipCodec generally without the Hadoop
> native codec installed. Similar to BuiltInGzipDecompressor, we can use the Java
> Deflater to support a BuiltInGzipCompressor.
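
The idea, as a standalone sketch rather than the patch itself: a gzip member is a fixed ten-byte header, a raw DEFLATE stream produced by java.util.zip.Deflater with nowrap=true, and an eight-byte little-endian trailer carrying the CRC-32 and the input size.

import java.io.ByteArrayOutputStream;
import java.util.zip.CRC32;
import java.util.zip.Deflater;

public class GzipSketch {
  // Compress input into a single gzip member: header + raw deflate + trailer.
  public static byte[] gzip(byte[] input) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // Fixed 10-byte header: magic 0x1f 0x8b, CM=8 (deflate), no flags/mtime/XFL/OS.
    out.write(new byte[]{0x1f, (byte) 0x8b, 8, 0, 0, 0, 0, 0, 0, 0}, 0, 10);
    Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // nowrap: no zlib wrapper
    deflater.setInput(input);
    deflater.finish();
    byte[] buf = new byte[4096];
    while (!deflater.finished()) {
      out.write(buf, 0, deflater.deflate(buf));
    }
    deflater.end();
    CRC32 crc = new CRC32();
    crc.update(input, 0, input.length);
    writeLE(out, (int) crc.getValue()); // trailer: CRC-32 of the uncompressed data
    writeLE(out, input.length);         // trailer: input size mod 2^32
    return out.toByteArray();
  }

  private static void writeLE(ByteArrayOutputStream out, int v) {
    for (int i = 0; i < 4; i++) {
      out.write((v >>> (8 * i)) & 0xff);
    }
  }
}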



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org



[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=635476&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635476
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 20:22
Start Date: 06/Aug/21 20:22
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r684484896



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##########
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+  /**
+   * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+   * details.
+   */
+  private static final byte[] GZIP_HEADER = new byte[]{
+      0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+  // The trailer will be overwritten based on crc and output size.
+  private static final byte[] GZIP_TRAILER = new byte[]{
+      0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+  private static final int GZIP_HEADER_LEN = GZIP_HEADER.length;
+  private static final int GZIP_TRAILER_LEN = GZIP_TRAILER.length;
+
+  private Deflater deflater;
+
+  private int headerOff = 0;
+  private int trailerOff = 0;
+
+  private int numExtraBytesWritten = 0;
+
+  private int currentBufLen = 0;
+
+  private final Checksum crc = DataChecksum.newCrc32();
+
+  private BuiltInGzipDecompressor.GzipStateLabel state;
+
+  public BuiltInGzipCompressor(Configuration conf) { init(conf); }
+
+  @Override
+  public boolean finished() {
+    // Only if the trailer is also written, it is thought as finished.
+    return deflater.finished() && state == BuiltInGzipDecompressor.GzipStateLabel.FINISHED;
+  }
+
+  @Override
+  public boolean needsInput() {
+    if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+      return deflater.needsInput();
+    }
+
+    return false;
+  }
+
+  @Override
+  public int compress(byte[] b, int off, int len) throws IOException {
+    int compressedBytesWritten = 0;
+
+    if (currentBufLen <= 0) {
+      return compressedBytesWritten;
+    }
+
+    // If we are not within uncompressed data yet, output the header.
+    if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM &&
+        state != BuiltInGzipDecompressor.GzipStateLabel.TRAILER_CRC) {
+      int outputHeaderSize = writeHeader(b, off, len);
+      numExtraBytesWritten += outputHeaderSize;
+
+      compressedBytesWritten += outputHeaderSize;
+
+      if (outputHeaderSize == len) {
+        return compressedBytesWritten;
+      }
+
+      off += outputHeaderSize;
+      len -= outputHeaderSize;
+    }
+
+    if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+      // now compress it into b[]
+      int deflated = deflater.deflate(b, off, len);
+
+      compressedBytesWritten += deflated;
+      off += deflated;
+      len -= deflated;
+
+      // All current input are processed. Going to output trailer.
+      if (deflater.finished()) {
+        state = BuiltInGzipDecompressor.GzipStateLabel.TRAILER_CRC;
+        fillTrailer();
+      } else {
+        return compressedBytesWritten;
+      }
+    }
+
+    int outputTrailerSize = writeTrailer(b, off, len);
+    numExtraBytesWritten += outputTrailerSize;
+
+    compressedBytesWritten += outputTrailerSize;
+
+    return compressedBytesWritten;
+  }
+
+  @Override
+  public long getBytesRead() {
+    return deflater.getTotalIn();
+  }
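
A Compressor implementation like the one above is driven by a caller loop along these lines; this is a hedged sketch of the CompressorStream side of the contract, not code from the patch:

import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.io.compress.Compressor;

public class CompressorDriverSketch {
  // Feed one buffer of input, signal end-of-input, then drain compress()
  // until finished(); the Compressor emits header, deflate output, trailer.
  static void writeCompressed(Compressor compressor, byte[] data, OutputStream out)
      throws IOException {
    byte[] buffer = new byte[64 * 1024];
    compressor.setInput(data, 0, data.length);
    compressor.finish();
    while (!compressor.finished()) {
      int n = compressor.compress(buffer, 0, buffer.length);
      out.write(buffer, 0, n);
    }
  }
}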

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=635474&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635474
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 20:18
Start Date: 06/Aug/21 20:18
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r684483080



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+  /**
+   * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+   * details.
+   */
+  private static final byte[] GZIP_HEADER = new byte[]{
+  0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+  // The trailer will be overwritten based on crc and output size.
+  private static final byte[] GZIP_TRAILER = new byte[]{0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00};
+
+  private static final int GZIP_HEADER_LEN = GZIP_HEADER.length;
+  private static final int GZIP_TRAILER_LEN = GZIP_TRAILER.length;
+
+  private Deflater deflater;
+
+  private int headerOff = 0;
+  private int trailerOff = 0;
+
+  private int numExtraBytesWritten = 0;
+
+  private int currentBufLen = 0;
+
+  private final Checksum crc = DataChecksum.newCrc32();
+
+  private BuiltInGzipDecompressor.GzipStateLabel state;
+
+  public BuiltInGzipCompressor(Configuration conf) { init(conf); }
+
+  @Override
+  public boolean finished() {
+// Only if the trailer is also written, it is thought as finished.
+return deflater.finished() && state == 
BuiltInGzipDecompressor.GzipStateLabel.FINISHED;
+  }
+
+  @Override
+  public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+  return deflater.needsInput();
+}
+
+return false;
+  }
+
+  @Override
+  public int compress(byte[] b, int off, int len) throws IOException {
+int compressedBytesWritten = 0;
+
+if (currentBufLen <= 0) {
+  return compressedBytesWritten;
+}
+
+// If we are not within uncompressed data yet, output the header.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM &&
+state != BuiltInGzipDecompressor.GzipStateLabel.TRAILER_CRC) {
+  int outputHeaderSize = writeHeader(b, off, len);
+  numExtraBytesWritten += outputHeaderSize;
+
+  compressedBytesWritten += outputHeaderSize;
+
+  if (outputHeaderSize == len) {
+return compressedBytesWritten;
+  }
+
+  off += outputHeaderSize;
+  len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+  // now compress it into b[]
+  int deflated = deflater.deflate(b, off, len);
+
+  compressedBytesWritten += deflated;
+  off += deflated;
+  len -= deflated;
+
+  // All current input has been processed; move on to writing the trailer.
+  if (deflater.finished()) {
+state = BuiltInGzipDecompressor.GzipStateLabel.TRAILER_CRC;
+fillTrailer();
+  } else {
+return compressedBytesWritten;
+  }
+}
+
+int outputTrailerSize = writeTrailer(b, off, len);
+numExtraBytesWritten += outputTrailerSize;
+
+compressedBytesWritten += outputTrailerSize;
+
+return compressedBytesWritten;
+  }
+
+  @Override
+  public long getBytesRead() {
+return deflater.getTotalIn();
+  }
+
+  

[GitHub] [hadoop] viirya commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-06 Thread GitBox


viirya commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r684483080



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+  /**
+   * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+   * details.
+   */
+  private static final byte[] GZIP_HEADER = new byte[]{
+  0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+  // The trailer will be overwritten based on crc and output size.
+  private static final byte[] GZIP_TRAILER = new byte[]{0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00};
+
+  private static final int GZIP_HEADER_LEN = GZIP_HEADER.length;
+  private static final int GZIP_TRAILER_LEN = GZIP_TRAILER.length;
+
+  private Deflater deflater;
+
+  private int headerOff = 0;
+  private int trailerOff = 0;
+
+  private int numExtraBytesWritten = 0;
+
+  private int currentBufLen = 0;
+
+  private final Checksum crc = DataChecksum.newCrc32();
+
+  private BuiltInGzipDecompressor.GzipStateLabel state;
+
+  public BuiltInGzipCompressor(Configuration conf) {
+    init(conf);
+  }
+
+  @Override
+  public boolean finished() {
+// Only once the trailer has also been written is the stream considered finished.
+return deflater.finished() && state == 
BuiltInGzipDecompressor.GzipStateLabel.FINISHED;
+  }
+
+  @Override
+  public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+  return deflater.needsInput();
+}
+
+return false;
+  }
+
+  @Override
+  public int compress(byte[] b, int off, int len) throws IOException {
+int compressedBytesWritten = 0;
+
+if (currentBufLen <= 0) {
+  return compressedBytesWritten;
+}
+
+// If we have not entered the deflate stream yet, write the gzip header first.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM &&
+state != BuiltInGzipDecompressor.GzipStateLabel.TRAILER_CRC) {
+  int outputHeaderSize = writeHeader(b, off, len);
+  numExtraBytesWritten += outputHeaderSize;
+
+  compressedBytesWritten += outputHeaderSize;
+
+  if (outputHeaderSize == len) {
+return compressedBytesWritten;
+  }
+
+  off += outputHeaderSize;
+  len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+  // now compress it into b[]
+  int deflated = deflater.deflate(b, off, len);
+
+  compressedBytesWritten += deflated;
+  off += deflated;
+  len -= deflated;
+
+  // All current input has been processed; move on to writing the trailer.
+  if (deflater.finished()) {
+state = BuiltInGzipDecompressor.GzipStateLabel.TRAILER_CRC;
+fillTrailer();
+  } else {
+return compressedBytesWritten;
+  }
+}
+
+int outputTrailerSize = writeTrailer(b, off, len);
+numExtraBytesWritten += outputTrailerSize;
+
+compressedBytesWritten += outputTrailerSize;
+
+return compressedBytesWritten;
+  }
+
+  @Override
+  public long getBytesRead() {
+return deflater.getTotalIn();
+  }
+
+  @Override
+  public long getBytesWritten() {
+return numExtraBytesWritten + deflater.getTotalOut();
+  }
+
+  @Override
+  public void end() {
+deflater.end();
+  }
+
+  @Override
+  public void finish() {
+deflater.finish();
+  }
+
+  private void init(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
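
(The quoted diff is cut off here by the mail archive.) For anyone following the review without the PR open, here is a minimal sketch of how a Hadoop Compressor implementation like this one is typically driven. The buffer size, the single setInput() call, and the class name CompressorDriverSketch are illustrative assumptions, not part of the patch:

import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.zlib.BuiltInGzipCompressor;

public class CompressorDriverSketch {
  // Compress one whole buffer via the Compressor contract:
  // setInput -> finish -> drain compress() until finished() -> end.
  public static byte[] gzip(byte[] input) throws IOException {
    Compressor compressor = new BuiltInGzipCompressor(new Configuration());
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[64 * 1024];
    compressor.setInput(input, 0, input.length);
    compressor.finish();  // signal that no more input will arrive
    while (!compressor.finished()) {
      // compress() may legally return 0 bytes while the internal state
      // machine advances from header to deflate stream to trailer.
      int n = compressor.compress(buffer, 0, buffer.length);
      out.write(buffer, 0, n);
    }
    compressor.end();  // release the underlying Deflater
    return out.toByteArray();
  }
}

In the codec framework this loop is run by CompressorStream on behalf of SequenceFile, which is why compress() in the patch has to emit the gzip header and trailer bytes itself rather than relying on a wrapping stream.
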

[jira] [Work logged] (HADOOP-17837) Make it easier to debug UnknownHostExceptions from NetUtils.connect

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17837?focusedWorklogId=635459=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635459
 ]

ASF GitHub Bot logged work on HADOOP-17837:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 19:20
Start Date: 06/Aug/21 19:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3276:
URL: https://github.com/apache/hadoop/pull/3276#issuecomment-894469467


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 12s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 179m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3276/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux c2c6c97e1c21 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a3a87e46e3f941724656b6e79b599c3965244aa5 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3276/1/testReport/ |
   | Max. process+thread count | 1267 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3276/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #3276: HADOOP-17837: Add unresolved endpoint value to UnknownHostException (ADDENDUM)

2021-08-06 Thread GitBox


hadoop-yetus commented on pull request #3276:
URL: https://github.com/apache/hadoop/pull/3276#issuecomment-894469467


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 12s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 179m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3276/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux c2c6c97e1c21 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a3a87e46e3f941724656b6e79b599c3965244aa5 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3276/1/testReport/ |
   | Max. process+thread count | 1267 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3276/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
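
For context on the change this addendum extends: per the PR title, HADOOP-17837 adds the unresolved endpoint value to the UnknownHostException raised from NetUtils.connect. A hypothetical sketch of that pattern follows; the helper name and message text are illustrative assumptions, not the actual NetUtils code:

import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public final class EndpointErrorSketch {
  private EndpointErrorSketch() {
  }

  // Fail with the offending host:port in the message, instead of letting a
  // bare UnknownHostException surface with no endpoint information at all.
  public static void checkResolved(InetSocketAddress endpoint)
      throws UnknownHostException {
    if (endpoint.isUnresolved()) {
      throw new UnknownHostException("Cannot resolve endpoint: " + endpoint);
    }
  }
}
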




[jira] [Work logged] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?focusedWorklogId=635456=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635456
 ]

ASF GitHub Bot logged work on HADOOP-17370:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 19:02
Start Date: 06/Aug/21 19:02
Worklog Time Spent: 10m 
  Work Description: dongjoon-hyun commented on pull request #3274:
URL: https://github.com/apache/hadoop/pull/3274#issuecomment-894459328


   +1 from my side, although I didn't check the CI result here.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635456)
Time Spent: 3h 20m  (was: 3h 10m)

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dongjoon-hyun commented on pull request #3274: HADOOP-17370. Upgrade commons-compress to 1.21

2021-08-06 Thread GitBox


dongjoon-hyun commented on pull request #3274:
URL: https://github.com/apache/hadoop/pull/3274#issuecomment-894459328


   +1 from my side, although I didn't check the CI result here.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?focusedWorklogId=635455=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635455
 ]

ASF GitHub Bot logged work on HADOOP-17370:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 19:01
Start Date: 06/Aug/21 19:01
Worklog Time Spent: 10m 
  Work Description: dongjoon-hyun commented on pull request #3274:
URL: https://github.com/apache/hadoop/pull/3274#issuecomment-894458890


   Thank you for picking this up, @aajisaka. :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635455)
Time Spent: 3h 10m  (was: 3h)

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dongjoon-hyun commented on pull request #3274: HADOOP-17370. Upgrade commons-compress to 1.21

2021-08-06 Thread GitBox


dongjoon-hyun commented on pull request #3274:
URL: https://github.com/apache/hadoop/pull/3274#issuecomment-894458890


   Thank you for picking this up, @aajisaka. :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-08-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394951#comment-17394951
 ] 

Íñigo Goiri commented on HADOOP-17787:
--

Thanks [~gautham] for the contribution.
The latest PR works fine.
Merged PR 3167 to trunk.

> Refactor fetching of credentials in Jenkins
> ---
>
> Key: HADOOP-17787
> URL: https://issues.apache.org/jira/browse/HADOOP-17787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: image-2021-07-03-10-47-02-330.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Need to refactor fetching of credentials in Jenkinsfile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-08-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17787.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Refactor fetching of credentials in Jenkins
> ---
>
> Key: HADOOP-17787
> URL: https://issues.apache.org/jira/browse/HADOOP-17787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: image-2021-07-03-10-47-02-330.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Need to refactor fetching of credentials in Jenkinsfile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17787?focusedWorklogId=635449=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635449
 ]

ASF GitHub Bot logged work on HADOOP-17787:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 18:56
Start Date: 06/Aug/21 18:56
Worklog Time Spent: 10m 
  Work Description: goiri merged pull request #3167:
URL: https://github.com/apache/hadoop/pull/3167


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635449)
Time Spent: 1h 40m  (was: 1.5h)

> Refactor fetching of credentials in Jenkins
> ---
>
> Key: HADOOP-17787
> URL: https://issues.apache.org/jira/browse/HADOOP-17787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-07-03-10-47-02-330.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Need to refactor fetching of credentials in Jenkinsfile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri merged pull request #3167: HADOOP-17787. Refactor fetching of credentials in Jenkins

2021-08-06 Thread GitBox


goiri merged pull request #3167:
URL: https://github.com/apache/hadoop/pull/3167


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=635434=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635434
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 18:24
Start Date: 06/Aug/21 18:24
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-894439227


   @viirya there are a few style issues in 
[here](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/19/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 - could you fix them?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635434)
Time Spent: 15.5h  (was: 15h 20m)

> Add BuiltInGzipCompressor
> -
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 15.5h
>  Remaining Estimate: 0h
>
> Currently, if native zlib is not loaded, GzipCodec only supports 
> BuiltInGzipDecompressor. So, without the Hadoop native codec installed, saving 
> a SequenceFile using GzipCodec throws an exception like "SequenceFile doesn't 
> work with GzipCodec without native-hadoop code!"
> As with the other codecs we migrated to prepared packages (lz4, snappy), it 
> would be better to support GzipCodec generally without the Hadoop native codec 
> installed. Similar to BuiltInGzipDecompressor, we can use Java's Deflater to 
> implement a BuiltInGzipCompressor.
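
To make the quoted idea concrete: a gzip member is a fixed header, a raw deflate stream, and an eight-byte trailer, and java.util.zip can produce all three without native code. Below is a self-contained sketch of that framing per RFC 1952; it is illustrative only (not the patch), and the buffer size and ByteBuffer-based trailer encoding are arbitrary choices:

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.zip.CRC32;
import java.util.zip.Deflater;

public class GzipFramingSketch {
  // Fixed ten-byte gzip header: magic 0x1f 0x8b, CM=8 (deflate),
  // no flags, zero mtime, zero XFL and OS fields.
  private static final byte[] HEADER = {
      0x1f, (byte) 0x8b, 0x08, 0, 0, 0, 0, 0, 0, 0};

  public static byte[] gzip(byte[] input) {
    // nowrap=true makes Deflater emit a raw deflate stream without the
    // zlib wrapper, which is what the gzip container expects.
    Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
    CRC32 crc = new CRC32();
    crc.update(input, 0, input.length);
    deflater.setInput(input);
    deflater.finish();

    ByteArrayOutputStream out = new ByteArrayOutputStream();
    out.write(HEADER, 0, HEADER.length);
    byte[] buf = new byte[8 * 1024];
    while (!deflater.finished()) {
      out.write(buf, 0, deflater.deflate(buf));
    }
    deflater.end();

    // Eight-byte trailer: CRC32 of the uncompressed data, then the
    // uncompressed size mod 2^32 (ISIZE), both little-endian.
    ByteBuffer trailer = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
    trailer.putInt((int) crc.getValue());
    trailer.putInt(input.length);
    out.write(trailer.array(), 0, 8);
    return out.toByteArray();
  }
}
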



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-06 Thread GitBox


sunchao commented on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-894439227


   @viirya there are a few style issues in 
[here](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/19/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 - could you fix them?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=635432=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635432
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 18:22
Start Date: 06/Aug/21 18:22
Worklog Time Spent: 10m 
  Work Description: sunchao commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r684424851



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+  /**
+   * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+   * details.
+   */
+  private static final byte[] GZIP_HEADER = new byte[]{
+  0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+  // The trailer will be overwritten based on crc and output size.
+  private static final byte[] GZIP_TRAILER = new byte[]{0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00};
+
+  private static final int GZIP_HEADER_LEN = GZIP_HEADER.length;
+  private static final int GZIP_TRAILER_LEN = GZIP_TRAILER.length;
+
+  private Deflater deflater;
+
+  private int headerOff = 0;
+  private int trailerOff = 0;
+
+  private int numExtraBytesWritten = 0;
+
+  private int currentBufLen = 0;
+
+  private final Checksum crc = DataChecksum.newCrc32();
+
+  private BuiltInGzipDecompressor.GzipStateLabel state;
+
+  public BuiltInGzipCompressor(Configuration conf) {
+    init(conf);
+  }
+
+  @Override
+  public boolean finished() {
+// Only once the trailer has also been written is the stream considered finished.
+return deflater.finished() && state == 
BuiltInGzipDecompressor.GzipStateLabel.FINISHED;
+  }
+
+  @Override
+  public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+  return deflater.needsInput();
+}
+
+return false;
+  }
+
+  @Override
+  public int compress(byte[] b, int off, int len) throws IOException {
+int compressedBytesWritten = 0;
+
+if (currentBufLen <= 0) {
+  return compressedBytesWritten;
+}
+
+// If we have not entered the deflate stream yet, write the gzip header first.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM &&
+state != BuiltInGzipDecompressor.GzipStateLabel.TRAILER_CRC) {
+  int outputHeaderSize = writeHeader(b, off, len);
+  numExtraBytesWritten += outputHeaderSize;
+
+  compressedBytesWritten += outputHeaderSize;
+
+  if (outputHeaderSize == len) {
+return compressedBytesWritten;
+  }
+
+  off += outputHeaderSize;
+  len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+  // now compress it into b[]
+  int deflated = deflater.deflate(b, off, len);
+
+  compressedBytesWritten += deflated;
+  off += deflated;
+  len -= deflated;
+
+  // All current input has been processed; move on to writing the trailer.
+  if (deflater.finished()) {
+state = BuiltInGzipDecompressor.GzipStateLabel.TRAILER_CRC;
+fillTrailer();
+  } else {
+return compressedBytesWritten;
+  }
+}
+
+int outputTrailerSize = writeTrailer(b, off, len);
+numExtraBytesWritten += outputTrailerSize;
+
+compressedBytesWritten += outputTrailerSize;
+
+return compressedBytesWritten;
+  }
+
+  @Override
+  public long getBytesRead() {
+return deflater.getTotalIn();
+  }
+
+  

[GitHub] [hadoop] sunchao commented on a change in pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-06 Thread GitBox


sunchao commented on a change in pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#discussion_r684424851



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java
##
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress.zlib;
+
+import java.io.IOException;
+import java.util.zip.Checksum;
+import java.util.zip.Deflater;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.DoNotPool;
+import org.apache.hadoop.util.DataChecksum;
+
+/**
+ * A {@link Compressor} based on the popular gzip compressed file format.
+ * http://www.gzip.org/
+ */
+@DoNotPool
+public class BuiltInGzipCompressor implements Compressor {
+
+  /**
+   * Fixed ten-byte gzip header. See {@link GZIPOutputStream}'s source for
+   * details.
+   */
+  private static final byte[] GZIP_HEADER = new byte[]{
+  0x1f, (byte) 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+  // The trailer will be overwritten based on crc and output size.
+  private static final byte[] GZIP_TRAILER = new byte[]{0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00};
+
+  private static final int GZIP_HEADER_LEN = GZIP_HEADER.length;
+  private static final int GZIP_TRAILER_LEN = GZIP_TRAILER.length;
+
+  private Deflater deflater;
+
+  private int headerOff = 0;
+  private int trailerOff = 0;
+
+  private int numExtraBytesWritten = 0;
+
+  private int currentBufLen = 0;
+
+  private final Checksum crc = DataChecksum.newCrc32();
+
+  private BuiltInGzipDecompressor.GzipStateLabel state;
+
+  public BuiltInGzipCompressor(Configuration conf) {
+    init(conf);
+  }
+
+  @Override
+  public boolean finished() {
+// Only once the trailer has also been written is the stream considered finished.
+return deflater.finished() && state == 
BuiltInGzipDecompressor.GzipStateLabel.FINISHED;
+  }
+
+  @Override
+  public boolean needsInput() {
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+  return deflater.needsInput();
+}
+
+return false;
+  }
+
+  @Override
+  public int compress(byte[] b, int off, int len) throws IOException {
+int compressedBytesWritten = 0;
+
+if (currentBufLen <= 0) {
+  return compressedBytesWritten;
+}
+
+// If we have not entered the deflate stream yet, write the gzip header first.
+if (state != BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM &&
+state != BuiltInGzipDecompressor.GzipStateLabel.TRAILER_CRC) {
+  int outputHeaderSize = writeHeader(b, off, len);
+  numExtraBytesWritten += outputHeaderSize;
+
+  compressedBytesWritten += outputHeaderSize;
+
+  if (outputHeaderSize == len) {
+return compressedBytesWritten;
+  }
+
+  off += outputHeaderSize;
+  len -= outputHeaderSize;
+}
+
+if (state == BuiltInGzipDecompressor.GzipStateLabel.INFLATE_STREAM) {
+  // now compress it into b[]
+  int deflated = deflater.deflate(b, off, len);
+
+  compressedBytesWritten += deflated;
+  off += deflated;
+  len -= deflated;
+
+  // All current input has been processed; move on to writing the trailer.
+  if (deflater.finished()) {
+state = BuiltInGzipDecompressor.GzipStateLabel.TRAILER_CRC;
+fillTrailer();
+  } else {
+return compressedBytesWritten;
+  }
+}
+
+int outputTrailerSize = writeTrailer(b, off, len);
+numExtraBytesWritten += outputTrailerSize;
+
+compressedBytesWritten += outputTrailerSize;
+
+return compressedBytesWritten;
+  }
+
+  @Override
+  public long getBytesRead() {
+return deflater.getTotalIn();
+  }
+
+  @Override
+  public long getBytesWritten() {
+return numExtraBytesWritten + deflater.getTotalOut();
+  }
+
+  @Override
+  public void end() {
+deflater.end();
+  }
+
+  @Override
+  public void finish() {
+deflater.finish();
+  }
+
+  private void init(Configuration conf) {
+ZlibCompressor.CompressionLevel level = 
ZlibFactory.getCompressionLevel(conf);
+ZlibCompressor.CompressionStrategy strategy = 
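
(The quoted diff is cut off here by the mail archive.) Since the review centers on whether the hand-rolled header, deflate stream, and trailer line up, a round-trip check is the quickest sanity test: output framed this way should be readable by the stock java.util.zip.GZIPInputStream. A hedged sketch, reusing the hypothetical GzipFramingSketch shown earlier in this digest (the main method and sample text are illustrative):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class GzipRoundTripSketch {
  public static void main(String[] args) throws IOException {
    byte[] original = "hello gzip".getBytes(StandardCharsets.UTF_8);
    byte[] compressed = GzipFramingSketch.gzip(original);

    // The JDK's reference reader verifies the trailer CRC32 and ISIZE
    // fields when it reaches end of stream.
    GZIPInputStream in =
        new GZIPInputStream(new ByteArrayInputStream(compressed));
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[1024];
    int n;
    while ((n = in.read(buf)) != -1) {
      out.write(buf, 0, n);
    }
    in.close();
    // Prints "hello gzip" if the framing is correct.
    System.out.println(new String(out.toByteArray(), StandardCharsets.UTF_8));
  }
}
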

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=635423=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635423
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 18:09
Start Date: 06/Aug/21 18:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-892989941






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635423)
Time Spent: 15h 10m  (was: 15h)

> Add BuiltInGzipCompressor
> -
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 15h 10m
>  Remaining Estimate: 0h
>
> Currently, if native zlib is not loaded, GzipCodec only supports 
> BuiltInGzipDecompressor. So, without the Hadoop native codec installed, saving 
> a SequenceFile using GzipCodec throws an exception like "SequenceFile doesn't 
> work with GzipCodec without native-hadoop code!"
> As with the other codecs we migrated to prepared packages (lz4, snappy), it 
> would be better to support GzipCodec generally without the Hadoop native codec 
> installed. Similar to BuiltInGzipDecompressor, we can use Java's Deflater to 
> implement a BuiltInGzipCompressor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17840) Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17840?focusedWorklogId=635422=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635422
 ]

ASF GitHub Bot logged work on HADOOP-17840:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 18:09
Start Date: 06/Aug/21 18:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3275:
URL: https://github.com/apache/hadoop/pull/3275#issuecomment-894430448


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  26m 39s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m  3s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   2m  7s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  14m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  14m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   2m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 52s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 114m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3275 |
   | JIRA Issue | HADOOP-17840 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 461dc989ff37 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / bfd7b946bffddad1bff4894f7c6e37a377f5ca24 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/2/testReport/ |
   | Max. process+thread count | 1388 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/2/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635422)
Time Spent: 40m  (was: 0.5h)

> Backport HADOOP-17837 to branch-3.2
> ---
>
> Key: HADOOP-17840
> URL: https://issues.apache.org/jira/browse/HADOOP-17840
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-06 Thread GitBox


hadoop-yetus removed a comment on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-892989941






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=635420=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635420
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 18:08
Start Date: 06/Aug/21 18:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-892363546


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  8s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/12/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 15 new + 332 
unchanged - 0 fixed = 347 total (was 332)  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 177m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3250 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 205b08ba081f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0a1bb194f29612863d8ad31971a737b30be4d982 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/12/testReport/ |
   | Max. process+thread count | 1267 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=635419=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635419
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 18:08
Start Date: 06/Aug/21 18:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-891453787


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 10s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/8/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 156 new + 332 
unchanged - 0 fixed = 488 total (was 332)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 39s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 178m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3250 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0f0b4b015c68 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 49619e3f1ebdc62c89bcb74fd5cbf75a80a0601c |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/8/testReport/ |
   | Max. process+thread count | 2952 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Commented] (HADOOP-17840) Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394933#comment-17394933
 ] 

Hadoop QA commented on HADOOP-17840:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} codespell {color} | {color:blue}  0m  
1s{color} |  | {color:blue} codespell was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 1 new or modified 
test files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
39s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
3s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  2m  
7s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
13s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
13s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  2m 
15s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
52s{color} |  | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} |  | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m  3s{color} | 
 | {color:black}{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/2/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/3275 |
| JIRA Issue | HADOOP-17840 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 461dc989ff37 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.2 / bfd7b946bffddad1bff4894f7c6e37a377f5ca24 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
|  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/2/testReport/ |
| Max. process+thread count | 1388 (vs. ulimit of 5500) |
| modules | C: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3275: HADOOP-17840: Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread GitBox


hadoop-yetus commented on pull request #3275:
URL: https://github.com/apache/hadoop/pull/3275#issuecomment-894430448


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  26m 39s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m  3s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   2m  7s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  14m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  14m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   2m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 52s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 114m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3275 |
   | JIRA Issue | HADOOP-17840 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 461dc989ff37 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / bfd7b946bffddad1bff4894f7c6e37a377f5ca24 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/2/testReport/ |
   | Max. process+thread count | 1388 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/2/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-06 Thread GitBox


hadoop-yetus removed a comment on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-892363546


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  8s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/12/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 15 new + 332 
unchanged - 0 fixed = 347 total (was 332)  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 177m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3250 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 205b08ba081f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0a1bb194f29612863d8ad31971a737b30be4d982 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/12/testReport/ |
   | Max. process+thread count | 1267 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/12/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-08-06 Thread GitBox


hadoop-yetus removed a comment on pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250#issuecomment-891453787


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 10s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/8/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 156 new + 332 
unchanged - 0 fixed = 488 total (was 332)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 39s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 178m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3250 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0f0b4b015c68 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 49619e3f1ebdc62c89bcb74fd5cbf75a80a0601c |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/8/testReport/ |
   | Max. process+thread count | 2952 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To 

[jira] [Work logged] (HADOOP-17840) Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17840?focusedWorklogId=635367=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635367
 ]

ASF GitHub Bot logged work on HADOOP-17840:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 16:31
Start Date: 06/Aug/21 16:31
Worklog Time Spent: 10m 
  Work Description: brahmareddybattula commented on pull request #3275:
URL: https://github.com/apache/hadoop/pull/3275#issuecomment-894376137


   LGTM.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635367)
Time Spent: 0.5h  (was: 20m)

> Backport HADOOP-17837 to branch-3.2
> ---
>
> Key: HADOOP-17840
> URL: https://issues.apache.org/jira/browse/HADOOP-17840
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brahmareddybattula commented on pull request #3275: HADOOP-17840: Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread GitBox


brahmareddybattula commented on pull request #3275:
URL: https://github.com/apache/hadoop/pull/3275#issuecomment-894376137


   LGTM.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17837) Make it easier to debug UnknownHostExceptions from NetUtils.connect

2021-08-06 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-17837.
---
Hadoop Flags: Reviewed
  Resolution: Fixed

[~bbeaudreault] thanks for raising the PR. Committed to trunk and branch-3.3. As 
this change only touches a test assertion, I ran it locally and pushed.

> Make it easier to debug UnknownHostExceptions from NetUtils.connect
> ---
>
> Key: HADOOP-17837
> URL: https://issues.apache.org/jira/browse/HADOOP-17837
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Most UnknownHostExceptions thrown throughout hadoop include a useful message, 
> either the hostname that was not found or some other descriptor of the 
> problem. The UnknownHostException thrown from NetUtils.connect only includes 
> the [message of the underlying 
> UnresolvedAddressException|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java#L592].
>  If you take a look at the source for UnresolvedAddressException, [it only 
> has a no-args 
> constructor|https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/channels/UnresolvedAddressException.html]
>  (java11, but same is true in other versions). So it never has a message, 
> meaning the UnknownHostException message is empty.
> We should include the endpoint.toString() in the UnknownHostException thrown 
> by NetUtils.connect



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?focusedWorklogId=635364=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635364
 ]

ASF GitHub Bot logged work on HADOOP-17370:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 16:25
Start Date: 06/Aug/21 16:25
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #3274:
URL: https://github.com/apache/hadoop/pull/3274#issuecomment-894372780


   Updated the license file. Since the license file is under the root directory, 
all the unit tests will be run in the precommit job.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635364)
Time Spent: 3h  (was: 2h 50m)

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17837) Make it easier to debug UnknownHostExceptions from NetUtils.connect

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17837?focusedWorklogId=635363=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635363
 ]

ASF GitHub Bot logged work on HADOOP-17837:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 16:24
Start Date: 06/Aug/21 16:24
Worklog Time Spent: 10m 
  Work Description: brahmareddybattula merged pull request #3276:
URL: https://github.com/apache/hadoop/pull/3276


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635363)
Time Spent: 1h 20m  (was: 1h 10m)

> Make it easier to debug UnknownHostExceptions from NetUtils.connect
> ---
>
> Key: HADOOP-17837
> URL: https://issues.apache.org/jira/browse/HADOOP-17837
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Most UnknownHostExceptions thrown throughout hadoop include a useful message, 
> either the hostname that was not found or some other descriptor of the 
> problem. The UnknownHostException thrown from NetUtils.connect only includes 
> the [message of the underlying 
> UnresolvedAddressException|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java#L592].
>  If you take a look at the source for UnresolvedAddressException, [it only 
> has a no-args 
> constructor|https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/channels/UnresolvedAddressException.html]
>  (java11, but same is true in other versions). So it never has a message, 
> meaning the UnknownHostException message is empty.
> We should include the endpoint.toString() in the UnknownHostException thrown 
> by NetUtils.connect



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #3274: HADOOP-17370. Upgrade commons-compress to 1.21

2021-08-06 Thread GitBox


aajisaka commented on pull request #3274:
URL: https://github.com/apache/hadoop/pull/3274#issuecomment-894372780


   Updated the license file. Since the license file is under the root directory, 
all the unit tests will be run in the precommit job.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17837) Make it easier to debug UnknownHostExceptions from NetUtils.connect

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17837?focusedWorklogId=635362=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635362
 ]

ASF GitHub Bot logged work on HADOOP-17837:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 16:24
Start Date: 06/Aug/21 16:24
Worklog Time Spent: 10m 
  Work Description: brahmareddybattula commented on pull request #3276:
URL: https://github.com/apache/hadoop/pull/3276#issuecomment-894372234


   +1


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635362)
Time Spent: 1h 10m  (was: 1h)

> Make it easier to debug UnknownHostExceptions from NetUtils.connect
> ---
>
> Key: HADOOP-17837
> URL: https://issues.apache.org/jira/browse/HADOOP-17837
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Most UnknownHostExceptions thrown throughout hadoop include a useful message, 
> either the hostname that was not found or some other descriptor of the 
> problem. The UnknownHostException thrown from NetUtils.connect only includes 
> the [message of the underlying 
> UnresolvedAddressException|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java#L592].
>  If you take a look at the source for UnresolvedAddressException, [it only 
> has a no-args 
> constructor|https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/channels/UnresolvedAddressException.html]
>  (java11, but same is true in other versions). So it never has a message, 
> meaning the UnknownHostException message is empty.
> We should include the endpoint.toString() in the UnknownHostException thrown 
> by NetUtils.connect



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brahmareddybattula merged pull request #3276: HADOOP-17837: Add unresolved endpoint value to UnknownHostException (ADDENDUM)

2021-08-06 Thread GitBox


brahmareddybattula merged pull request #3276:
URL: https://github.com/apache/hadoop/pull/3276


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brahmareddybattula commented on pull request #3276: HADOOP-17837: Add unresolved endpoint value to UnknownHostException (ADDENDUM)

2021-08-06 Thread GitBox


brahmareddybattula commented on pull request #3276:
URL: https://github.com/apache/hadoop/pull/3276#issuecomment-894372234


   +1


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17837) Make it easier to debug UnknownHostExceptions from NetUtils.connect

2021-08-06 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394870#comment-17394870
 ] 

Bryan Beaudreault commented on HADOOP-17837:


[~weichiu] per suggestion from Brahma, I filed an addendum PR to account for 
[~ste...@apache.org]'s comment on the merged PR. Would you mind merging when 
ready? This has also been applied to the backport PR in 
https://issues.apache.org/jira/browse/HADOOP-17840

> Make it easier to debug UnknownHostExceptions from NetUtils.connect
> ---
>
> Key: HADOOP-17837
> URL: https://issues.apache.org/jira/browse/HADOOP-17837
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Most UnknownHostExceptions thrown throughout hadoop include a useful message, 
> either the hostname that was not found or some other descriptor of the 
> problem. The UnknownHostException thrown from NetUtils.connect only includes 
> the [message of the underlying 
> UnresolvedAddressException|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java#L592].
>  If you take a look at the source for UnresolvedAddressException, [it only 
> has a no-args 
> constructor|https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/channels/UnresolvedAddressException.html]
>  (java11, but same is true in other versions). So it never has a message, 
> meaning the UnknownHostException message is empty.
> We should include the endpoint.toString() in the UnknownHostException thrown 
> by NetUtils.connect



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-17837) Make it easier to debug UnknownHostExceptions from NetUtils.connect

2021-08-06 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault reopened HADOOP-17837:


> Make it easier to debug UnknownHostExceptions from NetUtils.connect
> ---
>
> Key: HADOOP-17837
> URL: https://issues.apache.org/jira/browse/HADOOP-17837
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Most UnknownHostExceptions thrown throughout hadoop include a useful message, 
> either the hostname that was not found or some other descriptor of the 
> problem. The UnknownHostException thrown from NetUtils.connect only includes 
> the [message of the underlying 
> UnresolvedAddressException|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java#L592].
>  If you take a look at the source for UnresolvedAddressException, [it only 
> has a no-args 
> constructor|https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/channels/UnresolvedAddressException.html]
>  (java11, but same is true in other versions). So it never has a message, 
> meaning the UnknownHostException message is empty.
> We should include the endpoint.toString() in the UnknownHostException thrown 
> by NetUtils.connect



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17837) Make it easier to debug UnknownHostExceptions from NetUtils.connect

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17837?focusedWorklogId=635360=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635360
 ]

ASF GitHub Bot logged work on HADOOP-17837:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 16:19
Start Date: 06/Aug/21 16:19
Worklog Time Spent: 10m 
  Work Description: bbeaudreault opened a new pull request #3276:
URL: https://github.com/apache/hadoop/pull/3276


   Addresses a comment made in https://github.com/apache/hadoop/pull/3272.
   
   The other suggestion, about concatenating the UnresolvedAddressException 
message, is unnecessary: in all versions of Java it is impossible to attach a 
message to an UnresolvedAddressException. See 
[java11](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/channels/UnresolvedAddressException.html);
 there is only a no-args constructor and no `setMessage()` method.
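
   Not part of the PR itself, just a minimal standalone sketch of the pattern 
under discussion, assuming nothing about the real NetUtils internals beyond 
what this issue states (the class and method names below are hypothetical):

   ```java
   import java.net.InetSocketAddress;
   import java.net.UnknownHostException;
   import java.nio.channels.UnresolvedAddressException;

   // Hypothetical sketch: UnresolvedAddressException never carries a message,
   // so the wrapper must supply the endpoint itself.
   public final class ConnectErrorSketch {
     static void connect(InetSocketAddress endpoint) throws UnknownHostException {
       try {
         // ... the real code would attempt a SocketChannel connect here ...
         throw new UnresolvedAddressException(); // simulate an unresolved endpoint
       } catch (UnresolvedAddressException e) {
         // e.getMessage() is always null, so include endpoint.toString() instead.
         UnknownHostException uhe = new UnknownHostException(String.valueOf(endpoint));
         uhe.initCause(e);
         throw uhe;
       }
     }

     public static void main(String[] args) {
       try {
         connect(InetSocketAddress.createUnresolved("no-such-host.invalid", 8020));
       } catch (UnknownHostException e) {
         // Prints the endpoint, e.g. "no-such-host.invalid:8020" (exact form varies by JDK).
         System.out.println(e);
       }
     }
   }
   ```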


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635360)
Time Spent: 1h  (was: 50m)

> Make it easier to debug UnknownHostExceptions from NetUtils.connect
> ---
>
> Key: HADOOP-17837
> URL: https://issues.apache.org/jira/browse/HADOOP-17837
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Most UnknownHostExceptions thrown throughout hadoop include a useful message, 
> either the hostname that was not found or some other descriptor of the 
> problem. The UnknownHostException thrown from NetUtils.connect only includes 
> the [message of the underlying 
> UnresolvedAddressException|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java#L592].
>  If you take a look at the source for UnresolvedAddressException, [it only 
> has a no-args 
> constructor|https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/channels/UnresolvedAddressException.html]
>  (java11, but same is true in other versions). So it never has a message, 
> meaning the UnknownHostException message is empty.
> We should include the endpoint.toString() in the UnknownHostException thrown 
> by NetUtils.connect



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bbeaudreault opened a new pull request #3276: HADOOP-17837: Add unresolved endpoint value to UnknownHostException (ADDENDUM)

2021-08-06 Thread GitBox


bbeaudreault opened a new pull request #3276:
URL: https://github.com/apache/hadoop/pull/3276


   Addresses a comment made in https://github.com/apache/hadoop/pull/3272.
   
   The other suggestion, about concatenating the UnresolvedAddressException 
message, is unnecessary: in all versions of Java it is impossible to attach a 
message to an UnresolvedAddressException. See 
[java11](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/channels/UnresolvedAddressException.html);
 there is only a no-args constructor and no `setMessage()` method.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17840) Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread Bryan Beaudreault (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394866#comment-17394866
 ] 

Bryan Beaudreault commented on HADOOP-17840:


[~brahmareddy] ok I updated the PR here. Will take care of trunk shortly. 
Thanks for the guidance.

> Backport HADOOP-17837 to branch-3.2
> ---
>
> Key: HADOOP-17840
> URL: https://issues.apache.org/jira/browse/HADOOP-17840
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-06 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394851#comment-17394851
 ] 

Brahma Reddy Battula commented on HADOOP-17370:
---

[~aajisaka] thanks for addressing this issue. It looks like PR #2452 was closed 
with a note about many test case failures.. could you please confirm the 
same..? And has the license file also been updated?

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?focusedWorklogId=635347=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635347
 ]

ASF GitHub Bot logged work on HADOOP-17370:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 15:53
Start Date: 06/Aug/21 15:53
Worklog Time Spent: 10m 
  Work Description: brahmareddybattula commented on pull request #3274:
URL: https://github.com/apache/hadoop/pull/3274#issuecomment-894353228


   It looks like PR #2452 was closed with a note about many test case 
failures.. could you please confirm the same..? And has the license file also been updated?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635347)
Time Spent: 2h 50m  (was: 2h 40m)

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brahmareddybattula commented on pull request #3274: HADOOP-17370. Upgrade commons-compress to 1.21

2021-08-06 Thread GitBox


brahmareddybattula commented on pull request #3274:
URL: https://github.com/apache/hadoop/pull/3274#issuecomment-894353228


   It looks like PR #2452 was closed with a note about many test case 
failures.. could you please confirm the same..? And has the license file also been updated?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16752) ABFS: test failure testLastModifiedTime()

2021-08-06 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394839#comment-17394839
 ] 

Brahma Reddy Battula commented on HADOOP-16752:
---

Hi [~DanielZhou] and [~ste...@apache.org],

The target version is given as 3.2.2, which has already been released.. are you 
planning to retarget this to 3.2.3..? Please let me know, as I am planning to 
cut the 3.2.3 release.

> ABFS: test failure testLastModifiedTime()
> -
>
> Key: HADOOP-16752
> URL: https://issues.apache.org/jira/browse/HADOOP-16752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Da Zhou
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> java.lang.AssertionError: lastModifiedTime should be after minCreateStartTime
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemFileStatus.testLastModifiedTime(ITestAzureBlobFileSystemFileStatus.java:138)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17708) Fail to build hadoop-common from source on Fedora

2021-08-06 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394836#comment-17394836
 ] 

Brahma Reddy Battula commented on HADOOP-17708:
---

[~bioinfornatics] did you apply the patch as suggested by [~iwasakims] and run 
the build again..? For now I have removed the target version 3.2.2, which has already been released.

> Fail to build hadoop-common from source on Fedora
> -
>
> Key: HADOOP-17708
> URL: https://issues.apache.org/jira/browse/HADOOP-17708
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan mercier
>Priority: Major
>
> Dear all, I tried to build Hadoop from source on a vanilla Fedora 34
> {code:bash}
> dnf group install -y "Development Tools" \
>  && dnf install -y java-1.8.0-openjdk-devel fuse-devel snappy-java 
> snappy-devel jansson-devel protobuf zlib-devel libzstd-devel \
>maven-1:3.6.3 cmake gcc-c++ ant protobuf-compiler 
> protobuf-java slf4j 
> export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-0.fc34.x86_64/
> export MAVEN_OPTS="-Xms2048M -Xmx4096M"
> export 
> PATH="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-0.fc34.x86_64/bin/:$PATH"
> export CC=/usr/bin/gcc
> export CXX=/usr/bin/g++
> curl -LO 
> https://apache.mediamirrors.org/hadoop/common/hadoop-3.2.2/hadoop-3.2.2-src.tar.gz
> tar xf hadoop-3.2.2-src.tar.gz && cd hadoop-3.2.2-src
> mvn package -Pdist,native -Drequire.snappy=true  -DskipTests -Dtar
> {code}
> But I get this error:
> {code:java}
> at org.apache.hadoop.maven.plugin.cmakebuilder.CompileMojo.runMake (CompileMojo.java:229)
> at org.apache.hadoop.maven.plugin.cmakebuilder.CompileMojo.execute (CompileMojo.java:98)
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
> at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
> at 

[jira] [Updated] (HADOOP-17708) Fail to build hadoop-common from source on Fedora

2021-08-06 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-17708:
--
Target Version/s:   (was: 3.2.2)

> Fail to build hadoop-common from source on Fedora
> -
>
> Key: HADOOP-17708
> URL: https://issues.apache.org/jira/browse/HADOOP-17708
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan mercier
>Priority: Major
>
> Dear all, I tried to build Hadoop from source on a vanilla Fedora 34
> {code:bash}
> dnf group install -y "Development Tools" \
>  && dnf install -y java-1.8.0-openjdk-devel fuse-devel snappy-java 
> snappy-devel jansson-devel protobuf zlib-devel libzstd-devel \
>maven-1:3.6.3 cmake gcc-c++ ant protobuf-compiler 
> protobuf-java slf4j 
> export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-0.fc34.x86_64/
> export MAVEN_OPTS="-Xms2048M -Xmx4096M"
> export 
> PATH="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-0.fc34.x86_64/bin/:$PATH"
> export CC=/usr/bin/gcc
> export CXX=/usr/bin/g++
> curl -LO 
> https://apache.mediamirrors.org/hadoop/common/hadoop-3.2.2/hadoop-3.2.2-src.tar.gz
> tar xf hadoop-3.2.2-src.tar.gz && cd hadoop-3.2.2-src
> mvn package -Pdist,native -Drequire.snappy=true  -DskipTests -Dtar
> {code}
> But I get this error:
> {code:java}
> at org.apache.hadoop.maven.plugin.cmakebuilder.CompileMojo.runMake (CompileMojo.java:229)
> at org.apache.hadoop.maven.plugin.cmakebuilder.CompileMojo.execute (CompileMojo.java:98)
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
> at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)

[jira] [Commented] (HADOOP-17840) Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394822#comment-17394822
 ] 

Brahma Reddy Battula commented on HADOOP-17840:
---

[~bbeaudreault] for branch-3.2 you can update it here.. and for trunk, you can 
upload the testutil change to the same jira as an addendum patch.

> Backport HADOOP-17837 to branch-3.2
> ---
>
> Key: HADOOP-17840
> URL: https://issues.apache.org/jira/browse/HADOOP-17840
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17835) Use CuratorCache implementation instead of PathChildrenCache / TreeCache

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17835?focusedWorklogId=635163=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635163
 ]

ASF GitHub Bot logged work on HADOOP-17835:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 14:10
Start Date: 06/Aug/21 14:10
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #3266:
URL: https://github.com/apache/hadoop/pull/3266#issuecomment-894288381


   Thanks for the reviews. @aajisaka, we have @eolivelli's +1 for this PR after 
the latest revision.
   Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635163)
Time Spent: 3h  (was: 2h 50m)

> Use CuratorCache implementation instead of PathChildrenCache / TreeCache
> 
>
> Key: HADOOP-17835
> URL: https://issues.apache.org/jira/browse/HADOOP-17835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> As we have moved to Curator 5.2.0 for Hadoop 3.4.0, we should start using new 
> CuratorCache service implementation in place of deprecated PathChildrenCache 
> and TreeCache usecases.
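
For reference, a minimal sketch of the replacement pattern, assuming an 
already-started CuratorFramework client; this is illustrative only and not 
taken from the actual patch:

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.cache.CuratorCache;
import org.apache.curator.framework.recipes.cache.CuratorCacheListener;

// One CuratorCache watches a whole subtree, covering what PathChildrenCache
// and TreeCache were used for before they were deprecated in Curator 5.x.
public class CuratorCacheSketch {
  public static CuratorCache watch(CuratorFramework client, String path) {
    CuratorCache cache = CuratorCache.build(client, path);
    CuratorCacheListener listener = CuratorCacheListener.builder()
        .forCreates(node -> System.out.println("created: " + node.getPath()))
        .forChanges((oldNode, node) -> System.out.println("changed: " + node.getPath()))
        .forDeletes(node -> System.out.println("deleted: " + node.getPath()))
        .forInitialized(() -> System.out.println("cache initialized"))
        .build();
    cache.listenable().addListener(listener);
    cache.start(); // remember to close() the cache on shutdown
    return cache;
  }
}
{code}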



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on pull request #3266: HADOOP-17835. Use CuratorCache implementation instead of PathChildrenCache / TreeCache

2021-08-06 Thread GitBox


virajjasani commented on pull request #3266:
URL: https://github.com/apache/hadoop/pull/3266#issuecomment-894288381


   Thanks for the reviews. @aajisaka, we have @eolivelli's +1 for this PR after 
the latest revision.
   Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635159=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635159
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:53
Start Date: 06/Aug/21 13:53
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684253936



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/resources/log4j.properties
##
@@ -12,8 +12,10 @@
 
 # log4j configuration used during build and unit tests
 
-log4j.rootLogger=info,stdout
+log4j.rootLogger=debug,stdout
 log4j.threshold=ALL
 log4j.appender.stdout=org.apache.log4j.ConsoleAppender
 log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
 log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2} 
(%F:%M(%L)) - %m%n
+log4j.logger.io.netty=DEBUG
+log4j.logger.org.apache.hadoop.mapred=DEBUG

Review comment:
   Thanks, fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635159)
Time Spent: 4h 50m  (was: 4h 40m)

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szilard-nemeth commented on a change in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread GitBox


szilard-nemeth commented on a change in pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684253936



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/resources/log4j.properties
##
@@ -12,8 +12,10 @@
 
 # log4j configuration used during build and unit tests
 
-log4j.rootLogger=info,stdout
+log4j.rootLogger=debug,stdout
 log4j.threshold=ALL
 log4j.appender.stdout=org.apache.log4j.ConsoleAppender
 log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
 log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2} 
(%F:%M(%L)) - %m%n
+log4j.logger.io.netty=DEBUG
+log4j.logger.org.apache.hadoop.mapred=DEBUG

Review comment:
   Thanks, fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635158=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635158
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:52
Start Date: 06/Aug/21 13:52
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684253185



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -204,6 +828,34 @@ protected boolean isSocketKeepAlive() {
 }
   }
 
+  @Rule
+  public TestName name = new TestName();
+  
+  @Before
+  public void setup() {
+TEST_EXECUTION = new TestExecution(DEBUG_MODE, USE_PROXY);
+  }
+  
+  @After
+  public void tearDown() {
+int port = TEST_EXECUTION.shuffleHandlerPort();
+if (isPortUsed(port)) {
+  String msg = String.format("Port is being used: %d. " +
+  "Current testcase name: %s",
+  port, name.getMethodName());
+  throw new IllegalStateException(msg);
+}
+  }
+
+  private static boolean isPortUsed(int port) {
+try (Socket ignored = new Socket("localhost", port)) {
+  return true;
+} catch (IOException e) {
+  LOG.error("Port: {}, port check result: {}", port, e.getMessage());
+  return false;
+}
+  }
+

Review comment:
   Good point, added an explicit check for port=0.
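
A hedged sketch of what such a guard could look like (the version actually
committed may differ); PortProbe is a hypothetical holder class.

import java.io.IOException;
import java.net.Socket;

final class PortProbe {
  static boolean isPortUsed(int port) {
    // Port 0 means "bind to an ephemeral port", so there is nothing to probe yet.
    if (port == 0) {
      return false;
    }
    try (Socket ignored = new Socket("localhost", port)) {
      return true; // connect succeeded: something is still listening
    } catch (IOException e) {
      return false; // nothing accepting connections on that port
    }
  }
}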




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635158)
Time Spent: 4h 40m  (was: 4.5h)

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szilard-nemeth commented on a change in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread GitBox


szilard-nemeth commented on a change in pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684253185



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -204,6 +828,34 @@ protected boolean isSocketKeepAlive() {
 }
   }
 
+  @Rule
+  public TestName name = new TestName();
+  
+  @Before
+  public void setup() {
+TEST_EXECUTION = new TestExecution(DEBUG_MODE, USE_PROXY);
+  }
+  
+  @After
+  public void tearDown() {
+int port = TEST_EXECUTION.shuffleHandlerPort();
+if (isPortUsed(port)) {
+  String msg = String.format("Port is being used: %d. " +
+  "Current testcase name: %s",
+  port, name.getMethodName());
+  throw new IllegalStateException(msg);
+}
+  }
+
+  private static boolean isPortUsed(int port) {
+try (Socket ignored = new Socket("localhost", port)) {
+  return true;
+} catch (IOException e) {
+  LOG.error("Port: {}, port check result: {}", port, e.getMessage());
+  return false;
+}
+  }
+

Review comment:
   Good point, added an explicit check for port=0.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635157=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635157
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:44
Start Date: 06/Aug/21 13:44
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684246707



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -204,6 +828,34 @@ protected boolean isSocketKeepAlive() {
 }
   }
 
+  @Rule
+  public TestName name = new TestName();
+  
+  @Before
+  public void setup() {
+TEST_EXECUTION = new TestExecution(DEBUG_MODE, USE_PROXY);

Review comment:
   DEBUG_MODE is a boolean flag and it's set to false.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635157)
Time Spent: 4.5h  (was: 4h 20m)

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szilard-nemeth commented on a change in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread GitBox


szilard-nemeth commented on a change in pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684246707



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -204,6 +828,34 @@ protected boolean isSocketKeepAlive() {
 }
   }
 
+  @Rule
+  public TestName name = new TestName();
+  
+  @Before
+  public void setup() {
+TEST_EXECUTION = new TestExecution(DEBUG_MODE, USE_PROXY);

Review comment:
   DEBUG_MODE is a boolean flag and it's set to false.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635156=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635156
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:42
Start Date: 06/Aug/21 13:42
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684245712



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;
+}
+
+private void setHeaderSize(long headerSize) {
+  this.headerSize = headerSize;
+  long contentLengthOfAllHeaders = actualHeaderWriteCount * headerSize;
+  this.contentLengthOfResponse = 
computeContentLengthOfResponse(contentLengthOfAllHeaders);
+  LOG.debug("Content-length of all headers: {}", 
contentLengthOfAllHeaders);
+  LOG.debug("Content-length of one MapOutput: {}", 
contentLengthOfOneMapOutput);
+  LOG.debug("Content-length of final HTTP response: {}", 
contentLengthOfResponse);
+}
+
+private long computeContentLengthOfResponse(long 
contentLengthOfAllHeaders) {
+  int mapOutputCountMultiplier = mapOutputCount;
+  if (mapOutputCount == 0) {
+mapOutputCountMultiplier = 1;
+  }
+  return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * 
mapOutputCountMultiplier;
+}
+  }
+  
+  private enum ShuffleUrlType {
+SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, 
WITH_KEEPALIVE_NO_MAP_IDS
+  }
+
+  private 
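
For reference, the ResponseConfig arithmetic quoted above works out as in the
following sketch; the numbers are illustrative, not taken from the tests.

// Illustrative values only: headerWriteCount=10, headerSize=100 bytes,
// mapOutputCount=2, contentLengthOfOneMapOutput=1000 bytes.
public class ResponseLengthExample {
  public static void main(String[] args) {
    long actualHeaderWriteCount = 10 + 1;                           // header sent N + 1 times
    long contentLengthOfAllHeaders = actualHeaderWriteCount * 100;  // 1100
    long contentLengthOfResponse = (contentLengthOfAllHeaders + 1000) * 2;
    System.out.println(contentLengthOfResponse);                    // prints 4200
  }
}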

[GitHub] [hadoop] szilard-nemeth commented on a change in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread GitBox


szilard-nemeth commented on a change in pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684245712



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;
+}
+
+private void setHeaderSize(long headerSize) {
+  this.headerSize = headerSize;
+  long contentLengthOfAllHeaders = actualHeaderWriteCount * headerSize;
+  this.contentLengthOfResponse = 
computeContentLengthOfResponse(contentLengthOfAllHeaders);
+  LOG.debug("Content-length of all headers: {}", 
contentLengthOfAllHeaders);
+  LOG.debug("Content-length of one MapOutput: {}", 
contentLengthOfOneMapOutput);
+  LOG.debug("Content-length of final HTTP response: {}", 
contentLengthOfResponse);
+}
+
+private long computeContentLengthOfResponse(long 
contentLengthOfAllHeaders) {
+  int mapOutputCountMultiplier = mapOutputCount;
+  if (mapOutputCount == 0) {
+mapOutputCountMultiplier = 1;
+  }
+  return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * 
mapOutputCountMultiplier;
+}
+  }
+  
+  private enum ShuffleUrlType {
+SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, 
WITH_KEEPALIVE_NO_MAP_IDS
+  }
+
+  private static class InputStreamReadResult {
+final String asString;
+int totalBytesRead;
+
+public InputStreamReadResult(byte[] bytes, int totalBytesRead) {
+  this.asString = new String(bytes, StandardCharsets.UTF_8);
+  this.totalBytesRead = totalBytesRead;
+}
+  }
+
+  private static abstract class AdditionalMapOutputSenderOperations {
+public abstract ChannelFuture perform(ChannelHandlerContext ctx, 

[jira] [Work logged] (HADOOP-17840) Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17840?focusedWorklogId=635154=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635154
 ]

ASF GitHub Bot logged work on HADOOP-17840:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:38
Start Date: 06/Aug/21 13:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3275:
URL: https://github.com/apache/hadoop/pull/3275#issuecomment-894266723


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   8m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  25m 57s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m  8s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   2m  7s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  14m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  14m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   2m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 45s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 120m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3275 |
   | JIRA Issue | HADOOP-17840 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 94246bdcc202 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 71b07f76a9b2d33fefe01706e3fe47eb47bbb18b |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/1/testReport/ |
   | Max. process+thread count | 1391 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635154)
Time Spent: 20m  (was: 10m)

> Backport HADOOP-17837 to branch-3.2
> ---
>
> Key: HADOOP-17840
> URL: https://issues.apache.org/jira/browse/HADOOP-17840
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>   

[GitHub] [hadoop] hadoop-yetus commented on pull request #3275: HADOOP-17840: Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread GitBox


hadoop-yetus commented on pull request #3275:
URL: https://github.com/apache/hadoop/pull/3275#issuecomment-894266723


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   8m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  25m 57s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m  8s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   2m  7s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  14m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  14m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   2m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 45s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 120m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3275 |
   | JIRA Issue | HADOOP-17840 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 94246bdcc202 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 71b07f76a9b2d33fefe01706e3fe47eb47bbb18b |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/1/testReport/ |
   | Max. process+thread count | 1391 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17840) Backport HADOOP-17837 to branch-3.2

2021-08-06 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394770#comment-17394770
 ] 

Hadoop QA commented on HADOOP-17840:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
18s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} codespell {color} | {color:blue}  0m  
0s{color} |  | {color:blue} codespell was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 1 new or modified 
test files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
57s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
8s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  2m  
7s{color} |  | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 18s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
17s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
17s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  2m 
15s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 40s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
45s{color} |  | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} |  | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 41s{color} | 
 | {color:black}{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/3275 |
| JIRA Issue | HADOOP-17840 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 94246bdcc202 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.2 / 71b07f76a9b2d33fefe01706e3fe47eb47bbb18b |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
|  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3275/1/testReport/ |
| Max. process+thread count | 1391 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635149=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635149
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:30
Start Date: 06/Aug/21 13:30
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684236781



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;
+}
+
+private void setHeaderSize(long headerSize) {
+  this.headerSize = headerSize;
+  long contentLengthOfAllHeaders = actualHeaderWriteCount * headerSize;
+  this.contentLengthOfResponse = 
computeContentLengthOfResponse(contentLengthOfAllHeaders);
+  LOG.debug("Content-length of all headers: {}", 
contentLengthOfAllHeaders);
+  LOG.debug("Content-length of one MapOutput: {}", 
contentLengthOfOneMapOutput);
+  LOG.debug("Content-length of final HTTP response: {}", 
contentLengthOfResponse);
+}
+
+private long computeContentLengthOfResponse(long 
contentLengthOfAllHeaders) {
+  int mapOutputCountMultiplier = mapOutputCount;
+  if (mapOutputCount == 0) {
+mapOutputCountMultiplier = 1;
+  }
+  return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * 
mapOutputCountMultiplier;
+}
+  }
+  
+  private enum ShuffleUrlType {
+SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, 
WITH_KEEPALIVE_NO_MAP_IDS
+  }
+
+  private 

[GitHub] [hadoop] szilard-nemeth commented on a change in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread GitBox


szilard-nemeth commented on a change in pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684236781



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;
+}
+
+private void setHeaderSize(long headerSize) {
+  this.headerSize = headerSize;
+  long contentLengthOfAllHeaders = actualHeaderWriteCount * headerSize;
+  this.contentLengthOfResponse = 
computeContentLengthOfResponse(contentLengthOfAllHeaders);
+  LOG.debug("Content-length of all headers: {}", 
contentLengthOfAllHeaders);
+  LOG.debug("Content-length of one MapOutput: {}", 
contentLengthOfOneMapOutput);
+  LOG.debug("Content-length of final HTTP response: {}", 
contentLengthOfResponse);
+}
+
+private long computeContentLengthOfResponse(long 
contentLengthOfAllHeaders) {
+  int mapOutputCountMultiplier = mapOutputCount;
+  if (mapOutputCount == 0) {
+mapOutputCountMultiplier = 1;
+  }
+  return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * 
mapOutputCountMultiplier;
+}
+  }
+  
+  private enum ShuffleUrlType {
+SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, 
WITH_KEEPALIVE_NO_MAP_IDS
+  }
+
+  private static class InputStreamReadResult {
+final String asString;
+int totalBytesRead;
+
+public InputStreamReadResult(byte[] bytes, int totalBytesRead) {
+  this.asString = new String(bytes, StandardCharsets.UTF_8);
+  this.totalBytesRead = totalBytesRead;
+}
+  }
+
+  private static abstract class AdditionalMapOutputSenderOperations {
+public abstract ChannelFuture perform(ChannelHandlerContext ctx, 

[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635148=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635148
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:29
Start Date: 06/Aug/21 13:29
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684236151



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;
+}
+
+private void setHeaderSize(long headerSize) {
+  this.headerSize = headerSize;
+  long contentLengthOfAllHeaders = actualHeaderWriteCount * headerSize;
+  this.contentLengthOfResponse = 
computeContentLengthOfResponse(contentLengthOfAllHeaders);
+  LOG.debug("Content-length of all headers: {}", 
contentLengthOfAllHeaders);
+  LOG.debug("Content-length of one MapOutput: {}", 
contentLengthOfOneMapOutput);
+  LOG.debug("Content-length of final HTTP response: {}", 
contentLengthOfResponse);
+}
+
+private long computeContentLengthOfResponse(long 
contentLengthOfAllHeaders) {
+  int mapOutputCountMultiplier = mapOutputCount;
+  if (mapOutputCount == 0) {
+mapOutputCountMultiplier = 1;
+  }
+  return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * 
mapOutputCountMultiplier;
+}
+  }
+  
+  private enum ShuffleUrlType {
+SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, 
WITH_KEEPALIVE_NO_MAP_IDS
+  }
+
+  private 

[GitHub] [hadoop] szilard-nemeth commented on a change in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread GitBox


szilard-nemeth commented on a change in pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684236151



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;
+}
+
+private void setHeaderSize(long headerSize) {
+  this.headerSize = headerSize;
+  long contentLengthOfAllHeaders = actualHeaderWriteCount * headerSize;
+  this.contentLengthOfResponse = 
computeContentLengthOfResponse(contentLengthOfAllHeaders);
+  LOG.debug("Content-length of all headers: {}", 
contentLengthOfAllHeaders);
+  LOG.debug("Content-length of one MapOutput: {}", 
contentLengthOfOneMapOutput);
+  LOG.debug("Content-length of final HTTP response: {}", 
contentLengthOfResponse);
+}
+
+private long computeContentLengthOfResponse(long 
contentLengthOfAllHeaders) {
+  int mapOutputCountMultiplier = mapOutputCount;
+  if (mapOutputCount == 0) {
+mapOutputCountMultiplier = 1;
+  }
+  return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * 
mapOutputCountMultiplier;
+}
+  }
+  
+  private enum ShuffleUrlType {
+SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, 
WITH_KEEPALIVE_NO_MAP_IDS
+  }
+
+  private static class InputStreamReadResult {
+final String asString;
+int totalBytesRead;
+
+public InputStreamReadResult(byte[] bytes, int totalBytesRead) {
+  this.asString = new String(bytes, StandardCharsets.UTF_8);
+  this.totalBytesRead = totalBytesRead;
+}
+  }
+
+  private static abstract class AdditionalMapOutputSenderOperations {
+public abstract ChannelFuture perform(ChannelHandlerContext ctx, 

[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635142=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635142
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:18
Start Date: 06/Aug/21 13:18
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684227941



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;
+}
+
+private void setHeaderSize(long headerSize) {
+  this.headerSize = headerSize;
+  long contentLengthOfAllHeaders = actualHeaderWriteCount * headerSize;
+  this.contentLengthOfResponse = 
computeContentLengthOfResponse(contentLengthOfAllHeaders);
+  LOG.debug("Content-length of all headers: {}", 
contentLengthOfAllHeaders);
+  LOG.debug("Content-length of one MapOutput: {}", 
contentLengthOfOneMapOutput);
+  LOG.debug("Content-length of final HTTP response: {}", 
contentLengthOfResponse);
+}
+
+private long computeContentLengthOfResponse(long 
contentLengthOfAllHeaders) {
+  int mapOutputCountMultiplier = mapOutputCount;
+  if (mapOutputCount == 0) {
+mapOutputCountMultiplier = 1;
+  }
+  return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * 
mapOutputCountMultiplier;
+}
+  }
+  
+  private enum ShuffleUrlType {
+SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, 
WITH_KEEPALIVE_NO_MAP_IDS
+  }
+
+  private 

[GitHub] [hadoop] szilard-nemeth commented on a change in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread GitBox


szilard-nemeth commented on a change in pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684227941



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;
+}
+
+private void setHeaderSize(long headerSize) {
+  this.headerSize = headerSize;
+  long contentLengthOfAllHeaders = actualHeaderWriteCount * headerSize;
+  this.contentLengthOfResponse = 
computeContentLengthOfResponse(contentLengthOfAllHeaders);
+  LOG.debug("Content-length of all headers: {}", 
contentLengthOfAllHeaders);
+  LOG.debug("Content-length of one MapOutput: {}", 
contentLengthOfOneMapOutput);
+  LOG.debug("Content-length of final HTTP response: {}", 
contentLengthOfResponse);
+}
+
+private long computeContentLengthOfResponse(long 
contentLengthOfAllHeaders) {
+  int mapOutputCountMultiplier = mapOutputCount;
+  if (mapOutputCount == 0) {
+mapOutputCountMultiplier = 1;
+  }
+  return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * 
mapOutputCountMultiplier;
+}
+  }
+  
+  private enum ShuffleUrlType {
+SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, 
WITH_KEEPALIVE_NO_MAP_IDS
+  }
+
+  private static class InputStreamReadResult {
+final String asString;
+int totalBytesRead;
+
+public InputStreamReadResult(byte[] bytes, int totalBytesRead) {
+  this.asString = new String(bytes, StandardCharsets.UTF_8);
+  this.totalBytesRead = totalBytesRead;
+}
+  }
+
+  private static abstract class AdditionalMapOutputSenderOperations {
+public abstract ChannelFuture perform(ChannelHandlerContext ctx, 

[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635134&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635134
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:02
Start Date: 06/Aug/21 13:02
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684217183



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;
+}
+
+private void setHeaderSize(long headerSize) {
+  this.headerSize = headerSize;
+  long contentLengthOfAllHeaders = actualHeaderWriteCount * headerSize;
+  this.contentLengthOfResponse = 
computeContentLengthOfResponse(contentLengthOfAllHeaders);
+  LOG.debug("Content-length of all headers: {}", 
contentLengthOfAllHeaders);
+  LOG.debug("Content-length of one MapOutput: {}", 
contentLengthOfOneMapOutput);
+  LOG.debug("Content-length of final HTTP response: {}", 
contentLengthOfResponse);
+}
+
+private long computeContentLengthOfResponse(long 
contentLengthOfAllHeaders) {
+  int mapOutputCountMultiplier = mapOutputCount;
+  if (mapOutputCount == 0) {
+mapOutputCountMultiplier = 1;
+  }
+  return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * 
mapOutputCountMultiplier;
+}
+  }
+  
+  private enum ShuffleUrlType {
+SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, 
WITH_KEEPALIVE_NO_MAP_IDS
+  }
+
+  private 

[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635135&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635135
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:02
Start Date: 06/Aug/21 13:02
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684217440



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;
+}
+
+private void setHeaderSize(long headerSize) {
+  this.headerSize = headerSize;
+  long contentLengthOfAllHeaders = actualHeaderWriteCount * headerSize;
+  this.contentLengthOfResponse = 
computeContentLengthOfResponse(contentLengthOfAllHeaders);
+  LOG.debug("Content-length of all headers: {}", 
contentLengthOfAllHeaders);
+  LOG.debug("Content-length of one MapOutput: {}", 
contentLengthOfOneMapOutput);
+  LOG.debug("Content-length of final HTTP response: {}", 
contentLengthOfResponse);
+}
+
+private long computeContentLengthOfResponse(long 
contentLengthOfAllHeaders) {
+  int mapOutputCountMultiplier = mapOutputCount;
+  if (mapOutputCount == 0) {
+mapOutputCountMultiplier = 1;
+  }
+  return (contentLengthOfAllHeaders + contentLengthOfOneMapOutput) * 
mapOutputCountMultiplier;
+}
+  }
+  
+  private enum ShuffleUrlType {
+SIMPLE, WITH_KEEPALIVE, WITH_KEEPALIVE_MULTIPLE_MAP_IDS, 
WITH_KEEPALIVE_NO_MAP_IDS
+  }
+
+  private 

[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635133
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 13:02
Start Date: 06/Aug/21 13:02
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684216985



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);
+  }
+}
+  }
+  
+  private static class ResponseConfig {
+private static final int ONE_HEADER_DISPLACEMENT = 1;
+
+private final int headerWriteCount;
+private final long actualHeaderWriteCount;
+private final int mapOutputCount;
+private final int contentLengthOfOneMapOutput;
+private long headerSize;
+public long contentLengthOfResponse;
+
+public ResponseConfig(int headerWriteCount, int mapOutputCount, int 
contentLengthOfOneMapOutput) {
+  if (mapOutputCount <= 0 && contentLengthOfOneMapOutput > 0) {
+throw new IllegalStateException("mapOutputCount should be at least 1");
+  }
+  this.headerWriteCount = headerWriteCount;
+  this.mapOutputCount = mapOutputCount;
+  this.contentLengthOfOneMapOutput = contentLengthOfOneMapOutput;
+  //MapOutputSender#send will send header N + 1 times
+  //So, (N + 1) * headerSize should be the Content-length header + the 
expected Content-length as well
+  this.actualHeaderWriteCount = headerWriteCount + ONE_HEADER_DISPLACEMENT;

Review comment:
   Modified the code to send the header only N times.
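
A minimal, self-contained Java sketch of the content-length arithmetic in the ResponseConfig quoted above may help readers follow the thread. All numbers are illustrative assumptions rather than values from the actual test, and the sketch shows both the quoted N + 1 behaviour and the N-times behaviour after the fix mentioned in the comment:

public class ContentLengthSketch {
  public static void main(String[] args) {
    int headerWriteCount = 10;     // N: how many times a header is written (illustrative)
    long headerSize = 30;          // serialized size of one ShuffleHeader (assumed)
    int mapOutputCount = 2;        // number of map outputs in the response (illustrative)
    long oneMapOutputLength = 100; // payload bytes per map output (assumed)

    // As originally quoted: the sender wrote the header N + 1 times.
    long allHeadersQuoted = (headerWriteCount + 1) * headerSize; // 11 * 30 = 330

    // After the fix above: the header is written exactly N times.
    long allHeadersFixed = (long) headerWriteCount * headerSize; // 10 * 30 = 300

    // Mirrors computeContentLengthOfResponse from the quoted diff.
    int multiplier = (mapOutputCount == 0) ? 1 : mapOutputCount;
    long expected = (allHeadersFixed + oneMapOutputLength) * multiplier; // (300 + 100) * 2 = 800
    System.out.println("Expected Content-Length: " + expected);
  }
}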

##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int 

[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635131&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635131
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 12:50
Start Date: 06/Aug/21 12:50
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684209421



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;
+private static final int DEBUG_FRIENDLY_KEEP_ALIVE = 1000;
+private static final int DEFAULT_PORT = 0; //random port
+private static final int FIXED_PORT = 8088;
+private static final String PROXY_HOST = "127.0.0.1";
+private static final int PROXY_PORT = ;
+private final boolean debugMode;
+private final boolean useProxy;
+
+public TestExecution(boolean debugMode, boolean useProxy) {
+  this.debugMode = debugMode;
+  this.useProxy = useProxy;
+}
+
+int getKeepAliveTimeout() {
+  if (debugMode) {
+return DEBUG_FRIENDLY_KEEP_ALIVE;
+  }
+  return DEFAULT_KEEP_ALIVE_TIMEOUT;
+}
+
+HttpURLConnection openConnection(URL url) throws IOException {
+  HttpURLConnection conn;
+  if (useProxy) {
+Proxy proxy
+= new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_HOST, 
PROXY_PORT));
+conn = (HttpURLConnection) url.openConnection(proxy);
+  } else {
+conn = (HttpURLConnection) url.openConnection();
+  }
+  return conn;
+}
+
+int shuffleHandlerPort() {
+  if (debugMode) {
+return FIXED_PORT;
+  } else {
+return DEFAULT_PORT;
+  }
+}
+
+void parameterizeConnection(URLConnection conn) {
+  if (DEBUG_MODE) {
+conn.setReadTimeout(100);
+conn.setConnectTimeout(100);

Review comment:
   Good point, fixed.
   You got it correctly.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635131)
Time Spent: 3h 10m  (was: 3h)

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
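
On the timeout thread closed above: the quoted parameterizeConnection set 100 ms read and connect timeouts in debug mode, and a 100 ms timeout will usually expire while a breakpoint is held. A hedged sketch of the general pattern follows; the constant names and millisecond values are illustrative assumptions, not the values committed in the fix:

import java.net.URLConnection;

final class ConnectionTuning {
  // Illustrative values only; the committed fix is not quoted in this thread.
  private static final int DEBUG_TIMEOUT_MS = 1_000_000; // effectively unlimited while stepping in a debugger
  private static final int DEFAULT_TIMEOUT_MS = 5_000;   // fail fast on CI

  static void parameterizeConnection(URLConnection conn, boolean debugMode) {
    int timeout = debugMode ? DEBUG_TIMEOUT_MS : DEFAULT_TIMEOUT_MS;
    conn.setReadTimeout(timeout);
    conn.setConnectTimeout(timeout);
  }
}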



[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635130&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635130
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 12:48
Start Date: 06/Aug/21 12:48
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684208056



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
##
@@ -106,10 +129,584 @@
   LoggerFactory.getLogger(TestShuffleHandler.class);
   private static final File ABS_LOG_DIR = GenericTestUtils.getTestDir(
   TestShuffleHandler.class.getSimpleName() + "LocDir");
+  private static final long ATTEMPT_ID = 12345L;
+  private static final long ATTEMPT_ID_2 = 12346L;
+  
+
+  //Control test execution properties with these flags
+  private static final boolean DEBUG_MODE = false;
+  //WARNING: If this is set to true and proxy server is not running, tests 
will fail!
+  private static final boolean USE_PROXY = false;
+  private static final int HEADER_WRITE_COUNT = 10;
+  private static TestExecution TEST_EXECUTION;
+
+  private static class TestExecution {
+private static final int DEFAULT_KEEP_ALIVE_TIMEOUT = -100;

Review comment:
   Good point, fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635130)
Time Spent: 3h  (was: 2h 50m)

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
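
For context on the keep-alive sentinel discussed above: the shuffle handler's keep-alive behaviour is driven by configuration, and a negative value such as -100 only makes sense as a test-side placeholder, never as a wire value. A minimal sketch follows; the configuration key names are quoted from memory and should be verified against the constants in ShuffleHandler before use:

import org.apache.hadoop.conf.Configuration;

final class KeepAliveConfigSketch {
  // Key names are assumptions; verify against ShuffleHandler's constants.
  static Configuration keepAliveConf(int timeoutSeconds) {
    Configuration conf = new Configuration();
    conf.setBoolean("mapreduce.shuffle.connection-keep-alive.enable", true);
    // Tests should pass a positive timeout; a sentinel like -100 is not a usable value.
    conf.setInt("mapreduce.shuffle.connection-keep-alive.timeout", timeoutSeconds);
    return conf;
  }
}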



[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635124&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635124
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 12:42
Start Date: 06/Aug/21 12:42
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684204215



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
##
@@ -1322,18 +1441,16 @@ protected void sendError(ChannelHandlerContext ctx, 
String msg,
   for (Map.Entry header : headers.entrySet()) {
 response.headers().set(header.getKey(), header.getValue());
   }
-  response.setContent(
-  ChannelBuffers.copiedBuffer(msg, CharsetUtil.UTF_8));
 
   // Close the connection as soon as the error message is sent.
-  
ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
+  writeToChannelAndClose(ctx.channel(), response);
 }
 
 @Override
-public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e)
+public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause)
 throws Exception {
-  Channel ch = e.getChannel();
-  Throwable cause = e.getCause();
+  LOG.debug("Executing exceptionCaught");

Review comment:
   Fixed, thanks.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635124)
Time Spent: 2h 50m  (was: 2h 40m)

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
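
The diff above captures one of the mechanical Netty 3 to Netty 4 changes: exceptionCaught now receives the Throwable directly instead of an ExceptionEvent wrapper, and the channel is obtained from the context. A minimal Netty 4 handler in that shape is sketched below; the logging and the close-on-error policy are illustrative, not the ShuffleHandler's exact behaviour:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ExceptionLoggingHandler extends ChannelInboundHandlerAdapter {
  @Override
  public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    // Netty 3 equivalent was: Channel ch = e.getChannel(); Throwable cause = e.getCause();
    System.err.println("Exception on channel " + ctx.channel() + ": " + cause);
    ctx.close(); // close the connection once the error has been reported
  }
}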



[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635123&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635123
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 12:41
Start Date: 06/Aug/21 12:41
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684203696



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
##
@@ -920,31 +1002,50 @@ public void channelOpen(ChannelHandlerContext ctx, 
ChannelStateEvent evt)
 // fetch failure.
 headers.put(RETRY_AFTER_HEADER, String.valueOf(FETCH_RETRY_DELAY));
 sendError(ctx, "", TOO_MANY_REQ_STATUS, headers);
-return;
+  } else {
+super.channelActive(ctx);
+accepted.add(ctx.channel());
+LOG.debug("Added channel: {}. Accepted number of connections={}",
+ctx.channel(), acceptedConnections.get());
   }
-  accepted.add(evt.getChannel());
 }
 
 @Override
-public void messageReceived(ChannelHandlerContext ctx, MessageEvent evt)
+public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+  LOG.trace("Executing channelInactive");
+  super.channelInactive(ctx);
+  acceptedConnections.decrementAndGet();
+  LOG.debug("New value of Accepted number of connections={}",
+  acceptedConnections.get());
+}
+
+@Override
+public void channelRead(ChannelHandlerContext ctx, Object msg)
 throws Exception {
-  HttpRequest request = (HttpRequest) evt.getMessage();
-  if (request.getMethod() != GET) {
+  LOG.trace("Executing channelRead");
+  HttpRequest request = (HttpRequest) msg;
+  LOG.debug("Received HTTP request: {}", request);
+  if (request.method() != GET) {
   sendError(ctx, METHOD_NOT_ALLOWED);
   return;
   }
   // Check whether the shuffle version is compatible
+  String shuffleVersion = ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION;
+  if (request.headers() != null) {
+shuffleVersion = request.headers()
+.get(ShuffleHeader.HTTP_HEADER_VERSION);
+  }
+  LOG.debug("Shuffle version: {}", shuffleVersion);
   if (!ShuffleHeader.DEFAULT_HTTP_HEADER_NAME.equals(
   request.headers() != null ?
   request.headers().get(ShuffleHeader.HTTP_HEADER_NAME) : null)
   || !ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION.equals(
   request.headers() != null ?
-  request.headers()
-  .get(ShuffleHeader.HTTP_HEADER_VERSION) : null)) {
+  shuffleVersion : null)) {
 sendError(ctx, "Incompatible shuffle request version", BAD_REQUEST);
   }

Review comment:
   Simplified.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635123)
Time Spent: 2h 40m  (was: 2.5h)

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
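
The thread above ends with "Simplified." without quoting the committed shape, but the intent is visible in the diff: read the headers once, then do a null-safe comparison of the shuffle header name and version. One possible shape of that simplification is sketched below, with illustrative class and method names; the ShuffleHeader constants are the ones already quoted in the diff. Note that if the handler is meant to stop on a mismatch, the caller should also return after sendError, which the quoted hunk does not show.

import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpRequest;
import org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader;

final class ShuffleVersionCheck {
  static boolean isCompatible(HttpRequest request) {
    HttpHeaders headers = request.headers();
    String name = headers != null ? headers.get(ShuffleHeader.HTTP_HEADER_NAME) : null;
    String version = headers != null ? headers.get(ShuffleHeader.HTTP_HEADER_VERSION) : null;
    return ShuffleHeader.DEFAULT_HTTP_HEADER_NAME.equals(name)
        && ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION.equals(version);
  }
}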



[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635111&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635111
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 12:29
Start Date: 06/Aug/21 12:29
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684196186



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
##########
@@ -920,31 +1002,50 @@ public void channelOpen(ChannelHandlerContext ctx, 
ChannelStateEvent evt)
 // fetch failure.
 headers.put(RETRY_AFTER_HEADER, String.valueOf(FETCH_RETRY_DELAY));
 sendError(ctx, "", TOO_MANY_REQ_STATUS, headers);
-return;
+  } else {
+super.channelActive(ctx);
+accepted.add(ctx.channel());
+LOG.debug("Added channel: {}. Accepted number of connections={}",
+ctx.channel(), acceptedConnections.get());
   }
-  accepted.add(evt.getChannel());
 }
 
 @Override
-public void messageReceived(ChannelHandlerContext ctx, MessageEvent evt)
+public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+  LOG.trace("Executing channelInactive");
+  super.channelInactive(ctx);
+  acceptedConnections.decrementAndGet();
+  LOG.debug("New value of Accepted number of connections={}",
+  acceptedConnections.get());
+}
+
+@Override
+public void channelRead(ChannelHandlerContext ctx, Object msg)
 throws Exception {
-  HttpRequest request = (HttpRequest) evt.getMessage();
-  if (request.getMethod() != GET) {
+  LOG.trace("Executing channelRead");
+  HttpRequest request = (HttpRequest) msg;
+  LOG.debug("Received HTTP request: {}", request);
+  if (request.method() != GET) {
   sendError(ctx, METHOD_NOT_ALLOWED);
   return;
   }
   // Check whether the shuffle version is compatible
+  String shuffleVersion = ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION;
+  if (request.headers() != null) {
+shuffleVersion = request.headers()
+.get(ShuffleHeader.HTTP_HEADER_VERSION);
+  }
+  LOG.debug("Shuffle version: {}", shuffleVersion);

Review comment:
   Thanks, makes sense. Fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635111)
Time Spent: 2.5h  (was: 2h 20m)

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
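
One subtlety in the hunk discussed above: shuffleVersion is pre-seeded
with ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION, but Netty's
HttpHeaders.get() returns null for an absent header, so the default only
survives when headers() itself is null. If the intent were "default
whenever the header is missing", the lookup would need its own fallback.
A sketch of that variant, assuming the request and LOG of the surrounding
handler (not the committed code):

// Fall back to the default both when there are no headers at all and
// when the version header is simply absent.
String raw = request.headers() != null
    ? request.headers().get(ShuffleHeader.HTTP_HEADER_VERSION)
    : null;
String shuffleVersion =
    raw != null ? raw : ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION;
LOG.debug("Shuffle version: {}", shuffleVersion);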



[GitHub] [hadoop] szilard-nemeth commented on a change in pull request #3259: HADOOP-15327. Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread GitBox


szilard-nemeth commented on a change in pull request #3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684196186



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
##########
@@ -920,31 +1002,50 @@ public void channelOpen(ChannelHandlerContext ctx, 
ChannelStateEvent evt)
 // fetch failure.
 headers.put(RETRY_AFTER_HEADER, String.valueOf(FETCH_RETRY_DELAY));
 sendError(ctx, "", TOO_MANY_REQ_STATUS, headers);
-return;
+  } else {
+super.channelActive(ctx);
+accepted.add(ctx.channel());
+LOG.debug("Added channel: {}. Accepted number of connections={}",
+ctx.channel(), acceptedConnections.get());
   }
-  accepted.add(evt.getChannel());
 }
 
 @Override
-public void messageReceived(ChannelHandlerContext ctx, MessageEvent evt)
+public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+  LOG.trace("Executing channelInactive");
+  super.channelInactive(ctx);
+  acceptedConnections.decrementAndGet();
+  LOG.debug("New value of Accepted number of connections={}",
+  acceptedConnections.get());
+}
+
+@Override
+public void channelRead(ChannelHandlerContext ctx, Object msg)
 throws Exception {
-  HttpRequest request = (HttpRequest) evt.getMessage();
-  if (request.getMethod() != GET) {
+  LOG.trace("Executing channelRead");
+  HttpRequest request = (HttpRequest) msg;
+  LOG.debug("Received HTTP request: {}", request);
+  if (request.method() != GET) {
   sendError(ctx, METHOD_NOT_ALLOWED);
   return;
   }
   // Check whether the shuffle version is compatible
+  String shuffleVersion = ShuffleHeader.DEFAULT_HTTP_HEADER_VERSION;
+  if (request.headers() != null) {
+shuffleVersion = request.headers()
+.get(ShuffleHeader.HTTP_HEADER_VERSION);
+  }
+  LOG.debug("Shuffle version: {}", shuffleVersion);

Review comment:
   Thanks, makes sense. Fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
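
For reference, the recurring shape in all of these hunks is the Netty 3
to Netty 4 handler migration: channelOpen(ctx, ChannelStateEvent) becomes
channelActive(ctx), messageReceived(ctx, MessageEvent) becomes
channelRead(ctx, msg), and the channel and payload are reached through
ctx.channel() and the msg parameter instead of the event object. A
self-contained sketch of that shape (the class name, the MAX_CONNECTIONS
constant and the plain ctx.close() error paths are illustrative
placeholders, not the patch's actual members):

import java.util.concurrent.atomic.AtomicInteger;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpRequest;

public class ConnectionLimitingHandler extends ChannelInboundHandlerAdapter {
  private static final int MAX_CONNECTIONS = 100; // placeholder limit
  private final AtomicInteger acceptedConnections = new AtomicInteger();

  @Override
  public void channelActive(ChannelHandlerContext ctx) throws Exception {
    // Netty 4 counterpart of Netty 3's channelOpen(ctx, ChannelStateEvent).
    if (acceptedConnections.incrementAndGet() > MAX_CONNECTIONS) {
      ctx.close(); // ShuffleHandler instead replies TOO_MANY_REQ + Retry-After
    } else {
      super.channelActive(ctx);
    }
  }

  @Override
  public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    // Netty 4 counterpart of channelClosed; undo the accounting above.
    super.channelInactive(ctx);
    acceptedConnections.decrementAndGet();
  }

  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg)
      throws Exception {
    // Netty 4 counterpart of messageReceived(ctx, MessageEvent): the payload
    // arrives as msg rather than via evt.getMessage(), and Netty 4's
    // HttpRequest accessor is method(), not getMethod().
    HttpRequest request = (HttpRequest) msg;
    if (request.method() != HttpMethod.GET) {
      ctx.close(); // ShuffleHandler instead sends METHOD_NOT_ALLOWED
      return;
    }
    // ... dispatch the shuffle fetch ...
  }
}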



[jira] [Work logged] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-08-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?focusedWorklogId=635108&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-635108
 ]

ASF GitHub Bot logged work on HADOOP-15327:
---

Author: ASF GitHub Bot
Created on: 06/Aug/21 12:27
Start Date: 06/Aug/21 12:27
Worklog Time Spent: 10m 
  Work Description: szilard-nemeth commented on a change in pull request 
#3259:
URL: https://github.com/apache/hadoop/pull/3259#discussion_r684195058



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleHandler.java
##
@@ -920,31 +1002,50 @@ public void channelOpen(ChannelHandlerContext ctx, 
ChannelStateEvent evt)
 // fetch failure.
 headers.put(RETRY_AFTER_HEADER, String.valueOf(FETCH_RETRY_DELAY));
 sendError(ctx, "", TOO_MANY_REQ_STATUS, headers);
-return;
+  } else {
+super.channelActive(ctx);
+accepted.add(ctx.channel());
+LOG.debug("Added channel: {}. Accepted number of connections={}",
+ctx.channel(), acceptedConnections.get());
   }
-  accepted.add(evt.getChannel());
 }
 
 @Override
-public void messageReceived(ChannelHandlerContext ctx, MessageEvent evt)
+public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+  LOG.trace("Executing channelInactive");
+  super.channelInactive(ctx);
+  acceptedConnections.decrementAndGet();
+  LOG.debug("New value of Accepted number of connections={}",
+  acceptedConnections.get());
+}
+
+@Override
+public void channelRead(ChannelHandlerContext ctx, Object msg)
 throws Exception {
-  HttpRequest request = (HttpRequest) evt.getMessage();
-  if (request.getMethod() != GET) {
+  LOG.trace("Executing channelRead");

Review comment:
   Fixed this as well.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 635108)
Time Spent: 2h 20m  (was: 2h 10m)

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
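
The last comment in this thread concerns the bare
LOG.trace("Executing channelRead") call. With the SLF4J-style
parameterized logging used throughout the diff, formatting of {}
placeholders is deferred until the level is actually enabled, so an
explicit isTraceEnabled() guard only pays off when computing an argument
is itself expensive. A small sketch (class and method names are
illustrative, not from the patch):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class TraceLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(TraceLoggingSketch.class);

  void onRead(Object request) {
    LOG.trace("Executing channelRead");              // cheap even when off
    LOG.debug("Received HTTP request: {}", request); // formatted lazily
    if (LOG.isTraceEnabled()) {
      // Guard only because building the argument is itself costly.
      LOG.trace("Request details: {}", expensiveDump(request));
    }
  }

  private String expensiveDump(Object request) {
    return String.valueOf(request); // stand-in for a costly computation
  }
}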


