[GitHub] [hadoop] tomscut commented on pull request #3107: HDFS-16074. Remove an expensive debug string concatenation

2021-06-15 Thread GitBox


tomscut commented on pull request #3107:
URL: https://github.com/apache/hadoop/pull/3107#issuecomment-862101992


   LGTM.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17724) Add Dockerfile for Debian 10

2021-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17724?focusedWorklogId=611751&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-611751
 ]

ASF GitHub Bot logged work on HADOOP-17724:
---

Author: ASF GitHub Bot
Created on: 16/Jun/21 06:42
Start Date: 16/Jun/21 06:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3038:
URL: https://github.com/apache/hadoop/pull/3038#issuecomment-862092602


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  23m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 49s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  18m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  hadolint  |   0m  2s |  |  No new issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  18m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  76m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3038/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3038 |
   | Optional Tests | dupname asflicense codespell hadolint shellcheck 
shelldocs |
   | uname | Linux 5e9cadd7b3eb 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 25813e19bd8f531e1d919ee8093399346aae7bf3 |
   | Max. process+thread count | 520 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3038/3/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 611751)
Time Spent: 1.5h  (was: 1h 20m)

> Add Dockerfile for Debian 10
> 
>
> Key: HADOOP-17724
> URL: https://issues.apache.org/jira/browse/HADOOP-17724
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Adding a Dockerfile for building on Debian 10 since there are a lot of users 
> in the community using this distro.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang opened a new pull request #3107: HDFS-16074. Remove an expensive debug string concatenation

2021-06-15 Thread GitBox


jojochuang opened a new pull request #3107:
URL: https://github.com/apache/hadoop/pull/3107
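
   For context, here is a minimal sketch of the general pattern the PR title refers to, assuming SLF4J-style logging; the class, method, and variable names below are illustrative and are not taken from the actual HDFS-16074 change.

   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   public class DebugLogSketch {
     private static final Logger LOG = LoggerFactory.getLogger(DebugLogSketch.class);

     void logEagerly(Object block, Object state) {
       // Eager concatenation: the message string (and any expensive toString())
       // is built even when debug logging is disabled.
       LOG.debug("Processing block " + block + " in state " + state);
     }

     void logLazily(Object block, Object state) {
       // Parameterized logging: the message is formatted only if the debug
       // level is actually enabled, so the concatenation cost goes away.
       LOG.debug("Processing block {} in state {}", block, state);
     }
   }
   ```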


   





[GitHub] [hadoop] hadoop-yetus commented on pull request #3038: HADOOP-17724. Add Dockerfile for Debian 10

2021-06-15 Thread GitBox


hadoop-yetus commented on pull request #3038:
URL: https://github.com/apache/hadoop/pull/3038#issuecomment-862092602


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  23m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 49s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  18m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  hadolint  |   0m  2s |  |  No new issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  18m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  76m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3038/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3038 |
   | Optional Tests | dupname asflicense codespell hadolint shellcheck 
shelldocs |
   | uname | Linux 5e9cadd7b3eb 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 25813e19bd8f531e1d919ee8093399346aae7bf3 |
   | Max. process+thread count | 520 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3038/3/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on pull request #3100: HDFS-16065. RBF: Add metrics to record Router's operations

2021-06-15 Thread GitBox


hadoop-yetus commented on pull request #3100:
URL: https://github.com/apache/hadoop/pull/3100#issuecomment-862067981


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  14m 54s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3100/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 568 new + 0 
unchanged - 0 fixed = 568 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   1m 20s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3100/4/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  26m  2s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3100/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 102m 43s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
   |  |  Dead store to jm in 
org.apache.hadoop.hdfs.server.federation.router.RouterClientMetrics.create(Configuration)
  At 
RouterClientMetrics.java:org.apache.hadoop.hdfs.server.federation.router.RouterClientMetrics.create(Configuration)
  At RouterClientMetrics.java:[line 197] |
   | Failed junit tests | 
hadoop.hdfs.server.federation.security.TestRouterSecurityManager |
   |   | hadoop.hdfs.server.federation.metrics.TestMetricsBase |
   |   | hadoop.hdfs.server.federation.metrics.TestRBFMetrics |
   |   | hadoop.hdfs.server.federation.router.TestRouterAdminCLI |
   |   | hadoop.hdfs.server.federation.router.TestRouterSafemode |
   |   | hadoop.hdfs.server.federation.router.TestRouter |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache

[jira] [Commented] (HADOOP-17763) DistCp job fails when AM is killed

2021-06-15 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17364072#comment-17364072
 ] 

Bilwa S T commented on HADOOP-17763:


Tasks fail because we use the staging directory to store the split files, and 
that same directory gets deleted whenever the AM relaunches. So we should avoid 
storing split files in the staging directory, as other MapReduce applications do.
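
To illustrate the idea, here is a minimal sketch of writing the copy listing to a 
job-scoped location outside the .staging directory so that it survives an AM 
relaunch. The property name "distcp.listing.dir" and the Text key/value types are 
assumptions made only for this sketch; the real DistCp listing uses its own record 
types and configuration.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class ListingLocationSketch {

  /**
   * Choose a listing location that is not under the job's .staging directory.
   * "distcp.listing.dir" is a hypothetical property used only in this sketch.
   */
  static Path listingPath(Configuration conf, String jobId) {
    String base = conf.get("distcp.listing.dir", "/tmp/distcp-listings");
    return new Path(base, jobId + "/fileList.seq");
  }

  /** Write a placeholder listing entry to the chosen location. */
  static void writeListing(Configuration conf, Path listing) throws Exception {
    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(listing),
        SequenceFile.Writer.keyClass(Text.class),
        SequenceFile.Writer.valueClass(Text.class))) {
      writer.append(new Text("/source/dir/file1"), new Text("placeholder"));
    }
  }
}
{code}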

> DistCp job fails when AM is killed
> --
>
> Key: HADOOP-17763
> URL: https://issues.apache.org/jira/browse/HADOOP-17763
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> Job fails as tasks fail with below exception
> {code:java}
> 2021-06-11 18:48:47,047 | ERROR | IPC Server handler 0 on 27101 | Task: 
> attempt_1623387358383_0006_m_00_1000 - exited : 
> java.io.FileNotFoundException: File does not exist: 
> hdfs://hacluster/staging-dir/dsperf/.staging/_distcp-646531269/fileList.seq
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1637)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1630)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1645)
>  at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1863)
>  at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1886)
>  at 
> org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:54)
>  at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:560)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:798)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>  at org.apache.hadoop.mapred.YarnChild$1.run(YarnChild.java:183)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:177)
>  | TaskAttemptListenerImpl.java:304{code}






[jira] [Created] (HADOOP-17763) DistCp job fails when AM is killed

2021-06-15 Thread Bilwa S T (Jira)
Bilwa S T created HADOOP-17763:
--

 Summary: DistCp job fails when AM is killed
 Key: HADOOP-17763
 URL: https://issues.apache.org/jira/browse/HADOOP-17763
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bilwa S T
Assignee: Bilwa S T


Job fails as tasks fail with below exception
{code:java}
2021-06-11 18:48:47,047 | ERROR | IPC Server handler 0 on 27101 | Task: 
attempt_1623387358383_0006_m_00_1000 - exited : 
java.io.FileNotFoundException: File does not exist: 
hdfs://hacluster/staging-dir/dsperf/.staging/_distcp-646531269/fileList.seq
 at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1637)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1630)
 at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1645)
 at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1863)
 at org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1886)
 at 
org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:54)
 at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:560)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:798)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
 at org.apache.hadoop.mapred.YarnChild$1.run(YarnChild.java:183)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:177)
 | TaskAttemptListenerImpl.java:304{code}






[jira] [Work logged] (HADOOP-17724) Add Dockerfile for Debian 10

2021-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17724?focusedWorklogId=611743&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-611743
 ]

ASF GitHub Bot logged work on HADOOP-17724:
---

Author: ASF GitHub Bot
Created on: 16/Jun/21 05:27
Start Date: 16/Jun/21 05:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3038:
URL: https://github.com/apache/hadoop/pull/3038#issuecomment-862050975


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3038/3/console in 
case of problems.
   




Issue Time Tracking
---

Worklog Id: (was: 611743)
Time Spent: 1h 20m  (was: 1h 10m)

> Add Dockerfile for Debian 10
> 
>
> Key: HADOOP-17724
> URL: https://issues.apache.org/jira/browse/HADOOP-17724
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Adding a Dockerfile for building on Debian 10 since there are a lot of users 
> in the community using this distro.






[GitHub] [hadoop] hadoop-yetus commented on pull request #3038: HADOOP-17724. Add Dockerfile for Debian 10

2021-06-15 Thread GitBox


hadoop-yetus commented on pull request #3038:
URL: https://github.com/apache/hadoop/pull/3038#issuecomment-862050975


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3038/3/console in 
case of problems.
   





[jira] [Commented] (HADOOP-17762) branch-2.10 daily build fails to pull latest changes

2021-06-15 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17364061#comment-17364061
 ] 

Takanobu Asanuma commented on HADOOP-17762:
---

Thanks for reporting it, [~ahussein].

I tried an mvn build with the latest branch-2.10 in my local environment, and it 
succeeded. From the qbt results, QA has been failing since 6/11, but there were no 
commits in the few days before or after that date. I think there is something wrong 
with the Jenkins server.

> branch-2.10 daily build fails to pull latest changes
> 
>
> Key: HADOOP-17762
> URL: https://issues.apache.org/jira/browse/HADOOP-17762
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, yetus
>Affects Versions: 2.10.1
>Reporter: Ahmed Hussein
>Priority: Major
>
> I noticed that the build for branch-2.10 failed to pull the latest changes 
> for the last few days.
> CC: [~aajisaka], [~tasanuma], [~Jim_Brennan]
> https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/329/console
> {code:bash}
> Started by timer
> Running as SYSTEM
> Building remotely on H20 (Hadoop) in workspace 
> /home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64
> The recommended git tool is: NONE
> No credentials specified
> Cloning the remote Git repository
> Using shallow clone with depth 10
> Avoid fetching tags
> Cloning repository https://github.com/apache/hadoop
> ERROR: Failed to clean the workspace
> jenkins.util.io.CompositeIOException: Unable to delete 
> '/home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir'.
>  Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts. 
> (Discarded 1 additional exceptions)
>   at 
> jenkins.util.io.PathRemover.forceRemoveDirectoryContents(PathRemover.java:90)
>   at hudson.Util.deleteContentsRecursive(Util.java:262)
>   at hudson.Util.deleteContentsRecursive(Util.java:251)
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:743)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:161)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:154)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:211)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:54)
>   at hudson.remoting.Request$2.run(Request.java:375)
>   at 
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:73)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   Suppressed: java.nio.file.AccessDeniedException: 
> /home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/current
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
>   at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
>   at java.nio.file.Files.newDirectoryStream(Files.java:457)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:224)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover

[jira] [Commented] (HADOOP-17749) Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-15 Thread Xuesen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17364013#comment-17364013
 ] 

Xuesen Liang commented on HADOOP-17749:
---

Performance test: 
[https://github.com/apache/hadoop/pull/3080#issuecomment-861476259]

 

> Remove lock contention in SelectorPool of SocketIOWithTimeout
> -
>
> Key: HADOOP-17749
> URL: https://issues.apache.org/jira/browse/HADOOP-17749
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Xuesen Liang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> *SelectorPool* in 
> hadoop-common/src/main/java/org/apache/hadoop/net/*SocketIOWithTimeout.java* 
> is a point of lock contention.
> For example: 
> {code:java}
> $ grep 'waiting to lock <0x7f7d94006d90>' 63692.jstack | uniq -c
>  1005 - waiting to lock <0x7f7d94006d90> (a 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool)
> {code}
> and the thread stack is as follows:
> {code:java}
> "IPC Client (324579982) connection to /100.10.6.10:60020 from user_00" #14139 
> daemon prio=5 os_prio=0 tid=0x7f7374039000 nid=0x85cc waiting for monitor 
> entry [0x7f6f45939000]
>  java.lang.Thread.State: BLOCKED (on object monitor)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:390)
>  - waiting to lock <0x7f7d94006d90> (a 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:325)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>  at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
>  - locked <0x7fa818caf258> (a java.io.BufferedInputStream)
>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.readResponse(RpcClientImpl.java:967)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:568)
> {code}
> We should remove the lock contention.
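
As a rough illustration of the direction only (not the actual HADOOP-17749 patch), 
a selector pool backed by a lock-free deque avoids serializing every caller on a 
single monitor the way a synchronized get()/release() pair does:
{code:java}
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.concurrent.ConcurrentLinkedDeque;

public class LockFreeSelectorPoolSketch {
  // Idle selectors available for reuse; all pool operations are lock-free.
  private final ConcurrentLinkedDeque<Selector> idle = new ConcurrentLinkedDeque<>();

  Selector get() throws IOException {
    Selector s = idle.pollFirst();    // non-blocking, no pool-wide lock
    return (s != null) ? s : Selector.open();
  }

  void release(Selector s) {
    idle.offerFirst(s);               // return the selector for reuse
  }
}
{code}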






[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-06-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17363928#comment-17363928
 ] 

Hadoop QA commented on HADOOP-15327:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
28s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 3 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
34s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
39s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
57s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 7s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
5s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 42s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
46s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 36m  
7s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
35s{color} | {color:blue}{color} | {color:blue} branch/hadoop-project no 
spotbugs output file (spotbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
44s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
47s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
47s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 
44s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 24m 
44s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
6m  1s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/200/artifact/out/diff-checkstyle-root.txt{color}
 | {color:orange} root: The patch generated 116 new + 83 unchanged - 7 fixed = 
199 total (was 90) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/200/artifact/ou

[GitHub] [hadoop] kihwal commented on pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2021-06-15 Thread GitBox


kihwal commented on pull request #3065:
URL: https://github.com/apache/hadoop/pull/3065#issuecomment-861825275


   I think the patch looks good except for the things that were already pointed 
out by others. 





[GitHub] [hadoop] kihwal commented on a change in pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2021-06-15 Thread GitBox


kihwal commented on a change in pull request #3065:
URL: https://github.com/apache/hadoop/pull/3065#discussion_r652143134



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##
@@ -3220,21 +3173,28 @@ private void reportDiffSortedInner(
 // comes from the IBR / FBR and hence what we should use to compare
 // against the memory state.
 // See HDFS-6289 and HDFS-15422 for more context.
-queueReportedBlock(storageInfo, replica, reportedState,
+queueReportedBlock(storageInfo, storedBlock, reportedState,

Review comment:
   That's right. This will undo the fix.







[GitHub] [hadoop] hadoop-yetus commented on pull request #3105: HDFS-16070. DataTransfer block storm when datanode's io is busy.

2021-06-15 Thread GitBox


hadoop-yetus commented on pull request #3105:
URL: https://github.com/apache/hadoop/pull/3105#issuecomment-861702037


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 230m 24s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3105/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 315m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3105/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3105 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 21d8dd3a42bb 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 417e6e97aba14db285cb401a87a888018684946b |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3105/2/testReport/ |
   | Max. process+thread count | 3346 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3105/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Created] (HADOOP-17762) branch-2.10 daily build fails to pull latest changes

2021-06-15 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17762:
--

 Summary: branch-2.10 daily build fails to pull latest changes
 Key: HADOOP-17762
 URL: https://issues.apache.org/jira/browse/HADOOP-17762
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, yetus
Affects Versions: 2.10.1
Reporter: Ahmed Hussein


I noticed that the build for branch-2.10 failed to pull the latest changes for 
the last few days.

CC: [~aajisaka], [~tasanuma], [~Jim_Brennan]

https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/329/console

{code:bash}
Started by timer
Running as SYSTEM
Building remotely on H20 (Hadoop) in workspace 
/home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64
The recommended git tool is: NONE
No credentials specified
Cloning the remote Git repository
Using shallow clone with depth 10
Avoid fetching tags
Cloning repository https://github.com/apache/hadoop
ERROR: Failed to clean the workspace
jenkins.util.io.CompositeIOException: Unable to delete 
'/home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir'.
 Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 
1 additional exceptions)
at 
jenkins.util.io.PathRemover.forceRemoveDirectoryContents(PathRemover.java:90)
at hudson.Util.deleteContentsRecursive(Util.java:262)
at hudson.Util.deleteContentsRecursive(Util.java:251)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:743)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:161)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:154)
at hudson.remoting.UserRequest.perform(UserRequest.java:211)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:375)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:73)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.nio.file.AccessDeniedException: 
/home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/current
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
at java.nio.file.Files.newDirectoryStream(Files.java:457)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:224)
at 
jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
at 
jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
at 
jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
at 
jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
at 
jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
at 
jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
at 
jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
at 
jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
at 
jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
at 
jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
at 
jenkins.util.io.PathRemo

[GitHub] [hadoop] goiri commented on a change in pull request #3100: HDFS-16065. RBF: Add metrics to record Router's operations

2021-06-15 Thread GitBox


goiri commented on a change in pull request #3100:
URL: https://github.com/apache/hadoop/pull/3100#discussion_r651989668



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
##
@@ -471,6 +471,9 @@ private Object invokeMethod(
 if (this.rpcMonitor != null) {
   this.rpcMonitor.proxyOpComplete(true);
 }
+if (this.router.getRouterMetrics() != null) {
+  this.router.getRouterMetrics().incInvokedMethod(method);

Review comment:
   I think it would be better.







[GitHub] [hadoop] goiri commented on a change in pull request #3105: HDFS-16070. DataTransfer block storm when datanode's io is busy.

2021-06-15 Thread GitBox


goiri commented on a change in pull request #3105:
URL: https://github.com/apache/hadoop/pull/3105#discussion_r651985111



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBusyIODataNode.java
##
@@ -0,0 +1,221 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode;
+
+import static org.mockito.Mockito.atLeastOnce;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.lang.reflect.Modifier;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
+import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.hdfs.server.namenode.INodeFile;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class TestBusyIODataNode {
+
+  public static final Logger LOG = LoggerFactory.getLogger(TestBusyIODataNode
+  .class);
+
+  private MiniDFSCluster cluster;
+  private Configuration conf;
+  private FSNamesystem fsn;
+  private BlockManager bm;
+
+  static final long SEED = 0xDEADBEEFL;
+  static final int BLOCK_SIZE = 8192;
+  private static final int HEARTBEAT_INTERVAL = 1;
+
+  private final Path dir = new Path("/" + this.getClass().getSimpleName());
+
+  @Before
+  public void setUp() throws Exception {
+conf = new HdfsConfiguration();
+conf.setTimeDuration(
+DFSConfigKeys.DFS_DATANODE_DISK_CHECK_MIN_GAP_KEY,
+0, TimeUnit.MILLISECONDS);
+conf.setInt(DFSConfigKeys.DFS_REPLICATION_KEY, 1);
+conf.setInt(
+DFSConfigKeys.DFS_NAMENODE_RECONSTRUCTION_PENDING_TIMEOUT_SEC_KEY,
+1);
+conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
+conf.setInt(DFSConfigKeys.DFS_BLOCKREPORT_INTERVAL_MSEC_KEY, 1000);
+conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, HEARTBEAT_INTERVAL);
+cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+cluster.waitActive();
+fsn = cluster.getNamesystem();
+bm = fsn.getBlockManager();
+  }
+
+  @After
+  public void tearDown() throws Exception {
+if (cluster != null) {
+  cluster.shutdown();
+  cluster = null;
+}
+  }
+
+  static protected void writeFile(FileSystem fileSys, Path name, int repl)
+  throws IOException {
+writeFile(fileSys, name, repl, 2);
+  }
+
+  static protected void writeFile(FileSystem fileSys, Path name, int repl,

Review comment:
   protected static void writeFile() and the same for the other methods.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
##
@@ -2646,6 +2655,7 @@ public void run() {
   } catch (Throwable t) {
 LOG.error("Failed to transfer block {}", b, t);
   } finally {
+transferringBlock.remove(b);

Review comment:
   Are we sure we are cleaning this up and won't leave garbage?
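
   For reference, a minimal, hypothetical sketch of the add/remove-in-finally pattern this diff relies on (the class and variable names are illustrative, not the actual DataNode code): every entry added before a transfer starts is dropped in the finally block, so the tracking set cannot accumulate stale entries even when the transfer throws.

   ```java
   import java.util.Set;
   import java.util.concurrent.ConcurrentHashMap;

   public class TransferTrackerSketch {
     // Blocks with an in-flight transfer; a thread-safe set.
     private final Set<String> transferring = ConcurrentHashMap.newKeySet();

     void runTransfer(String blockId, Runnable transfer) {
       if (!transferring.add(blockId)) {
         return;                        // a transfer for this block is already running
       }
       try {
         transfer.run();
       } finally {
         transferring.remove(blockId);  // always clean up, even on failure
       }
     }
   }
   ```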

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBusyIODataNode.java
##
@@ -0,0 +1,221 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional inf

[jira] [Updated] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-06-15 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated HADOOP-15327:

Attachment: HADOOP-15327.004.patch

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log
>
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)






[GitHub] [hadoop] symious commented on a change in pull request #3100: HDFS-16065. RBF: Add metrics to record Router's operations

2021-06-15 Thread GitBox


symious commented on a change in pull request #3100:
URL: https://github.com/apache/hadoop/pull/3100#discussion_r651845997



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
##
@@ -471,6 +471,9 @@ private Object invokeMethod(
 if (this.rpcMonitor != null) {
   this.rpcMonitor.proxyOpComplete(true);
 }
+if (this.router.getRouterMetrics() != null) {
+  this.router.getRouterMetrics().incInvokedMethod(method);

Review comment:
   The metrics name is "RouterActivity", which seems somewhat related. Or do we 
need to create a new one named "RouterClientActivity"?
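
   As a rough sketch of what a separately named source could look like with the Hadoop metrics2 annotations (the source name and the single counter below are hypothetical, for illustration only, not the code in this PR):

   ```java
   import org.apache.hadoop.metrics2.annotation.Metric;
   import org.apache.hadoop.metrics2.annotation.Metrics;
   import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
   import org.apache.hadoop.metrics2.lib.MutableCounterLong;

   @Metrics(name = "RouterClientActivity",
       about = "Client operations proxied by the Router", context = "dfs")
   public class RouterClientActivitySketch {

     @Metric("Number of getBlockLocations calls")
     private MutableCounterLong getBlockLocationsOps;

     static RouterClientActivitySketch create() {
       // Registering under the proposed name makes it a separate metrics source.
       return DefaultMetricsSystem.instance().register("RouterClientActivity",
           "Client operations proxied by the Router",
           new RouterClientActivitySketch());
     }

     void incrGetBlockLocations() {
       getBlockLocationsOps.incr();
     }
   }
   ```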







[jira] [Commented] (HADOOP-11890) Uber-JIRA: Hadoop should support IPv6

2021-06-15 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17363662#comment-17363662
 ] 

Arpit Agarwal commented on HADOOP-11890:


Thanks [~hemanthboyina]. What do you see as the next steps? Do you want to 
propose merging your rebased changes on Apache trunk?

> Uber-JIRA: Hadoop should support IPv6
> -
>
> Key: HADOOP-11890
> URL: https://issues.apache.org/jira/browse/HADOOP-11890
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Major
>  Labels: ipv6
> Attachments: hadoop_2.7.3_ipv6_commits.txt
>
>
> Hadoop currently treats IPv6 as unsupported.  Track related smaller issues to 
> support IPv6.
> (Current case here is mainly HBase on HDFS, so any suggestions about other 
> test cases/workload are really appreciated.)






[GitHub] [hadoop] bogthe commented on pull request #3101: S3/hadoop 17139 enable copy from local

2021-06-15 Thread GitBox


bogthe commented on pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#issuecomment-861506771


   Thank you for having a look; I really like those suggestions. I'll update 
this PR and throw in the changes to `filesystem.md` too!





[jira] [Work logged] (HADOOP-17749) Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17749?focusedWorklogId=611297&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-611297
 ]

ASF GitHub Bot logged work on HADOOP-17749:
---

Author: ASF GitHub Bot
Created on: 15/Jun/21 12:59
Start Date: 15/Jun/21 12:59
Worklog Time Spent: 10m 
  Work Description: liangxs commented on pull request #3080:
URL: https://github.com/apache/hadoop/pull/3080#issuecomment-861476259


   
   I tested the performance of the trunk version and the optimized version.
   
   ### Test Case
   
   The test steps are as follows:
   
   1. Start a netty-echo-server with 100 service ports.
   ```
   private static void startServer() throws Exception {
 ChannelHandler serverHandler = new EchoHandler();
 for (int i = 0; i < 100; ++i) {
   ServerBootstrap b = new ServerBootstrap();
   b.group(new NioEventLoopGroup(1), new NioEventLoopGroup(2))
   .channel(NioServerSocketChannel.class)
   .option(ChannelOption.SO_BACKLOG, 512)
   .childOption(ChannelOption.SO_TIMEOUT, timeout)
   .childOption(ChannelOption.TCP_NODELAY, true)
   .childHandler(serverHandler);
   ChannelFuture f = b.bind(host, port + i).sync();
 }
 Thread.sleep(Integer.MAX_VALUE);
   }
   ```
   
   2. Start a Hadoop socket client with multiple threads.
   These threads connect to the netty-echo-server's ports in a round-robin manner.
   ```
   private static void startClient(int threadCnt) throws Exception {
 SocketFactory factory = new StandardSocketFactory();
 Thread[] tArray = new Thread[threadCnt];
 CountDownLatch latch = new CountDownLatch(threadCnt);
 for (int i = 0; i < threadCnt; ++i) {
   final int curPort = port + (i % 100);  // round robin
   Thread t = new Thread(() -> {
 try {
   Socket socket = factory.createSocket();
   socket.setTcpNoDelay(true);
   socket.setKeepAlive(false);
   NetUtils.connect(socket, new java.net.InetSocketAddress(host, 
curPort), timeout);
   socket.setSoTimeout(timeout);
   ...
   ...
   ```
   
   3. Each client thread sends to and receives from its corresponding netty-server port 1024 times, with 256 bytes of data each time.
   ```
 InputStream inStream = NetUtils.getInputStream(socket);
 OutputStream outStream = NetUtils.getOutputStream(socket, timeout);
 DataInputStream in = new DataInputStream(new 
BufferedInputStream(inStream));
 DataOutputStream out = new DataOutputStream(new 
BufferedOutputStream(outStream));
   
 byte[] buf = new byte[256];
 for (int j = 0; j < 1024; ++j) {
   out.write(buf);
   out.flush();
   in.readFully(buf);
 }
   ```
   
   4. Print the total cost (a minimal timing sketch follows the project link below).
   
   
   Code project:  
[https://github.com/liangxs/test-HADOOP-17749](https://github.com/liangxs/test-HADOOP-17749)
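   A minimal timing wrapper for step 4 might look like the sketch below. This is a hypothetical reconstruction, not code from the linked project: `tArray` and `latch` are the variables built in step 2, and each worker thread is assumed to call `latch.countDown()` after finishing its 1024 round trips.
   
   ```
   // Hypothetical step-4 harness (not taken from the linked test project).
   // Assumes java.util.concurrent.CountDownLatch is already imported by the client class.
   private static long runAndMeasure(Thread[] tArray, CountDownLatch latch)
       throws InterruptedException {
     long start = System.currentTimeMillis();
     for (Thread t : tArray) {
       t.start();
     }
     latch.await();  // wait until every client thread has counted down
     return System.currentTimeMillis() - start;  // total cost in millis
   }
   ```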
   
   
   
   ### Test Result
   
   The test results are as follows:
   
   ```
   | client thread count | 100 | 200 | 400  | 800  | 1200 | 1600 | 2000 | 2400 | 2800 |
   |---------------------|-----|-----|------|------|------|------|------|------|------|
   | trunk (millis)      | 351 | 609 | 1058 | 2024 | 2907 | 3882 | 5062 | 5675 | 7117 |
   | optimized (millis)  | 253 | 438 |  799 | 1167 | 1561 | 2422 | 2784 | 2813 | 3371 |
   | improved            | 38% | 39% |  32% |  70% |  86% |  60% |  82% | 102% | 111% |
   ```
   
   
   
   
   P.S. I used two test machines with the same hardware and software configuration, on the same rack:
   
   ```
   $ lscpu
   Architecture:  x86_64
   CPU(s):56
   On-line CPU(s) list:   0-55
   Thread(s) per core:2
   Core(s) per socket:14
   Socket(s): 2
   NUMA node(s):  2
   Model name:Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
   CPU MHz:   2401.000
   
   $ free -g
          total   used   free   shared   buff/cache   available
   Mem:      62      7     37        0           17          54
   
   $ lspci | grep Ethernet
   06:00.0 Ethernet controller: Intel Corporation Ethernet Controller 
10-Gigabit X540-AT2 (rev 01)
   
   $ uname -r
   3.10.107-1
   ```
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 611297)
Time Spent: 1h  (was: 50m)

> Remove lock contention in SelectorPool of SocketIOWithTimeout
> --

[GitHub] [hadoop] liangxs commented on pull request #3080: HADOOP-17749. Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-15 Thread GitBox


liangxs commented on pull request #3080:
URL: https://github.com/apache/hadoop/pull/3080#issuecomment-861476259


   
   I tested the performance of the trunk version and the optimized version.
   
   ### Test Case
   
   The test steps are as follows:
   
   1. Start a netty-echo-server with 100 service ports.
   ```
   private static void startServer() throws Exception {
 ChannelHandler serverHandler = new EchoHandler();
 for (int i = 0; i < 100; ++i) {
   ServerBootstrap b = new ServerBootstrap();
   b.group(new NioEventLoopGroup(1), new NioEventLoopGroup(2))
   .channel(NioServerSocketChannel.class)
   .option(ChannelOption.SO_BACKLOG, 512)
   .childOption(ChannelOption.SO_TIMEOUT, timeout)
   .childOption(ChannelOption.TCP_NODELAY, true)
   .childHandler(serverHandler);
   ChannelFuture f = b.bind(host, port + i).sync();
 }
 Thread.sleep(Integer.MAX_VALUE);
   }
   ```
   
   2. Start a Hadoop socket client with multiple threads.
   These threads connect to the netty-echo-server's ports in a round-robin manner.
   ```
   private static void startClient(int threadCnt) throws Exception {
 SocketFactory factory = new StandardSocketFactory();
 Thread[] tArray = new Thread[threadCnt];
 CountDownLatch latch = new CountDownLatch(threadCnt);
 for (int i = 0; i < threadCnt; ++i) {
   final int curPort = port + (i % 100);  // round robin
   Thread t = new Thread(() -> {
 try {
   Socket socket = factory.createSocket();
   socket.setTcpNoDelay(true);
   socket.setKeepAlive(false);
   NetUtils.connect(socket, new java.net.InetSocketAddress(host, 
curPort), timeout);
   socket.setSoTimeout(timeout);
   ...
   ...
   ```
   
   3. Each client thread sends to and receives from its corresponding netty-server port 1024 times, with 256 bytes of data each time.
   ```
 InputStream inStream = NetUtils.getInputStream(socket);
 OutputStream outStream = NetUtils.getOutputStream(socket, timeout);
 DataInputStream in = new DataInputStream(new 
BufferedInputStream(inStream));
 DataOutputStream out = new DataOutputStream(new 
BufferedOutputStream(outStream));
   
 byte[] buf = new byte[256];
 for (int j = 0; j < 1024; ++j) {
   out.write(buf);
   out.flush();
   in.readFully(buf);
 }
   ```
   
   4. Print the total cost.
   
   
   Code project:  
[https://github.com/liangxs/test-HADOOP-17749](https://github.com/liangxs/test-HADOOP-17749)
   
   
   
   ### Test Result
   
   The test results are as follows:
   
   ```
   | client thread count | 100 | 200 | 400  | 800  | 1200 | 1600 | 2000 | 2400 | 2800 |
   |---------------------|-----|-----|------|------|------|------|------|------|------|
   | trunk (millis)      | 351 | 609 | 1058 | 2024 | 2907 | 3882 | 5062 | 5675 | 7117 |
   | optimized (millis)  | 253 | 438 |  799 | 1167 | 1561 | 2422 | 2784 | 2813 | 3371 |
   | improved            | 38% | 39% |  32% |  70% |  86% |  60% |  82% | 102% | 111% |
   ```
   
   
   
   
   P.S. I used two test machines with the same hardware and software configuration, on the same rack:
   
   ```
   $ lscpu
   Architecture:  x86_64
   CPU(s):56
   On-line CPU(s) list:   0-55
   Thread(s) per core:2
   Core(s) per socket:14
   Socket(s): 2
   NUMA node(s):  2
   Model name:Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
   CPU MHz:   2401.000
   
   $ free -g
          total   used   free   shared   buff/cache   available
   Mem:      62      7     37        0           17          54
   
   $ lspci | grep Ethernet
   06:00.0 Ethernet controller: Intel Corporation Ethernet Controller 
10-Gigabit X540-AT2 (rev 01)
   
   $ uname -r
   3.10.107-1
   ```
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17347) ABFS: Optimise read for small files/tails of files

2021-06-15 Thread Mukund Thakur (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17363617#comment-17363617
 ] 

Mukund Thakur commented on HADOOP-17347:


I am really having a hard time understanding the changes in this PR, especially the 
newly added tests. A bit of documentation would really be helpful. 

> ABFS: Optimise read for small files/tails of files
> --
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 12.5h
>  Remaining Estimate: 0h
>
> Optimize read performance for the following scenarios:
>  # Read small files completely
>  Files that are smaller than the read buffer size can be considered small files. 
> For such files it is better to read the full file into the AbfsInputStream buffer.
>  # Read the last block if the read is for the footer
>  If the read is for the last 8 bytes, read the full file.
>  This will optimize reads for Parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]
> Both these optimizations are available under the following configs:
>  # fs.azure.read.smallfilescompletely
>  # fs.azure.read.optimizefooterread
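
The description above names the two config keys; a minimal, hedged sketch of switching them on from client code is below. The ABFS account and container in the URI are placeholders, and the rest uses only the standard `Configuration`/`FileSystem` API.

```
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class AbfsSmallFileReadConfig {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Config keys taken from the issue description above.
    conf.setBoolean("fs.azure.read.smallfilescompletely", true);
    conf.setBoolean("fs.azure.read.optimizefooterread", true);

    // "container" and "account" are placeholders for a real ABFS endpoint.
    FileSystem fs = FileSystem.get(
        new URI("abfss://container@account.dfs.core.windows.net/"), conf);
    System.out.println("Opened " + fs.getUri()
        + " with small-file and footer read optimizations enabled");
  }
}
```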



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhengchenyu edited a comment on pull request #3105: HDFS-16070. DataTransfer block storm when datanode's io is busy.

2021-06-15 Thread GitBox


zhengchenyu edited a comment on pull request #3105:
URL: https://github.com/apache/hadoop/pull/3105#issuecomment-861146706


   @ayushtkn @goiri  can you help me review this PR? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11890) Uber-JIRA: Hadoop should support IPv6

2021-06-15 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17363591#comment-17363591
 ] 

Hemanth Boyina commented on HADOOP-11890:
-

Thanks for the ping [~arp], and sorry for the late response.

Yes, we have tried these changes. The subtasks under this Jira were written on 
top of branch-2.7.3, so we manually rebased all of them on top of trunk and 
deployed the cluster, but there was an issue while parsing IPv6 addresses in 
NetUtils#createSocketAddr. NetUtils#createSocketAddr is a common util that parses 
an IP address and creates an InetSocketAddress; it is used all over Hadoop, so we 
modified it to support IPv6 addresses.

In HADOOP-17542 we have attached the test scenarios that we verified on a 
successful deployment of Hadoop with IPv6.

> Uber-JIRA: Hadoop should support IPv6
> -
>
> Key: HADOOP-11890
> URL: https://issues.apache.org/jira/browse/HADOOP-11890
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Major
>  Labels: ipv6
> Attachments: hadoop_2.7.3_ipv6_commits.txt
>
>
> Hadoop currently treats IPv6 as unsupported.  Track related smaller issues to 
> support IPv6.
> (Current case here is mainly HBase on HDFS, so any suggestions about other 
> test cases/workloads are really appreciated.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhengchenyu edited a comment on pull request #3105: HDFS-16070. DataTransfer block storm when datanode's io is busy.

2021-06-15 Thread GitBox


zhengchenyu edited a comment on pull request #3105:
URL: https://github.com/apache/hadoop/pull/3105#issuecomment-861146706


   @ayushtkn @goiri  can you help me review this PR? 
   I have fixed asflicense and checkstyle. But I don't know why I can't "reopen 
and comment" on this PR. Can you help me reopen it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry

2021-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=611262&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-611262
 ]

ASF GitHub Bot logged work on HADOOP-15566:
---

Author: ASF GitHub Bot
Created on: 15/Jun/21 11:27
Start Date: 15/Jun/21 11:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2816:
URL: https://github.com/apache/hadoop/pull/2816#issuecomment-861418213


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 35s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  15m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  16m  6s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  22m  7s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/9/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 1982 unchanged - 1 
fixed = 1983 total (was 1983)  |
   | +1 :green_heart: |  compile  |  19m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  19m 56s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/9/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 1859 
unchanged - 1 fixed = 1860 total (was 1860)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 58s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/9/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 22 new + 4 unchanged - 1 fixed = 26 total (was 5) 
 |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 32s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  16m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 3

[GitHub] [hadoop] hadoop-yetus commented on pull request #2816: HADOOP-15566 initial changes for opentelemetry - WIP

2021-06-15 Thread GitBox


hadoop-yetus commented on pull request #2816:
URL: https://github.com/apache/hadoop/pull/2816#issuecomment-861418213


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 35s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  15m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  16m  6s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  22m  7s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/9/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 1982 unchanged - 1 
fixed = 1983 total (was 1983)  |
   | +1 :green_heart: |  compile  |  19m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  19m 56s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/9/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 1859 
unchanged - 1 fixed = 1860 total (was 1860)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 58s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/9/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 22 new + 4 unchanged - 1 fixed = 26 total (was 5) 
 |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 32s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  16m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 34s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  18m  6s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 200m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hado

[jira] [Work logged] (HADOOP-17596) ABFS: Change default Readahead Queue Depth from num(processors) to const

2021-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17596?focusedWorklogId=611261&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-611261
 ]

ASF GitHub Bot logged work on HADOOP-17596:
---

Author: ASF GitHub Bot
Created on: 15/Jun/21 11:24
Start Date: 15/Jun/21 11:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3106:
URL: https://github.com/apache/hadoop/pull/3106#issuecomment-861416575


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | -1 :x: |  mvninstall  |  30m 30s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3106/1/artifact/out/branch-mvninstall-root.txt)
 |  root in branch-3.3 failed.  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  16m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 56s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  73m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3106/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3106 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 9bd7642b85c1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / e888bb3e01e0695dd02f08986f478897f371414b |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3106/1/testReport/ |
   | Max. process+thread count | 617 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3106/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 611261)
Time Spent: 4h 10m  (was: 4h)

> ABFS: Change default Readahead Queue Depth from num(processors) to const
> 
>
> Key: HADOOP-17596
> URL: https://issues.apache.org/jira/browse/HADOOP-17596
> Project: Hadoop Common
>   

[GitHub] [hadoop] hadoop-yetus commented on pull request #3106: HADOOP-17596. ABFS: Change default Readahead Queue Depth from num(processors) to const

2021-06-15 Thread GitBox


hadoop-yetus commented on pull request #3106:
URL: https://github.com/apache/hadoop/pull/3106#issuecomment-861416575


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | -1 :x: |  mvninstall  |  30m 30s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3106/1/artifact/out/branch-mvninstall-root.txt)
 |  root in branch-3.3 failed.  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  16m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 56s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  73m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3106/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3106 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 9bd7642b85c1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / e888bb3e01e0695dd02f08986f478897f371414b |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3106/1/testReport/ |
   | Max. process+thread count | 617 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3106/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhengchenyu removed a comment on pull request #3105: HDFS-16070. DataTransfer block storm when datanode's io is busy.

2021-06-15 Thread GitBox


zhengchenyu removed a comment on pull request #3105:
URL: https://github.com/apache/hadoop/pull/3105#issuecomment-861412323


   fix checkstyle and asflicense


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhengchenyu commented on pull request #3105: HDFS-16070. DataTransfer block storm when datanode's io is busy.

2021-06-15 Thread GitBox


zhengchenyu commented on pull request #3105:
URL: https://github.com/apache/hadoop/pull/3105#issuecomment-861412323


   fix checkstyle and asflicense


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhengchenyu closed pull request #3105: HDFS-16070. DataTransfer block storm when datanode's io is busy.

2021-06-15 Thread GitBox


zhengchenyu closed pull request #3105:
URL: https://github.com/apache/hadoop/pull/3105


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry

2021-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=611249&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-611249
 ]

ASF GitHub Bot logged work on HADOOP-15566:
---

Author: ASF GitHub Bot
Created on: 15/Jun/21 11:08
Start Date: 15/Jun/21 11:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2816:
URL: https://github.com/apache/hadoop/pull/2816#issuecomment-861407165


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  15m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  16m 15s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 38s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  20m 46s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/8/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 1986 unchanged - 1 
fixed = 1987 total (was 1987)  |
   | +1 :green_heart: |  compile  |  19m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 56s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/8/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 22 new + 4 unchanged - 1 fixed = 26 total (was 5) 
 |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 38s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  15m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 38s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  17m 51s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 193m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache

[GitHub] [hadoop] hadoop-yetus commented on pull request #2816: HADOOP-15566 initial changes for opentelemetry - WIP

2021-06-15 Thread GitBox


hadoop-yetus commented on pull request #2816:
URL: https://github.com/apache/hadoop/pull/2816#issuecomment-861407165


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  15m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  16m 15s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 38s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  20m 46s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/8/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 1986 unchanged - 1 
fixed = 1987 total (was 1987)  |
   | +1 :green_heart: |  compile  |  19m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 56s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/8/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 22 new + 4 unchanged - 1 fixed = 26 total (was 5) 
 |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 38s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  15m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 38s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  17m 51s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 193m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2816 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml shellcheck shelldocs spotbugs 
checkstyle |
   | uname | Linux ba1cc0213484 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build too

[jira] [Work logged] (HADOOP-17596) ABFS: Change default Readahead Queue Depth from num(processors) to const

2021-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17596?focusedWorklogId=611228&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-611228
 ]

ASF GitHub Bot logged work on HADOOP-17596:
---

Author: ASF GitHub Bot
Created on: 15/Jun/21 10:09
Start Date: 15/Jun/21 10:09
Worklog Time Spent: 10m 
  Work Description: sumangala-patki opened a new pull request #3106:
URL: https://github.com/apache/hadoop/pull/3106


   Contributed by Sumangala Patki.
   
   (cherry picked from commit 76d92eb2a22c71b5fcde88a9b4d2faec81a1cb9f)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 611228)
Time Spent: 4h  (was: 3h 50m)

> ABFS: Change default Readahead Queue Depth from num(processors) to const
> 
>
> Key: HADOOP-17596
> URL: https://issues.apache.org/jira/browse/HADOOP-17596
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> The default value of readahead queue depth is currently set to the number of 
> available processors. However, this can result in one inputstream instance 
> consuming more processor time. To ensure equal thread allocation during read 
> for all inputstreams created in a session, we change the default readahead 
> queue depth to a constant (2).
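
A hedged sketch of overriding this default on the client side is below. The key name `fs.azure.readaheadqueue.depth` is quoted from memory and is not stated in this issue, so verify it against `ConfigurationKeys` in hadoop-azure before relying on it.

```
import org.apache.hadoop.conf.Configuration;

public class AbfsReadAheadDepthOverride {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumed key name; check org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.
    conf.setInt("fs.azure.readaheadqueue.depth", 2);
    System.out.println("readahead queue depth = "
        + conf.getInt("fs.azure.readaheadqueue.depth", -1));
  }
}
```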



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sumangala-patki opened a new pull request #3106: HADOOP-17596. ABFS: Change default Readahead Queue Depth from num(processors) to const

2021-06-15 Thread GitBox


sumangala-patki opened a new pull request #3106:
URL: https://github.com/apache/hadoop/pull/3106


   Contributed by Sumangala Patki.
   
   (cherry picked from commit 76d92eb2a22c71b5fcde88a9b4d2faec81a1cb9f)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2021-06-15 Thread GitBox


sodonnel commented on a change in pull request #3065:
URL: https://github.com/apache/hadoop/pull/3065#discussion_r651611235



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##
@@ -3220,21 +3173,28 @@ private void reportDiffSortedInner(
 // comes from the IBR / FBR and hence what we should use to compare
 // against the memory state.
 // See HDFS-6289 and HDFS-15422 for more context.
-queueReportedBlock(storageInfo, replica, reportedState,
+queueReportedBlock(storageInfo, storedBlock, reportedState,

Review comment:
   I think this change is incorrect - we should be queuing the details 
reported in the IBR, which I think is "replica" here. This change was made in 
HDFS-15422.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3105: HDFS-16070. DataTransfer block storm when datanode's io is busy.

2021-06-15 Thread GitBox


hadoop-yetus commented on pull request #3105:
URL: https://github.com/apache/hadoop/pull/3105#issuecomment-861320025


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 54s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3105/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 149 unchanged 
- 0 fixed = 152 total (was 149)  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 231m 22s |  |  hadoop-hdfs in the patch 
passed.  |
   | -1 :x: |  asflicense  |   0m 47s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3105/1/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 315m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3105/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3105 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux a54436f5e3d2 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 06790eacf1e3d2ebd2ff245fedda54e361431937 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3105/1/testReport/ |
   | Max. process+thread count | 3015 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3105/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] tasanuma commented on pull request #3104: HDFS-16068. WebHdfsFileSystem has a possible connection leak in connection with HttpFS

2021-06-15 Thread GitBox


tasanuma commented on pull request #3104:
URL: https://github.com/apache/hadoop/pull/3104#issuecomment-861292413


   Thanks for your reviews and commits, @hemanthboyina and @tomscut. I will 
cherry-pick to lower branches.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17542) IPV6 support in Netutils#createSocketAddress

2021-06-15 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17363447#comment-17363447
 ] 

Hemanth Boyina commented on HADOOP-17542:
-

[~gb.ana...@gmail.com] can you please raise it as a GitHub PR?

> IPV6 support in Netutils#createSocketAddress 
> -
>
> Key: HADOOP-17542
> URL: https://issues.apache.org/jira/browse/HADOOP-17542
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: ANANDA G B
>Priority: Minor
>  Labels: ipv6
> Attachments: HADOOP-17542-HADOOP-11890-001.patch, Test Scenarios 
> Verified in IPV6 cluster.doc
>
>
> Currently NetUtils#createSocketAddr does not support a target that is an IPv6 IP. 
> If the target is an IPv6 IP, it throws "Does not contain a valid host:port 
> authority: ".
> This needs to be supported.
> public static InetSocketAddress createSocketAddr(String target,
>     int defaultPort,
>     String configName,
>     boolean useCacheIfPresent) {
>   String helpText = "";
>   if (configName != null) {
>     helpText = " (configuration property '" + configName + "')";
>   }
>   if (target == null) {
>     throw new IllegalArgumentException("Target address cannot be null." + helpText);
>   }
>   target = target.trim();
>   boolean hasScheme = target.contains("://");
>   URI uri = createURI(target, hasScheme, helpText, useCacheIfPresent);
>   String host = uri.getHost();
>   int port = uri.getPort();
>   if (port == -1) {
>     port = defaultPort;
>   }
>   String path = uri.getPath();
>   if ((host == null) || (port < 0) ||
>       (!hasScheme && path != null && !path.isEmpty())) {
>     throw new IllegalArgumentException(
>         "Does not contain a valid host:port authority: " + target + helpText);
>   }
>   return createSocketAddrForHost(host, port);
> }
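
To make the reported failure concrete, here is a small, hedged demo of calling the util with an IPv6 literal. Whether the bracketed form is accepted or rejected depends on the branch, so treat it as an illustration of the report above rather than a verified reproduction; the class name is made up for the example.

```
import java.net.InetSocketAddress;
import org.apache.hadoop.net.NetUtils;

public class Ipv6SocketAddrDemo {
  public static void main(String[] args) {
    // An IPv4 host:port authority parses fine.
    InetSocketAddress v4 = NetUtils.createSocketAddr("10.0.0.1:8020", 8020);
    System.out.println("IPv4 parsed: " + v4);

    // Per the report above, an IPv6 target is rejected with
    // "Does not contain a valid host:port authority: ...".
    try {
      InetSocketAddress v6 = NetUtils.createSocketAddr("[2001:db8::1]:8020", 8020);
      System.out.println("IPv6 parsed: " + v6);
    } catch (IllegalArgumentException e) {
      System.out.println("IPv6 target rejected: " + e.getMessage());
    }
  }
}
```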



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hemanthboyina commented on pull request #3104: HDFS-16068. WebHdfsFileSystem has a possible connection leak in connection with HttpFS

2021-06-15 Thread GitBox


hemanthboyina commented on pull request #3104:
URL: https://github.com/apache/hadoop/pull/3104#issuecomment-861272730


   Committed to trunk. Thanks for the contribution @tasanuma, and thanks for the 
review @tomscut 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hemanthboyina merged pull request #3104: HDFS-16068. WebHdfsFileSystem has a possible connection leak in connection with HttpFS

2021-06-15 Thread GitBox


hemanthboyina merged pull request #3104:
URL: https://github.com/apache/hadoop/pull/3104


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org