[GitHub] [hadoop] functioner opened a new pull request #2821: HDFS-15925. The lack of packet-level mirrorError state synchronization in BlockReceiver can cause the HDFS client to hang
functioner opened a new pull request #2821:
URL: https://github.com/apache/hadoop/pull/2821

I propose a fix for [HDFS-15925](https://issues.apache.org/jira/browse/HDFS-15925).

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories
[ https://issues.apache.org/jira/browse/HADOOP-17531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena resolved HADOOP-17531.
-----------------------------------
Fix Version/s: 3.3.1
Hadoop Flags: Reviewed
Release Note: Added a -useiterator option in distcp which uses listStatusIterator for building the listing, primarily to reduce memory usage at the client while building the listing.
Resolution: Fixed

> DistCp: Reduce memory usage on copying huge directories
> -------------------------------------------------------
>
> Key: HADOOP-17531
> URL: https://issues.apache.org/jira/browse/HADOOP-17531
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Ayush Saxena
> Assignee: Ayush Saxena
> Priority: Critical
> Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: MoveToStackIterator.patch, gc-NewD-512M-3.8ML.log
>
> Time Spent: 10h 10m
> Remaining Estimate: 0h
>
> Presently distCp uses a producer-consumer setup while building the listing;
> the input queue and output queue are both unbounded, so the listStatus result
> grows quite large.
> Relevant code:
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L635
> This performs a breadth-first-style traversal (it uses a queue instead of the
> earlier stack), so if you have files at a low depth, it will likely open up
> the entire tree and only then start processing.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
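The release note's approach, building the listing lazily via listStatusIterator instead of materializing whole directory levels in unbounded queues, can be illustrated with a self-contained sketch. This is not the actual SimpleCopyListing code: the Map below stands in for the FileSystem, and plain Iterators stand in for RemoteIterator<FileStatus>; the point is only the traversal shape, a stack of per-directory iterators that bounds memory by tree depth rather than by the breadth-first frontier.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

/** Hypothetical sketch: depth-first listing via a stack of iterators. */
public class IteratorListing {
    // Mock directory tree standing in for the FileSystem:
    // key = directory path, value = child entries ("d:" prefix marks a subdirectory).
    static final Map<String, List<String>> TREE = Map.of(
        "/",    List.of("d:/a", "f1"),
        "/a",   List.of("d:/a/b", "f2"),
        "/a/b", List.of("f3"));

    static List<String> listFiles(String root) {
        List<String> files = new ArrayList<>();
        Deque<Iterator<String>> stack = new ArrayDeque<>();
        stack.push(TREE.get(root).iterator());
        while (!stack.isEmpty()) {
            Iterator<String> it = stack.peek();
            if (!it.hasNext()) {         // directory exhausted: release its iterator
                stack.pop();
                continue;
            }
            String entry = it.next();
            if (entry.startsWith("d:")) {
                // Descend lazily: only one live iterator per ancestor directory.
                stack.push(TREE.get(entry.substring(2)).iterator());
            } else {
                files.add(entry);        // in DistCp this would be written to the listing
            }
        }
        return files;
    }

    public static void main(String[] args) {
        System.out.println(listFiles("/"));
    }
}
```

Because at most one iterator per ancestor directory is held at a time, memory stays proportional to tree depth, whereas an eager breadth-first listing can hold an entire level of a huge directory tree in memory at once.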
[jira] [Commented] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories
[ https://issues.apache.org/jira/browse/HADOOP-17531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309855#comment-17309855 ]

Ayush Saxena commented on HADOOP-17531:
---------------------------------------
Committed to trunk and branch-3.3. Thanx [~ste...@apache.org], [~rajesh.balamohan] and [~weichiu]!!!
[jira] [Work logged] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories
[ https://issues.apache.org/jira/browse/HADOOP-17531?focusedWorklogId=572972=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572972 ]

ASF GitHub Bot logged work on HADOOP-17531:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 27/Mar/21 03:56
Start Date: 27/Mar/21 03:56
Worklog Time Spent: 10m
Work Description: ayushtkn commented on pull request #2808:
URL: https://github.com/apache/hadoop/pull/2808#issuecomment-808642850

Merged both the main and the addendum commit as part of this PR, Thanx Everyone

Issue Time Tracking
-------------------
Worklog Id: (was: 572972)
Time Spent: 10h 10m (was: 10h)
[GitHub] [hadoop] ayushtkn commented on pull request #2808: HADOOP-17531. DistCp: Reduce memory usage on copying huge directories. (#2732).
ayushtkn commented on pull request #2808:
URL: https://github.com/apache/hadoop/pull/2808#issuecomment-808642850

Merged both the main and the addendum commit as part of this PR, Thanx Everyone
[jira] [Work logged] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories
[ https://issues.apache.org/jira/browse/HADOOP-17531?focusedWorklogId=572971=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572971 ]

ASF GitHub Bot logged work on HADOOP-17531:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 27/Mar/21 03:55
Start Date: 27/Mar/21 03:55
Worklog Time Spent: 10m
Work Description: ayushtkn merged pull request #2808:
URL: https://github.com/apache/hadoop/pull/2808

Issue Time Tracking
-------------------
Worklog Id: (was: 572971)
Time Spent: 10h (was: 9h 50m)
[GitHub] [hadoop] ayushtkn merged pull request #2808: HADOOP-17531. DistCp: Reduce memory usage on copying huge directories. (#2732).
ayushtkn merged pull request #2808:
URL: https://github.com/apache/hadoop/pull/2808
[jira] [Work logged] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories
[ https://issues.apache.org/jira/browse/HADOOP-17531?focusedWorklogId=572960=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572960 ]

ASF GitHub Bot logged work on HADOOP-17531:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 27/Mar/21 00:54
Start Date: 27/Mar/21 00:54
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #2808:
URL: https://github.com/apache/hadoop/pull/2808#issuecomment-808607806

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 24m 59s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 9 new or modified test files. |
| | _ branch-3.3 Compile Tests _ | | | |
| +0 :ok: | mvndep | 13m 43s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 8s | | branch-3.3 passed |
| +1 :green_heart: | compile | 18m 14s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 2m 49s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 2m 52s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 2m 39s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 4m 21s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 18m 12s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 53s | | the patch passed |
| +1 :green_heart: | compile | 17m 28s | | the patch passed |
| +1 :green_heart: | javac | 17m 28s | | root generated 0 new + 1948 unchanged - 1 fixed = 1948 total (was 1949) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 2m 45s | | root: The patch generated 0 new + 93 unchanged - 5 fixed = 93 total (was 98) |
| +1 :green_heart: | mvnsite | 2m 51s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 2m 36s | | the patch passed |
| +1 :green_heart: | spotbugs | 5m 12s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 35s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 16m 58s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 14m 11s | | hadoop-distcp in the patch passed. |
| +1 :green_heart: | unit | 1m 56s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. |
| | | 198m 41s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2808/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2808 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
| uname | Linux f54de7037819 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / 782ed0cbb11bb21aa37931411cd386117381a1dd |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2808/2/testReport/ |
| Max. process+thread count | 3143 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-distcp hadoop-tools/hadoop-aws U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2808/2/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
-------------------
Worklog Id: (was:
[GitHub] [hadoop] hadoop-yetus commented on pull request #2808: HADOOP-17531. DistCp: Reduce memory usage on copying huge directories. (#2732).
hadoop-yetus commented on pull request #2808:
URL: https://github.com/apache/hadoop/pull/2808#issuecomment-808607806

:confetti_ball: **+1 overall**

This message was automatically generated.
[jira] [Commented] (HADOOP-10128) Please delete old releases from mirroring system
[ https://issues.apache.org/jira/browse/HADOOP-10128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309811#comment-17309811 ]

Sebb commented on HADOOP-10128:
-------------------------------
Please now remove https://dist.apache.org/repos/dist/release/hadoop/common/hadoop-3.2.1/

> Please delete old releases from mirroring system
> ------------------------------------------------
>
> Key: HADOOP-10128
> URL: https://issues.apache.org/jira/browse/HADOOP-10128
> Project: Hadoop Common
> Issue Type: Bug
> Environment: http://www.apache.org/dist/hadoop/common/
> http://www.apache.org/dist/hadoop/core/
> Reporter: Sebb
> Priority: Major
>
> To reduce the load on the ASF mirrors, projects are required to delete old
> releases.
> Please can you remove all non-current releases?
> i.e. anything except
> 0.23.9
> 1.2.1
> 2.2.0
> Thanks.
[jira] [Created] (HADOOP-17604) Separate string metric from tag in hadoop metrics2
Fengnan Li created HADOOP-17604:
-----------------------------------
Summary: Separate string metric from tag in hadoop metrics2
Key: HADOOP-17604
URL: https://issues.apache.org/jira/browse/HADOOP-17604
Project: Hadoop Common
Issue Type: Improvement
Components: common
Reporter: Fengnan Li
Assignee: Fengnan Li
Attachments: Screen Shot 2021-03-26 at 2.50.08 PM.png

Right now in hadoop metrics2, String metrics returned from annotated methods are categorized as tags (versus the metrics produced for other, numeric types). As a result, when beans are reported, a "tag." prefix is added before the metric name. It would be cleaner to have another child of MutableMetric for strings (maybe MutableText?) so that String metrics from methods can get rid of the tag.
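The string-valued child of MutableMetric suggested above could look roughly like the following simplified sketch. The MutableMetric base class and the map-based snapshot here are stand-ins for the real metrics2 classes (the actual API snapshots into a MetricsRecordBuilder), and MutableText is only the name tentatively floated in the issue.

```java
import java.util.Map;

/** Simplified stand-in for the metrics2 MutableMetric base class. */
abstract class MutableMetric {
    /** Record this metric's current value into the given builder (a Map here). */
    abstract void snapshot(Map<String, Object> builder);
}

/** Hypothetical string metric: reported under its own name, with no "tag." prefix. */
class MutableText extends MutableMetric {
    private final String name;
    private volatile String value;

    MutableText(String name, String initial) {
        this.name = name;
        this.value = initial;
    }

    void set(String newValue) {
        value = newValue;
    }

    @Override
    void snapshot(Map<String, Object> builder) {
        // A plain metric entry, not builder.put("tag." + name, value) as tags get today.
        builder.put(name, value);
    }
}
```

A metrics source would then register a MutableText alongside its numeric metrics, and the reported bean attribute would carry the bare name (e.g. "Version") instead of "tag.Version".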
[jira] [Commented] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin
[ https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309755#comment-17309755 ]

Hadoop QA commented on HADOOP-16870:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 15m 17s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ branch-2.10 Compile Tests _ | | | |
| 0 | mvndep | 2m 53s | | Maven dependency ordering for branch |
| +1 | mvninstall | 13m 33s | | branch-2.10 passed |
| +1 | compile | 14m 14s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 | compile | 15m 35s | | branch-2.10 passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 |
| +1 | mvnsite | 16m 41s | | branch-2.10 passed |
| +1 | javadoc | 7m 38s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 | javadoc | 5m 9s | | branch-2.10 passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 |
| | _ Patch Compile Tests _ | | | |
| 0 | mvndep | 0m 34s | | Maven dependency ordering for patch |
| +1 | mvninstall | 27m 31s | | the patch passed |
| +1 | compile | 13m 21s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 | javac | 13m 21s | | the patch passed |
| +1 | compile | 11m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 |
| +1 | javac | 11m 20s | | the patch passed |
| +1 | hadolint | 0m 2s | | There were no new hadolint issues. |
| +1 | mvnsite | 11m 21s | | the patch passed |
| +1 | shellcheck | 0m 2s | | There were no new shellcheck issues. |
| +1 | shelldocs | 0m 14s | | There were no new shelldocs issues. |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 | xml | 0m 29s | | The patch has no ill-formed XML file. |
| +1 | javadoc | 7m 34s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 |
[jira] [Work logged] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories
[ https://issues.apache.org/jira/browse/HADOOP-17531?focusedWorklogId=572893=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572893 ]

ASF GitHub Bot logged work on HADOOP-17531:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 26/Mar/21 21:31
Start Date: 26/Mar/21 21:31
Worklog Time Spent: 10m
Work Description: ayushtkn merged pull request #2820:
URL: https://github.com/apache/hadoop/pull/2820

Issue Time Tracking
-------------------
Worklog Id: (was: 572893)
Time Spent: 9h 40m (was: 9.5h)
[GitHub] [hadoop] ayushtkn merged pull request #2820: HADOOP-17531.Addendum: DistCp: Reduce memory usage on copying huge directories.
ayushtkn merged pull request #2820:
URL: https://github.com/apache/hadoop/pull/2820
[GitHub] [hadoop] hadoop-yetus commented on pull request #2748: HDFS-15879. Exclude slow nodes when choose targets for blocks
hadoop-yetus commented on pull request #2748:
URL: https://github.com/apache/hadoop/pull/2748#issuecomment-808480490

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 58s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 36m 14s | | trunk passed |
| +1 :green_heart: | compile | 1m 27s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 9s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 31s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 33s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 40s | | trunk passed |
| +1 :green_heart: | shadedclient | 19m 4s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 1m 15s | | the patch passed |
| +1 :green_heart: | compile | 1m 21s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 1m 21s | | the patch passed |
| +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 1m 18s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 3s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 32s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 26s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 49s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 53s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 378m 1s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2748/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. |
| | | 477m 54s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
| | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks |
| | hadoop.hdfs.TestViewDistributedFileSystem |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.TestPersistBlocks |
| | hadoop.hdfs.server.datanode.TestBlockRecovery |
| | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
| | hadoop.hdfs.TestSnapshotCommands |
| | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2748/11/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2748 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
| uname | Linux 379440357f60 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk /
[GitHub] [hadoop] hadoop-yetus commented on pull request #2748: HDFS-15879. Exclude slow nodes when choose targets for blocks
hadoop-yetus commented on pull request #2748: URL: https://github.com/apache/hadoop/pull/2748#issuecomment-808468887 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 19s | | trunk passed | | +1 :green_heart: | compile | 1m 29s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 7s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 34s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 0s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 24s | | the patch passed | | +1 :green_heart: | compile | 1m 26s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 26s | | the patch passed | | +1 :green_heart: | compile | 1m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 58s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2748/10/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 535 unchanged - 0 fixed = 536 total (was 535) | | +1 :green_heart: | mvnsite | 1m 23s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 52s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 34s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 1s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 383m 11s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2748/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. 
| | | | 478m 50s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.TestStateAlignmentContextWithHA | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.TestLeaseRecovery | | | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base:
[jira] [Work logged] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories
[ https://issues.apache.org/jira/browse/HADOOP-17531?focusedWorklogId=572832=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572832 ] ASF GitHub Bot logged work on HADOOP-17531: --- Author: ASF GitHub Bot Created on: 26/Mar/21 19:35 Start Date: 26/Mar/21 19:35 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2820: URL: https://github.com/apache/hadoop/pull/2820#issuecomment-808465462 +1 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 572832) Time Spent: 9.5h (was: 9h 20m) > DistCp: Reduce memory usage on copying huge directories > --- > > Key: HADOOP-17531 > URL: https://issues.apache.org/jira/browse/HADOOP-17531 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Critical > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: MoveToStackIterator.patch, gc-NewD-512M-3.8ML.log > > Time Spent: 9.5h > Remaining Estimate: 0h > > Presently distCp uses a producer-consumer kind of setup while building the > listing; the input queue and output queue are both unbounded, thus the > listStatus grows quite huge. > Rel Code Part : > https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L635 > This goes on breadth-first traversal kind of stuff (uses queue instead of > earlier stack), so if you have files at lower depth, it will likely open up the > entire tree and then start processing -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #2820: HADOOP-17531.Addendum: DistCp: Reduce memory usage on copying huge directories.
steveloughran commented on pull request #2820: URL: https://github.com/apache/hadoop/pull/2820#issuecomment-808465462 +1
[GitHub] [hadoop] mbsharp closed pull request #2789: YARN-10493: RunC container repository v2
mbsharp closed pull request #2789: URL: https://github.com/apache/hadoop/pull/2789
[GitHub] [hadoop] mbsharp commented on pull request #2789: YARN-10493: RunC container repository v2
mbsharp commented on pull request #2789: URL: https://github.com/apache/hadoop/pull/2789#issuecomment-808464179 Refactoring to support multiple meta namespaces as discussed in the Jira.
[jira] [Commented] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin
[ https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309637#comment-17309637 ] Hadoop QA commented on HADOOP-16870: (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/174/console in case of problems. > Use spotbugs-maven-plugin instead of findbugs-maven-plugin > -- > > Key: HADOOP-16870 > URL: https://issues.apache.org/jira/browse/HADOOP-16870 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3 > > Attachments: HADOOP-16870.branch-2.10.001.patch > > Time Spent: 5h 50m > Remaining Estimate: 0h > > findbugs-maven-plugin is no longer maintained. Use spotbugs-maven-plugin > instead. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin
[ https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309628#comment-17309628 ] Akira Ajisaka commented on HADOOP-16870: The patch looks good to me. +1 pending Jenkins. Updated the status to "Patch Available" to run the precommit job.
[jira] [Reopened] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin
[ https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reopened HADOOP-16870:
[jira] [Updated] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin
[ https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16870: --- Status: Patch Available (was: Reopened)
[jira] [Commented] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin
[ https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309622#comment-17309622 ] Akira Ajisaka commented on HADOOP-16870: As I commented in https://issues.apache.org/jira/browse/YARN-10501?focusedCommentId=17309620=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17309620, Jenkinsfile is not supported in the precommit jobs which are kicked from JIRA. Therefore we cannot use multiple configurations for each branch (for example, enable spotbugs for trunk, and enable findbugs for branch-2.10).
[jira] [Commented] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin
[ https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309597#comment-17309597 ] Eric Badger commented on HADOOP-16870: -- bq. After the backport, Spotbugs is run with only JDK 1.8 in the hadoop-multibranch job, so I think we don't have to remove the flag. Now JDK 1.7 is used in only compile check, and findbugs is not run with JDK 1.7. It is configured in https://github.com/apache/hadoop/blob/branch-2.10/dev-support/Jenkinsfile#L162 It looks like this was committed 15 days ago, but we have precommit builds failing in YARN-10501 as recently as yesterday with the same findbugs error. So I don't think the problem is fixed. Either we need to remove the flag for 1.7 builds or we need to figure out how to support it.
[GitHub] [hadoop] goiri merged pull request #2818: HDFS-15922. Use memcpy for copying non-null terminated string in jni_helper.c
goiri merged pull request #2818: URL: https://github.com/apache/hadoop/pull/2818
[jira] [Commented] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin
[ https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309532#comment-17309532 ] Ahmed Hussein commented on HADOOP-16870: Oh I see. I applied the diff on branch-2.10: [^HADOOP-16870.branch-2.10.001.patch]. Hopefully, it is going to provide some help.
[jira] [Updated] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin
[ https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-16870: --- Attachment: HADOOP-16870.branch-2.10.001.patch
[jira] [Work logged] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories
[ https://issues.apache.org/jira/browse/HADOOP-17531?focusedWorklogId=572728=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572728 ] ASF GitHub Bot logged work on HADOOP-17531: --- Author: ASF GitHub Bot Created on: 26/Mar/21 16:12 Start Date: 26/Mar/21 16:12 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2820: URL: https://github.com/apache/hadoop/pull/2820#issuecomment-808340104 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 14m 4s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 38s | | trunk passed | | +1 :green_heart: | compile | 20m 51s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 9s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 6s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 33s | | trunk passed | | +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 37s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 18s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 33s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 54s | | the patch passed | | +1 :green_heart: | compile | 20m 11s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 11s | | the patch passed | | +1 :green_heart: | compile | 18m 16s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 16s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 4s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 30s | | the patch passed | | +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 36s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 30s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 37s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 19s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 190m 19s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2820/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2820 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 5bd5aac2dfdf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8e4f082739da8c553feef83ab2e0b254f7fc2558 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2820/1/testReport/ | | Max. process+thread count | 1260 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2820/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2820: HADOOP-17531.Addendum: DistCp: Reduce memory usage on copying huge directories.
hadoop-yetus commented on pull request #2820: URL: https://github.com/apache/hadoop/pull/2820#issuecomment-808340104 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 14m 4s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 38s | | trunk passed | | +1 :green_heart: | compile | 20m 51s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 9s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 6s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 33s | | trunk passed | | +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 37s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 18s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 33s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 54s | | the patch passed | | +1 :green_heart: | compile | 20m 11s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 11s | | the patch passed | | +1 :green_heart: | compile | 18m 16s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 16s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 1m 4s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 30s | | the patch passed | | +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 36s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 30s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 37s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 19s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. | | | | 190m 19s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2820/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2820 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 5bd5aac2dfdf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8e4f082739da8c553feef83ab2e0b254f7fc2558 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2820/1/testReport/ | | Max. process+thread count | 1260 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2820/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HADOOP-17582) Replace GitHub App Token with GitHub OAuth token
[ https://issues.apache.org/jira/browse/HADOOP-17582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17582: --- Fix Version/s: 3.1.5 Backported to branch-3.1. > Replace GitHub App Token with GitHub OAuth token > > > Key: HADOOP-17582 > URL: https://issues.apache.org/jira/browse/HADOOP-17582 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3 > > Time Spent: 0.5h > Remaining Estimate: 0h > > GitHub App Token expires within 1 hour, so Yetus fails to write GitHub > comments in most cases. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin
[ https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16870: --- Fix Version/s: 3.1.5 Backported to branch-3.1.
[jira] [Updated] (HADOOP-17570) Apply YETUS-1102 to re-enable GitHub comments
[ https://issues.apache.org/jira/browse/HADOOP-17570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17570: --- Fix Version/s: 3.1.5 Backported to branch-3.1. > Apply YETUS-1102 to re-enable GitHub comments > - > > Key: HADOOP-17570 > URL: https://issues.apache.org/jira/browse/HADOOP-17570 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3 > > Time Spent: 2h 50m > Remaining Estimate: 0h > > Yetus 0.13.0 enabled updating GitHub status instead of commenting the report, > however, the report comments are still useful for some cases. Let's apply > YETUS-1102 to re-enable the comments. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16061) Update Apache Yetus to 0.10.0
[ https://issues.apache.org/jira/browse/HADOOP-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16061: --- Fix Version/s: 3.1.5 Backported to branch-3.1. > Update Apache Yetus to 0.10.0 > - > > Key: HADOOP-16061 > URL: https://issues.apache.org/jira/browse/HADOOP-16061 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Fix For: 3.3.0, 3.2.1, 3.1.5 > > > Yetus 0.10.0 is out. Let's upgrade. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16748) Migrate to Python 3 and upgrade Yetus to 0.13.0
[ https://issues.apache.org/jira/browse/HADOOP-16748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16748: --- Fix Version/s: 3.1.5 Backported to branch-3.1. > Migrate to Python 3 and upgrade Yetus to 0.13.0 > --- > > Key: HADOOP-16748 > URL: https://issues.apache.org/jira/browse/HADOOP-16748 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3 > > Time Spent: 8h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17133) Implement HttpServer2 metrics
[ https://issues.apache.org/jira/browse/HADOOP-17133?focusedWorklogId=572726&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572726 ] ASF GitHub Bot logged work on HADOOP-17133: --- Author: ASF GitHub Bot Created on: 26/Mar/21 16:03 Start Date: 26/Mar/21 16:03 Worklog Time Spent: 10m Work Description: aajisaka commented on pull request #2145: URL: https://github.com/apache/hadoop/pull/2145#issuecomment-808333911 Thank you @Jing9! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 572726) Time Spent: 1h 50m (was: 1h 40m) > Implement HttpServer2 metrics > - > > Key: HADOOP-17133 > URL: https://issues.apache.org/jira/browse/HADOOP-17133 > Project: Hadoop Common > Issue Type: Improvement > Components: httpfs, kms >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > I'd like to collect metrics (number of connections, average response time, > etc...) from HttpFS and KMS but there are no metrics for HttpServer2.
[GitHub] [hadoop] saintstack commented on pull request #2693: Hadoop 16524 - resubmission following some unit test fixes
saintstack commented on pull request #2693: URL: https://github.com/apache/hadoop/pull/2693#issuecomment-808334369 Unrelated failure:
```
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile (default-testCompile) on project hadoop-yarn-common: Compilation failure: Compilation failure:
[ERROR] /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2693/src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClient.java:[44] error: cannot find symbol
[ERROR] /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2693/src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClient.java:[480,40] error: cannot find symbol
```
[GitHub] [hadoop] aajisaka commented on pull request #2145: HADOOP-17133. Implement HttpServer2 metrics
aajisaka commented on pull request #2145: URL: https://github.com/apache/hadoop/pull/2145#issuecomment-808333911 Thank you @Jing9!
[jira] [Updated] (HADOOP-16054) Update Dockerfile to use Bionic
[ https://issues.apache.org/jira/browse/HADOOP-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16054: --- Fix Version/s: 3.1.5 Backported to branch-3.1. > Update Dockerfile to use Bionic > --- > > Key: HADOOP-16054 > URL: https://issues.apache.org/jira/browse/HADOOP-16054 > Project: Hadoop Common > Issue Type: Improvement > Components: build, test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > Ubuntu xenial goes EoL in April 2021. Let's upgrade before that date.
[jira] [Commented] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17309433#comment-17309433 ] Yongjun Zhang commented on HADOOP-17338: For the record, thanks [~ste...@apache.org] for merging the 2.10.x on 09/Feb/21 https://github.com/apache/hadoop/pull/2692 . Our platform has been free of these errors with the fix for some time. > Intermittent S3AInputStream failures: Premature end of Content-Length > delimited message body etc > > > Key: HADOOP-17338 > URL: https://issues.apache.org/jira/browse/HADOOP-17338 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 2.10.2 > > Attachments: HADOOP-17338.001.patch > > Time Spent: 5h 20m > Remaining Estimate: 0h > > We are seeing the following two kinds of intermittent exceptions when using > S3AInputStream: > 1. > {code:java} > Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: > Premature end of Content-Length delimited message body (expected: 156463674; > received: 150001089) > at > com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178) > at > com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at >
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181) > at java.io.DataInputStream.readFully(DataInputStream.java:195) > at java.io.DataInputStream.readFully(DataInputStream.java:169) > at > org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779) > at > org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511) > at > org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130) > at > org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214) > at > org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227) > at > org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208) > at > org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350) > ... 15 more > {code} > 2. 
> {code:java} > Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly > at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596) > at sun.security.ssl.InputRecord.read(InputRecord.java:532) > at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990) > at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948) > at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) > at > com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) > at > com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198) > at > com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176) > at > com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) >
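Both failure modes above are transient: the object in S3 is intact, but the HTTP connection died mid-body. The usual mitigation, and roughly the shape of the S3AInputStream change (this is a simplified sketch, not the actual HADOOP-17338 patch, and `readWithRetry`/`reopen` are illustrative names), is to catch the `IOException`, reopen the stream, and retry the read once:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.function.Supplier;

public class RetryingRead {
  /**
   * Read with one retry: if the wrapped stream fails mid-body (e.g.
   * "Premature end of Content-Length delimited message body" or an SSL
   * reset), reopen the source and attempt the same read again. The real
   * S3AInputStream re-opens at its current position; this sketch assumes
   * the supplier already positions the new stream correctly.
   */
  static int readWithRetry(InputStream in, Supplier<InputStream> reopen,
                           byte[] buf, int off, int len) throws IOException {
    try {
      return in.read(buf, off, len);
    } catch (IOException firstFailure) {
      // One retry only: a second failure propagates to the caller.
      InputStream reopened = reopen.get();
      return reopened.read(buf, off, len);
    }
  }
}
```

A second consecutive failure still surfaces to the caller, so genuinely broken connections are not masked.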
[jira] [Work logged] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories
[ https://issues.apache.org/jira/browse/HADOOP-17531?focusedWorklogId=572643&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572643 ] ASF GitHub Bot logged work on HADOOP-17531: --- Author: ASF GitHub Bot Created on: 26/Mar/21 13:01 Start Date: 26/Mar/21 13:01 Worklog Time Spent: 10m Work Description: ayushtkn opened a new pull request #2820: URL: https://github.com/apache/hadoop/pull/2820 Addendum Patch https://issues.apache.org/jira/browse/HADOOP-17531 Issue Time Tracking --- Worklog Id: (was: 572643) Time Spent: 9h 10m (was: 9h) > DistCp: Reduce memory usage on copying huge directories > --- > > Key: HADOOP-17531 > URL: https://issues.apache.org/jira/browse/HADOOP-17531 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Critical > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: MoveToStackIterator.patch, gc-NewD-512M-3.8ML.log > > Time Spent: 9h 10m > Remaining Estimate: 0h > > Presently distCp uses a producer-consumer kind of setup while building the > listing; the input queue and output queue are both unbounded, thus the > listStatus grows quite huge. > Rel Code Part : > https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L635 > This does a breadth-first traversal (uses a queue instead of the > earlier stack), so if you have files at lower depth, it will likely open up the > entire tree and then start processing
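The memory issue described in the Jira text comes from traversal order: a FIFO queue explores breadth-first, so for a wide tree the queue can hold nearly every directory before any file is emitted, while a stack (depth-first) keeps the frontier proportional to depth times fan-out. A minimal generic sketch of the idea (the actual SimpleCopyListing change works on a stack of `RemoteIterator`s, not a plain list API; `walk` is an illustrative name):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

public class DepthFirstListing {
  /**
   * Depth-first traversal with an explicit stack. Unlike a FIFO queue
   * (breadth-first), the stack never holds more than one sibling list
   * per level of the current path, so memory stays bounded by
   * depth x fan-out rather than by the width of the whole tree.
   */
  static <T> void walk(T root, Function<T, List<T>> children, Consumer<T> visit) {
    Deque<T> stack = new ArrayDeque<>();
    stack.push(root);
    while (!stack.isEmpty()) {
      T node = stack.pop();
      visit.accept(node);
      List<T> kids = children.apply(node);
      // Push in reverse so the leftmost child is processed first (preorder).
      for (int i = kids.size() - 1; i >= 0; i--) {
        stack.push(kids.get(i));
      }
    }
  }
}
```

With this order, files at a shallow depth are emitted as soon as their branch is reached, instead of after the entire breadth of the tree has been queued.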
[GitHub] [hadoop] ayushtkn opened a new pull request #2820: HADOOP-17531.Addendum: DistCp: Reduce memory usage on copying huge directories.
ayushtkn opened a new pull request #2820: URL: https://github.com/apache/hadoop/pull/2820 Addendum Patch https://issues.apache.org/jira/browse/HADOOP-17531
[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=572640&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572640 ] ASF GitHub Bot logged work on HADOOP-17511: --- Author: ASF GitHub Bot Created on: 26/Mar/21 12:43 Start Date: 26/Mar/21 12:43 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-808185819 I'm going to say the failures are related as it's in the auditor code. Interesting that you saw it and not me. Will look at it next week. Issue Time Tracking --- Worklog Id: (was: 572640) Time Spent: 12h 20m (was: 12h 10m) > Add an Audit plugin point for S3A auditing/context > -- > > Key: HADOOP-17511 > URL: https://issues.apache.org/jira/browse/HADOOP-17511 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 12h 20m > Remaining Estimate: 0h > > Add a way for auditing tools to correlate S3 object calls with Hadoop FS API > calls. > Initially just to log/forward to an auditing service. > Later: let us attach them as parameters in S3 requests, such as OpenTracing > headers or (my initial idea: the HTTP referrer header, where it will get into > the log) > Challenges > * ensuring the audit span is created for every public entry point. That will > have to include those used in s3guard tools, some de facto public APIs > * and not re-entered for active spans. S3A code must not call back into the > FS API points > * Propagation across worker threads
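The last challenge listed, propagating a span across worker threads, is commonly handled by capturing the active span when work is submitted and re-activating it inside the worker. The names below (`activeSpan`, `inSpan`) are illustrative only, not the HADOOP-17511 API:

```java
public class AuditSpanDemo {
  /** The span active on the current thread; "none" when outside any span. */
  private static final ThreadLocal<String> ACTIVE =
      ThreadLocal.withInitial(() -> "none");

  static String activeSpan() {
    return ACTIVE.get();
  }

  /**
   * Wrap a task so that, wherever it eventually runs (executor pool,
   * new thread), it executes with the given span active and then restores
   * whatever span was previously active on that thread.
   */
  static Runnable inSpan(String spanId, Runnable task) {
    return () -> {
      String previous = ACTIVE.get();
      ACTIVE.set(spanId);
      try {
        task.run();
      } finally {
        ACTIVE.set(previous);  // never leak the span to unrelated work
      }
    };
  }
}
```

The capture-at-submit pattern is what makes spans survive handoff to thread pools, where a plain `ThreadLocal` alone would read the wrong (or no) span.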
[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector
steveloughran commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-808185819 I'm going to say the failures are related as it's in the auditor code. Interesting that you saw it and not me. Will look at it next week.
[jira] [Work logged] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories
[ https://issues.apache.org/jira/browse/HADOOP-17531?focusedWorklogId=572639&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572639 ] ASF GitHub Bot logged work on HADOOP-17531: --- Author: ASF GitHub Bot Created on: 26/Mar/21 12:41 Start Date: 26/Mar/21 12:41 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2808: URL: https://github.com/apache/hadoop/pull/2808#issuecomment-808184868 @ayushtkn have you got a PR for trunk for the change @aajisaka asked for? Issue Time Tracking --- Worklog Id: (was: 572639) Time Spent: 9h (was: 8h 50m) > DistCp: Reduce memory usage on copying huge directories > --- > > Key: HADOOP-17531 > URL: https://issues.apache.org/jira/browse/HADOOP-17531 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Critical > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: MoveToStackIterator.patch, gc-NewD-512M-3.8ML.log > > Time Spent: 9h > Remaining Estimate: 0h > > Presently distCp uses a producer-consumer kind of setup while building the > listing; the input queue and output queue are both unbounded, thus the > listStatus grows quite huge. > Rel Code Part : > https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L635 > This does a breadth-first traversal (uses a queue instead of the > earlier stack), so if you have files at lower depth, it will likely open up the > entire tree and then start processing
[GitHub] [hadoop] steveloughran commented on pull request #2808: HADOOP-17531. DistCp: Reduce memory usage on copying huge directories. (#2732).
steveloughran commented on pull request #2808: URL: https://github.com/apache/hadoop/pull/2808#issuecomment-808184868 @ayushtkn have you got a PR for trunk for the change @aajisaka asked for?
[jira] [Work logged] (HADOOP-17536) Support for customer provided encryption key
[ https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=572636&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572636 ] ASF GitHub Bot logged work on HADOOP-17536: --- Author: ASF GitHub Bot Created on: 26/Mar/21 12:31 Start Date: 26/Mar/21 12:31 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2707: URL: https://github.com/apache/hadoop/pull/2707#issuecomment-808178947

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 21m 41s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 5 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 6s | | trunk passed |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 39s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 1s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 0s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 14m 18s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 28s | | the patch passed |
| +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 0m 26s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 17s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/7/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 3 new + 7 unchanged - 0 fixed = 10 total (was 7) |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 5s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 9s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 1s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. |
| | | 94m 50s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2707 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
| uname | Linux 35ce95c5a2ad 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 88448802be00bbeef8004289c1bc515c7327cada |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private
[GitHub] [hadoop] hadoop-yetus commented on pull request #2707: HADOOP-17536. ABFS: Supporting customer provided encryption key
hadoop-yetus commented on pull request #2707: URL: https://github.com/apache/hadoop/pull/2707#issuecomment-808178947

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 21m 41s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 5 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 6s | | trunk passed |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 39s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 1s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 0s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 14m 18s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 28s | | the patch passed |
| +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 0m 26s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 17s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/7/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 3 new + 7 unchanged - 0 fixed = 10 total (was 7) |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 5s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 9s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 1s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. |
| | | 94m 50s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2707 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
| uname | Linux 35ce95c5a2ad 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 88448802be00bbeef8004289c1bc515c7327cada |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/7/testReport/ |
| Max. process+thread count | 702 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/7/console |
| versions | git=2.25.1 maven=3.6.3 |
[GitHub] [hadoop] tomscut commented on a change in pull request #2748: HDFS-15879. Exclude slow nodes when choose targets for blocks
tomscut commented on a change in pull request #2748: URL: https://github.com/apache/hadoop/pull/2748#discussion_r602202963

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
## @@ -201,8 +206,16 @@ */ private final boolean useDfsNetworkTopology; + private static final String IP_PORT_SEPARATOR = ":"; + @Nullable private final SlowPeerTracker slowPeerTracker; + private static Set slowPeers = Sets.newConcurrentHashSet();

Review comment: Sorry, I didn't see your reply just now. I will fix it soon.
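For reference, the `static Set slowPeers` flagged in this review can be made a per-instance field using a JDK-only concurrent set; `ConcurrentHashMap.newKeySet()` is the standard-library equivalent of Guava's `Sets.newConcurrentHashSet()`. A hypothetical minimal shape (`SlowPeerSet`, `markSlow`, and `isSlow` are illustrative names, not the HDFS-15879 patch):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SlowPeerSet {
  /** Separator for the "ip:port" keys, mirroring the diff above. */
  private static final String IP_PORT_SEPARATOR = ":";

  /**
   * Per-instance (not static) concurrent set of slow datanode addresses.
   * ConcurrentHashMap.newKeySet() yields a thread-safe Set without Guava.
   */
  private final Set<String> slowPeers = ConcurrentHashMap.newKeySet();

  /** Returns true if the peer was newly marked slow. */
  boolean markSlow(String ip, int port) {
    return slowPeers.add(ip + IP_PORT_SEPARATOR + port);
  }

  boolean isSlow(String ip, int port) {
    return slowPeers.contains(ip + IP_PORT_SEPARATOR + port);
  }
}
```

Keeping the set per-instance avoids state shared across independent `DatanodeManager` instances (e.g. in tests running several mini-clusters in one JVM).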
[jira] [Work logged] (HADOOP-17536) Support for customer provided encryption key
[ https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=572614&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572614 ] ASF GitHub Bot logged work on HADOOP-17536: --- Author: ASF GitHub Bot Created on: 26/Mar/21 11:18 Start Date: 26/Mar/21 11:18 Worklog Time Spent: 10m Work Description: vinaysbadami commented on a change in pull request #2707: URL: https://github.com/apache/hadoop/pull/2707#discussion_r602013519

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
## @@ -111,19 +132,38 @@ private AbfsClient(final URL baseUrl, final SharedKeyCredentials sharedKeyCreden public AbfsClient(final URL baseUrl, final SharedKeyCredentials sharedKeyCredentials, final AbfsConfiguration abfsConfiguration, final AccessTokenProvider tokenProvider, -final AbfsClientContext abfsClientContext) { +final AbfsClientContext abfsClientContext) + throws IOException { this(baseUrl, sharedKeyCredentials, abfsConfiguration, abfsClientContext); this.tokenProvider = tokenProvider; } public AbfsClient(final URL baseUrl, final SharedKeyCredentials sharedKeyCredentials, final AbfsConfiguration abfsConfiguration, final SASTokenProvider sasTokenProvider, -final AbfsClientContext abfsClientContext) { +final AbfsClientContext abfsClientContext) + throws IOException { this(baseUrl, sharedKeyCredentials, abfsConfiguration, abfsClientContext); this.sasTokenProvider = sasTokenProvider; } + private byte[] getSHA256Hash(String key) throws IOException { +try { + final MessageDigest digester = MessageDigest.getInstance("SHA-256"); + return digester.digest(key.getBytes(StandardCharsets.UTF_8)); +} catch (NoSuchAlgorithmException e) { + throw new IOException(e); +} + } + + private String getBase64EncodedString(String key) {

Review comment: merge these into one function

## File path: hadoop-tools/hadoop-azure/dev-support/testrun-scripts/runtests.sh
## @@ -31,17 +31,17 @@ begin combination=HNS-OAuth
properties=("fs.azure.abfs.account.name" "fs.azure.test.namespace.enabled" "fs.azure.account.auth.type") -values=("{account name}.dfs.core.windows.net" "true" "OAuth") +values=("abfsitgen2.dfs.core.windows.net" "true" "OAuth") Review comment: needed? ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestCustomerProvidedKey.java ## @@ -0,0 +1,741 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.azurebfs; + +import java.io.FileNotFoundException; +import java.io.IOException; +import java.nio.CharBuffer; +import java.nio.charset.CharacterCodingException; +import java.nio.charset.Charset; +import java.nio.charset.CharsetEncoder; +import java.nio.charset.StandardCharsets; +import java.util.EnumSet; +import java.util.Hashtable; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.Random; + +import org.assertj.core.api.Assertions; +import org.junit.Assume; +import org.junit.Test; + +import org.apache.hadoop.fs.FSDataInputStream; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.XAttrSetFlag; +import org.apache.hadoop.test.LambdaTestUtils; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants; +import org.apache.hadoop.fs.azurebfs.contracts.services.AppendRequestParameters; +import org.apache.hadoop.fs.azurebfs.contracts.services.AppendRequestParameters.Mode; +import org.apache.hadoop.fs.azurebfs.services.AuthType; +import org.apache.hadoop.fs.azurebfs.services.AbfsAclHelper; +import
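The "merge these into one function" suggestion above would collapse the two helpers into a single call that Base64-encodes the SHA-256 digest of the customer-provided key. A hedged sketch of the merged form (the method name `base64Sha256` is mine, not from the patch):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class CpkHeaders {
  /**
   * Merged form of getSHA256Hash + getBase64EncodedString: Base64-encode
   * the SHA-256 digest of the customer-provided key, which is the shape
   * the key-SHA256 request header expects.
   */
  static String base64Sha256(String key) throws IOException {
    try {
      MessageDigest digester = MessageDigest.getInstance("SHA-256");
      byte[] hash = digester.digest(key.getBytes(StandardCharsets.UTF_8));
      return Base64.getEncoder().encodeToString(hash);
    } catch (NoSuchAlgorithmException e) {
      // SHA-256 is mandatory on all JREs, so this is effectively unreachable,
      // but the original helper wraps it as IOException and we keep that.
      throw new IOException(e);
    }
  }
}
```

Merging also keeps the `NoSuchAlgorithmException`-to-`IOException` wrapping in one place instead of forcing every caller through two helpers.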
[GitHub] [hadoop] tomscut commented on pull request #2748: HDFS-15879. Exclude slow nodes when choose targets for blocks
tomscut commented on pull request #2748: URL: https://github.com/apache/hadoop/pull/2748#issuecomment-808133427 Hi @tasanuma, the failed unit tests are unrelated to this change and pass locally. Please take a look at the new commit, thank you. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] vinaysbadami commented on a change in pull request #2707: HADOOP-17536. ABFS: Supporting customer provided encryption key
vinaysbadami commented on a change in pull request #2707: URL: https://github.com/apache/hadoop/pull/2707#discussion_r602013519

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
## @@ -111,19 +132,38 @@ private AbfsClient(final URL baseUrl, final SharedKeyCredentials sharedKeyCreden

 public AbfsClient(final URL baseUrl, final SharedKeyCredentials sharedKeyCredentials,
     final AbfsConfiguration abfsConfiguration, final AccessTokenProvider tokenProvider,
-    final AbfsClientContext abfsClientContext) {
+    final AbfsClientContext abfsClientContext)
+    throws IOException {
   this(baseUrl, sharedKeyCredentials, abfsConfiguration, abfsClientContext);
   this.tokenProvider = tokenProvider;
 }

 public AbfsClient(final URL baseUrl, final SharedKeyCredentials sharedKeyCredentials,
     final AbfsConfiguration abfsConfiguration, final SASTokenProvider sasTokenProvider,
-    final AbfsClientContext abfsClientContext) {
+    final AbfsClientContext abfsClientContext)
+    throws IOException {
   this(baseUrl, sharedKeyCredentials, abfsConfiguration, abfsClientContext);
   this.sasTokenProvider = sasTokenProvider;
 }

+  private byte[] getSHA256Hash(String key) throws IOException {
+    try {
+      final MessageDigest digester = MessageDigest.getInstance("SHA-256");
+      return digester.digest(key.getBytes(StandardCharsets.UTF_8));
+    } catch (NoSuchAlgorithmException e) {
+      throw new IOException(e);
+    }
+  }
+
+  private String getBase64EncodedString(String key) {

Review comment: merge these into one function

## File path: hadoop-tools/hadoop-azure/dev-support/testrun-scripts/runtests.sh
## @@ -31,17 +31,17 @@ begin

 combination=HNS-OAuth
 properties=("fs.azure.abfs.account.name" "fs.azure.test.namespace.enabled" "fs.azure.account.auth.type")
-values=("{account name}.dfs.core.windows.net" "true" "OAuth")
+values=("abfsitgen2.dfs.core.windows.net" "true" "OAuth")

Review comment: needed?
## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestCustomerProvidedKey.java
## @@ -0,0 +1,741 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.CharBuffer;
+import java.nio.charset.CharacterCodingException;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetEncoder;
+import java.nio.charset.StandardCharsets;
+import java.util.EnumSet;
+import java.util.Hashtable;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Random;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Assume;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.XAttrSetFlag;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AppendRequestParameters;
+import org.apache.hadoop.fs.azurebfs.contracts.services.AppendRequestParameters.Mode;
+import org.apache.hadoop.fs.azurebfs.services.AuthType;
+import org.apache.hadoop.fs.azurebfs.services.AbfsAclHelper;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
+import org.apache.hadoop.fs.azurebfs.services.AbfsHttpHeader;
+import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
+import org.apache.hadoop.fs.azurebfs.utils.Base64;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
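The "merge these into one function" suggestion above could take the shape sketched below. This is a hypothetical form, not the committed code: it folds getSHA256Hash and getBase64EncodedString into a single helper, and uses the JDK's java.util.Base64 in place of the ABFS Base64 utility so the sketch is self-contained.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class CpkKeyHelper {
  // Hypothetical merged helper: SHA-256 the key and Base64-encode the digest
  // in one call, instead of two separate private methods.
  public static String getBase64EncodedSHA256Hash(String key) throws IOException {
    try {
      final MessageDigest digester = MessageDigest.getInstance("SHA-256");
      final byte[] digest = digester.digest(key.getBytes(StandardCharsets.UTF_8));
      return Base64.getEncoder().encodeToString(digest);
    } catch (NoSuchAlgorithmException e) {
      // Every JVM is required to provide SHA-256, so this path is effectively dead.
      throw new IOException(e);
    }
  }

  public static void main(String[] args) throws IOException {
    // A 32-byte SHA-256 digest always Base64-encodes to 44 characters.
    System.out.println(CpkKeyHelper.getBase64EncodedSHA256Hash("example-key").length());
  }
}
```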
[GitHub] [hadoop] hadoop-yetus commented on pull request #2784: HDFS-15850. Superuser actions should be reported to external enforcers
hadoop-yetus commented on pull request #2784: URL: https://github.com/apache/hadoop/pull/2784#issuecomment-808116576 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 21m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 7s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 44s | | trunk passed | | +1 :green_heart: | compile | 5m 49s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 5m 29s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 19s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 3s | | trunk passed | | +1 :green_heart: | javadoc | 1m 32s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 14s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 4m 35s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 13s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 54s | | the patch passed | | +1 :green_heart: | compile | 5m 23s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 5m 23s | | the patch passed | | +1 :green_heart: | compile | 4m 52s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 4m 52s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 10s | | hadoop-hdfs-project: The patch generated 0 new + 498 unchanged - 6 fixed = 498 total (was 504) | | +1 :green_heart: | mvnsite | 1m 48s | | the patch passed | | +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 18s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | spotbugs | 3m 45s | [/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2784/9/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html) | hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 19m 28s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 397m 5s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2784/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 23m 22s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2784/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. | | | | 566m 23s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Possible null pointer dereference of r in org.apache.hadoop.hdfs.server.namenode.FSNamesystem.truncate(String, long, String, String, long) Dereferenced at FSNamesystem.java:r in org.apache.hadoop.hdfs.server.namenode.FSNamesystem.truncate(String, long, String, String, long) Dereferenced at FSNamesystem.java:[line 2325] | | Failed junit tests | hadoop.hdfs.TestStateAlignmentContextWithHA | | | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.TestLeaseRecovery | | |
[GitHub] [hadoop] linyiqun commented on a change in pull request #2737: HDFS-15869. Network issue while FSEditLogAsync is executing RpcEdit.logSyncNotify can cause the namenode to hang
linyiqun commented on a change in pull request #2737: URL: https://github.com/apache/hadoop/pull/2737#discussion_r602183347

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogAsync.java
## @@ -63,6 +68,9 @@
     DFS_NAMENODE_EDITS_ASYNC_LOGGING_PENDING_QUEUE_SIZE_DEFAULT);
 editPendingQ = new ArrayBlockingQueue<>(editPendingQSize);
+
+// the thread pool size should be configurable later, and justified with a rationale
+logSyncNotifyExecutor = Executors.newFixedThreadPool(10);

Review comment: @functioner, we could make 10 the default pool size.
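Making the pool size configurable, as the review comment suggests, would follow the usual HDFS pattern of a config key plus a *_DEFAULT constant. A minimal sketch under stated assumptions: the key name is hypothetical (nothing in the discussion fixes it), and java.util.Properties stands in for Hadoop's Configuration#getInt so the example runs standalone.

```java
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LogSyncNotifySizing {
  // Hypothetical key name; the discussion only settles on 10 as the default.
  public static final String NOTIFY_POOL_SIZE_KEY =
      "dfs.namenode.edits.asynclogging.notify.pool.size";
  public static final int NOTIFY_POOL_SIZE_DEFAULT = 10;

  // Stand-in for Configuration#getInt(key, defaultValue).
  public static int notifyPoolSize(Properties conf) {
    String v = conf.getProperty(NOTIFY_POOL_SIZE_KEY);
    return (v == null) ? NOTIFY_POOL_SIZE_DEFAULT : Integer.parseInt(v.trim());
  }

  // Replaces the hard-coded Executors.newFixedThreadPool(10).
  public static ExecutorService newLogSyncNotifyExecutor(Properties conf) {
    return Executors.newFixedThreadPool(notifyPoolSize(conf));
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    System.out.println(notifyPoolSize(conf));   // falls back to the default, 10
    conf.setProperty(NOTIFY_POOL_SIZE_KEY, "4");
    System.out.println(notifyPoolSize(conf));   // explicit override, 4
  }
}
```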
[GitHub] [hadoop] tasanuma commented on a change in pull request #2748: HDFS-15879. Exclude slow nodes when choose targets for blocks
tasanuma commented on a change in pull request #2748: URL: https://github.com/apache/hadoop/pull/2748#discussion_r602173949

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
## @@ -201,8 +206,16 @@
  */
 private final boolean useDfsNetworkTopology;

+ private static final String IP_PORT_SEPARATOR = ":";
+
 @Nullable
 private final SlowPeerTracker slowPeerTracker;

+ private static Set slowPeers = Sets.newConcurrentHashSet();

Review comment: We might as well change this variable name to fix a checkstyle warning.
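For context, the fields under discussion track slow datanodes by "ip:port" address. A rough sketch of how such a set is typically used is below; the names are illustrative (including whatever rename ultimately satisfies checkstyle), the raw Set is parameterized, and ConcurrentHashMap.newKeySet() stands in as the JDK equivalent of Guava's Sets.newConcurrentHashSet().

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SlowPeerExclusion {
  private static final String IP_PORT_SEPARATOR = ":";

  // Illustrative replacement for the `slowPeers` field: a concurrent set of
  // "ip:port" addresses that have been reported as slow.
  private static final Set<String> slowPeerAddrs = ConcurrentHashMap.newKeySet();

  // Record a datanode address reported as slow.
  public static void markSlow(String ipPortAddr) {
    slowPeerAddrs.add(ipPortAddr);
  }

  // Check whether a candidate target should be excluded as a slow node.
  public static boolean isSlow(String ip, int port) {
    return slowPeerAddrs.contains(ip + IP_PORT_SEPARATOR + port);
  }

  public static void main(String[] args) {
    markSlow("10.0.0.7" + IP_PORT_SEPARATOR + 9866);
    System.out.println(isSlow("10.0.0.7", 9866)); // true
    System.out.println(isSlow("10.0.0.8", 9866)); // false
  }
}
```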
[GitHub] [hadoop] tasanuma commented on pull request #2748: HDFS-15879. Exclude slow nodes when choose targets for blocks
tasanuma commented on pull request #2748: URL: https://github.com/apache/hadoop/pull/2748#issuecomment-808102983 @tomscut Looks good to me, except for the checkstyle issues. https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2748/9/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt Could you fix them? (Sorry, I should have mentioned this earlier.) If you use IntelliJ, I recommend the CheckStyle-IDEA plugin. The configuration file is `hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml`.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2748: HDFS-15879. Exclude slow nodes when choose targets for blocks
hadoop-yetus commented on pull request #2748: URL: https://github.com/apache/hadoop/pull/2748#issuecomment-808086446 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 2m 17s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 42s | | trunk passed | | +1 :green_heart: | compile | 1m 26s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 43s | | trunk passed | | +1 :green_heart: | javadoc | 1m 2s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 28s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 4m 2s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 1s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 17s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 57s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2748/9/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 12 new + 535 unchanged - 0 fixed = 547 total (was 535) | | +1 :green_heart: | mvnsite | 1m 15s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 24s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 395m 50s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2748/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 496m 1s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestStateAlignmentContextWithHA | | | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.TestFileTruncate | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.TestViewDistributedFileSystemContract | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.datanode.TestIncrementalBrVariations | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2748/9/artifact/out/Dockerfile | | GITHUB PR |
[jira] [Work logged] (HADOOP-17596) ABFS: Change default Readahead Queue Depth from num(processors) to const
[ https://issues.apache.org/jira/browse/HADOOP-17596?focusedWorklogId=572526=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-572526 ] ASF GitHub Bot logged work on HADOOP-17596: --- Author: ASF GitHub Bot Created on: 26/Mar/21 08:23 Start Date: 26/Mar/21 08:23 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2795: URL: https://github.com/apache/hadoop/pull/2795#issuecomment-808031395 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 13m 16s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 38s | | trunk passed | | +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 2s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 15s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 14m 34s | | Used diff version of patch file. Binary files and potentially other changes not applied. 
Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 28s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 59s | | the patch passed | | +1 :green_heart: | shadedclient | 13m 44s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 8s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 87m 8s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2795/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2795 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint | | uname | Linux 1a2a6de7b3dd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / b8fc3459e10b21826844ba99302aa13e5559d71c | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2795/3/testReport/ | | Max. process+thread count | 536 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U:
[GitHub] [hadoop] hadoop-yetus commented on pull request #2795: HADOOP-17596. ABFS: Change default Readahead Queue Depth from num(processors) to const
hadoop-yetus commented on pull request #2795: URL: https://github.com/apache/hadoop/pull/2795#issuecomment-808031395 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 13m 16s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 38s | | trunk passed | | +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 2s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 15s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 14m 34s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 28s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 59s | | the patch passed | | +1 :green_heart: | shadedclient | 13m 44s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 8s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 87m 8s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2795/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2795 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint | | uname | Linux 1a2a6de7b3dd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / b8fc3459e10b21826844ba99302aa13e5559d71c | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2795/3/testReport/ | | Max. process+thread count | 536 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2795/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub
[GitHub] [hadoop] zhengzhuobinzzb commented on pull request #2819: HDFS-15923. Authentication failed when rename accross sub clusters.
zhengzhuobinzzb commented on pull request #2819: URL: https://github.com/apache/hadoop/pull/2819#issuecomment-807972675 Strangely, TestRouterFederationRename passes on my local machine.