[GitHub] [hadoop] Jing9 merged pull request #3425: HDFS-16224. testBalancerWithObserverWithFailedNode times out
Jing9 merged pull request #3425: URL: https://github.com/apache/hadoop/pull/3425

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Work logged] (HADOOP-17914) Print RPC response length in the exception message
[ https://issues.apache.org/jira/browse/HADOOP-17914?focusedWorklogId=652133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652133 ]

ASF GitHub Bot logged work on HADOOP-17914:
Author: ASF GitHub Bot
Created on: 17/Sep/21 06:49
Start Date: 17/Sep/21 06:49
Worklog Time Spent: 10m

Work Description: tomscut commented on pull request #3436: URL: https://github.com/apache/hadoop/pull/3436#issuecomment-921543772 Thanks @tasanuma @ferhui @ayushtkn for your review and merge.

Issue Time Tracking --- Worklog Id: (was: 652133) Time Spent: 1h 50m (was: 1h 40m)

> Print RPC response length in the exception message
> --
> Key: HADOOP-17914
> URL: https://issues.apache.org/jira/browse/HADOOP-17914
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: tomscut
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.4.0
> Time Spent: 1h 50m
> Remaining Estimate: 0h
>
> To facilitate problem tracking, we can print RPC Response Length in the exception message.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HADOOP-17914) Print RPC response length in the exception message
[ https://issues.apache.org/jira/browse/HADOOP-17914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hui Fei resolved HADOOP-17914.
Fix Version/s: 3.4.0
Resolution: Fixed

> Print RPC response length in the exception message
> --
> Key: HADOOP-17914
> URL: https://issues.apache.org/jira/browse/HADOOP-17914
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: tomscut
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.4.0
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> To facilitate problem tracking, we can print RPC Response Length in the exception message.
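The gist of HADOOP-17914 can be sketched as follows. This is an illustrative example only, not Hadoop's actual `Client.java` code: the class, method, and message text are assumptions. The point is simply that including the observed response length in the exception makes an oversized or corrupt RPC response much easier to track down.

```java
// Hypothetical sketch (not Hadoop's real code): put the observed RPC
// response length into the exception message for easier debugging.
class RpcResponseLengthCheck {
    static void checkResponseLength(int totalLen, int maxLen) {
        if (totalLen <= 0 || totalLen > maxLen) {
            throw new IllegalStateException(
                "RPC response has invalid length: response length = "
                    + totalLen + ", max allowed = " + maxLen);
        }
    }

    public static void main(String[] args) {
        try {
            // A response larger than the configured maximum is rejected,
            // and the message carries both lengths.
            checkResponseLength(128 * 1024 * 1024, 64 * 1024 * 1024);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```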
[GitHub] [hadoop] tomscut commented on pull request #3436: HADOOP-17914. Print RPC response length in the exception message
tomscut commented on pull request #3436: URL: https://github.com/apache/hadoop/pull/3436#issuecomment-921543772 Thanks @tasanuma @ferhui @ayushtkn for your review and merge.
[jira] [Work logged] (HADOOP-17914) Print RPC response length in the exception message
[ https://issues.apache.org/jira/browse/HADOOP-17914?focusedWorklogId=652131&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652131 ]

ASF GitHub Bot logged work on HADOOP-17914:
Author: ASF GitHub Bot
Created on: 17/Sep/21 06:45
Start Date: 17/Sep/21 06:45
Worklog Time Spent: 10m

Work Description: ferhui merged pull request #3436: URL: https://github.com/apache/hadoop/pull/3436

Issue Time Tracking --- Worklog Id: (was: 652131) Time Spent: 1.5h (was: 1h 20m)

> Print RPC response length in the exception message
> --
> Key: HADOOP-17914
> URL: https://issues.apache.org/jira/browse/HADOOP-17914
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: tomscut
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> To facilitate problem tracking, we can print RPC Response Length in the exception message.
[GitHub] [hadoop] ferhui commented on pull request #3436: HADOOP-17914. Print RPC response length in the exception message
ferhui commented on pull request #3436: URL: https://github.com/apache/hadoop/pull/3436#issuecomment-921541962 @tomscut Thanks for contribution. @tasanuma @ayushtkn Thanks for review! Merged
[jira] [Work logged] (HADOOP-17914) Print RPC response length in the exception message
[ https://issues.apache.org/jira/browse/HADOOP-17914?focusedWorklogId=652132&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652132 ]

ASF GitHub Bot logged work on HADOOP-17914:
Author: ASF GitHub Bot
Created on: 17/Sep/21 06:46
Start Date: 17/Sep/21 06:46
Worklog Time Spent: 10m

Work Description: ferhui commented on pull request #3436: URL: https://github.com/apache/hadoop/pull/3436#issuecomment-921541962 @tomscut Thanks for contribution. @tasanuma @ayushtkn Thanks for review! Merged

Issue Time Tracking --- Worklog Id: (was: 652132) Time Spent: 1h 40m (was: 1.5h)

> Print RPC response length in the exception message
> --
> Key: HADOOP-17914
> URL: https://issues.apache.org/jira/browse/HADOOP-17914
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: tomscut
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> To facilitate problem tracking, we can print RPC Response Length in the exception message.
[GitHub] [hadoop] ferhui merged pull request #3436: HADOOP-17914. Print RPC response length in the exception message
ferhui merged pull request #3436: URL: https://github.com/apache/hadoop/pull/3436
[GitHub] [hadoop] 9uapaw commented on a change in pull request #3430: YARN-10942. Move AbstractCSQueue fields to separate objects that are tracking usage
9uapaw commented on a change in pull request #3430: URL: https://github.com/apache/hadoop/pull/3430#discussion_r710790031 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueueUsageTracker.java ## @@ -0,0 +1,78 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage; + +public class AbstractCSQueueUsageTracker { + private final CSQueueMetrics metrics; + private volatile int numContainers; + + /** + * The timestamp of the last submitted application to this queue. + * Only applies to dynamic queues. + */ + private long lastSubmittedTimestamp; + + /** + * Tracks resource usage by label like used-resource / pending-resource. 
+ */ + private volatile ResourceUsage queueUsage; + + private final QueueResourceQuotas queueResourceQuotas; + + public AbstractCSQueueUsageTracker(CSQueueMetrics metrics) { +this.metrics = metrics; +this.queueUsage = new ResourceUsage(); +this.queueResourceQuotas = new QueueResourceQuotas(); + } + + public int getNumContainers() { +return numContainers; + } + + public synchronized void increaseNumContainers() { +numContainers++; + } + + public synchronized void decreaseNumContainers() {

Review comment: I think making these methods synchronized is not a good idea. It locks the whole object, which is unnecessary, because the queue already has its own fine-grained RWLock that is used for synchronization. I think an all-or-nothing methodology is better here: either make the whole class thread-safe, or let the queue handle the synchronization. In my opinion, since it's only used internally within the queue, the latter would be better.
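The reviewer's preferred alternative (an unsynchronized tracker whose mutations are guarded by the queue's existing read/write lock) can be sketched as follows. The class and method names are simplified stand-ins, not the real `AbstractCSQueue` API:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch under assumed names: the tracker keeps plain unsynchronized
// fields, and the owning queue wraps every access in its own RW lock.
class QueueUsageSketch {
    static class UsageTracker {
        private int numContainers;          // no synchronized/volatile here
        int getNumContainers() { return numContainers; }
        void increaseNumContainers() { numContainers++; }
        void decreaseNumContainers() { numContainers--; }
    }

    static class Queue {
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        private final UsageTracker usageTracker = new UsageTracker();

        void containerAllocated() {
            lock.writeLock().lock();        // mutations under the write lock
            try {
                usageTracker.increaseNumContainers();
            } finally {
                lock.writeLock().unlock();
            }
        }

        void containerReleased() {
            lock.writeLock().lock();
            try {
                usageTracker.decreaseNumContainers();
            } finally {
                lock.writeLock().unlock();
            }
        }

        int containerCount() {
            lock.readLock().lock();         // reads share the read lock
            try {
                return usageTracker.getNumContainers();
            } finally {
                lock.readLock().unlock();
            }
        }
    }

    public static void main(String[] args) {
        Queue q = new Queue();
        q.containerAllocated();
        q.containerAllocated();
        q.containerReleased();
        System.out.println(q.containerCount()); // prints 1
    }
}
```

This keeps a single synchronization policy (the queue's lock) instead of mixing it with per-method `synchronized`, which is the "let the queue handle it" option the comment argues for.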
[GitHub] [hadoop] adamantal closed pull request #3449: YARN-10936. Log typo corrected.
adamantal closed pull request #3449: URL: https://github.com/apache/hadoop/pull/3449
[jira] [Created] (HADOOP-17920) Refactor pkg-resolver installation
Gautham Banasandra created HADOOP-17920:
Summary: Refactor pkg-resolver installation
Key: HADOOP-17920
URL: https://issues.apache.org/jira/browse/HADOOP-17920
Project: Hadoop Common
Issue Type: Improvement
Components: build
Affects Versions: 3.4.0
Environment: Ubuntu Focal, Centos 7, Centos 8, Debian 10.
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra

To keep the Dockerfiles free of installation logic, we need to move the code for installing pkg-resolver into a script file and only run that script from the Dockerfile. Right now, we install the dependencies of pkg-resolver directly in the Dockerfile itself.
[jira] [Work logged] (HADOOP-17914) Print RPC response length in the exception message
[ https://issues.apache.org/jira/browse/HADOOP-17914?focusedWorklogId=652105&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652105 ]

ASF GitHub Bot logged work on HADOOP-17914:
Author: ASF GitHub Bot
Created on: 17/Sep/21 05:15
Start Date: 17/Sep/21 05:15
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3436: URL: https://github.com/apache/hadoop/pull/3436#issuecomment-921503672

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 17m 31s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 33s | | trunk passed |
| +1 :green_heart: | compile | 23m 0s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 19m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 0s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 31s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 2s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 26s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 49s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 56s | | the patch passed |
| +1 :green_heart: | compile | 22m 1s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 22m 1s | | the patch passed |
| +1 :green_heart: | compile | 19m 17s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 19m 17s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 59s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 34s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 39s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 2s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 17m 1s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. |
| | | 217m 20s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3436/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3436 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 220cde8938d5 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 08125eae237b106e0ff159abf5e9030b7a6d6b52 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3436/4/testReport/ |
| Max. process+thread count | 2113 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3436/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3436: HADOOP-17914. Print RPC response length in the exception message
hadoop-yetus commented on pull request #3436: URL: https://github.com/apache/hadoop/pull/3436#issuecomment-921503672

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 17m 31s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 33s | | trunk passed |
| +1 :green_heart: | compile | 23m 0s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 19m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 0s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 31s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 2s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 26s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 49s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 56s | | the patch passed |
| +1 :green_heart: | compile | 22m 1s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 22m 1s | | the patch passed |
| +1 :green_heart: | compile | 19m 17s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 19m 17s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 59s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 34s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 39s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 2s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 17m 1s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. |
| | | 217m 20s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3436/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3436 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 220cde8938d5 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 08125eae237b106e0ff159abf5e9030b7a6d6b52 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3436/4/testReport/ |
| Max. process+thread count | 2113 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3436/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Created] (HADOOP-17919) Modify command line example in Hadoop Cluster Setup documentation
Rintaro Ikeda created HADOOP-17919:
Summary: Modify command line example in Hadoop Cluster Setup documentation
Key: HADOOP-17919
URL: https://issues.apache.org/jira/browse/HADOOP-17919
Project: Hadoop Common
Issue Type: Bug
Components: documentation
Affects Versions: 3.3.1, 3.4.0
Reporter: Rintaro Ikeda
Assignee: Rintaro Ikeda

About the Hadoop cluster setup documentation ([https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/ClusterSetup.html]): the option is specified in the following example, but the HDFS command ignores it.

{noformat}
`[hdfs]$ $HADOOP_HOME/bin/hdfs namenode -format `
{noformat}
[jira] [Created] (HADOOP-17918) memory field in SequenceFile.Sorter java int overflow
yinan zhan created HADOOP-17918:
Summary: memory field in SequenceFile.Sorter java int overflow
Key: HADOOP-17918
URL: https://issues.apache.org/jira/browse/HADOOP-17918
Project: Hadoop Common
Issue Type: Bug
Components: common
Affects Versions: 3.3.1
Reporter: yinan zhan

The memory field in SequenceFile.Sorter can cause the run method in SequenceFile.Sorter.SortPass to enter an endless loop. If you set the "io.sort.mb" attribute to a value greater than 2050, then as a result of

{code:java}
this.memory = conf.getInt(CommonConfigurationKeys.IO_SORT_MB_KEY, CommonConfigurationKeys.SEQ_IO_SORT_MB_DEFAULT) * 1024 * 1024;{code}

the memory field overflows Java's int range and may become negative, so the memoryLimit field in SequenceFile.Sorter.SortPass will be negative too, leading the run method in SequenceFile.Sorter.SortPass to enter an endless loop.
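The overflow the report describes can be reproduced in isolation. This is a minimal demonstration of the arithmetic only, not Hadoop's code: multiplying an int-valued setting by 1024 * 1024 is done entirely in 32-bit arithmetic and wraps to a negative number, while widening to long before multiplying avoids it.

```java
// Minimal demonstration of the reported bug pattern (not Hadoop code).
class IntOverflowDemo {
    public static void main(String[] args) {
        int ioSortMb = 2050;                      // e.g. io.sort.mb set high

        // Buggy pattern: the whole product is computed as int and wraps
        // around, producing a negative "memory" value.
        int memory = ioSortMb * 1024 * 1024;
        System.out.println("int product:  " + memory);   // negative

        // Safe pattern: widen to long before multiplying.
        long safeMemory = (long) ioSortMb * 1024 * 1024;
        System.out.println("long product: " + safeMemory);
    }
}
```

Any downstream limit derived from the negative value (such as the memoryLimit in SortPass) inherits the wrong sign, which is how the endless loop arises.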
[GitHub] [hadoop] shuzirra commented on a change in pull request #3420: YARN-10913. AbstractCSQueue: Group preemption methods and fields into a separate class
shuzirra commented on a change in pull request #3420: URL: https://github.com/apache/hadoop/pull/3420#discussion_r710671963 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueuePreemption.java ## @@ -0,0 +1,119 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.conf.YarnConfiguration; + +public class AbstractCSQueuePreemption { + private final boolean preemptionDisabled; + // Indicates if the in-queue preemption setting is ever disabled within the + // hierarchy of this queue. 
+ private final boolean intraQueuePreemptionDisabledInHierarchy; + + public AbstractCSQueuePreemption( + CSQueue queue, + CapacitySchedulerContext csContext, + CapacitySchedulerConfiguration configuration) { +this.preemptionDisabled = isQueueHierarchyPreemptionDisabled(queue, csContext, configuration); +this.intraQueuePreemptionDisabledInHierarchy = +isIntraQueueHierarchyPreemptionDisabled(queue, csContext, configuration); + } + + /** + * The specified queue is cross-queue preemptable if system-wide cross-queue + * preemption is turned on unless any queue in the qPath hierarchy + * has explicitly turned cross-queue preemption off. + * NOTE: Cross-queue preemptability is inherited from a queue's parent. + * + * @param q queue to check preemption state + * @param csContext + * @param configuration capacity scheduler config + * @return true if queue has cross-queue preemption disabled, false otherwise + */ + private boolean isQueueHierarchyPreemptionDisabled(CSQueue q, Review comment: I agree that the two methods are very similar, I'd suggest to create that followup jira, which can optimize it even more, but currently we are just moving code blocks to separate classes, let's keep refactoring of the code to a minimum. ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueuePreemption.java ## @@ -0,0 +1,119 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.conf.YarnConfiguration; + +public class AbstractCSQueuePreemption {

Review comment: The class name is misleading, as it is not an abstract class; please consider renaming it to CSQueuePreemption or something else.
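The rule described in the quoted javadoc (cross-queue preemptability inherited from the parent, with an explicit "off" anywhere in the qPath winning) can be sketched as a walk up the queue hierarchy. The names below are simplified assumptions, not the real CSQueue API:

```java
// Hedged sketch of hierarchy-inherited preemption flags (assumed names).
class PreemptionHierarchySketch {
    static class Queue {
        final Queue parent;
        final Boolean disablePreemption; // null means "inherit from parent"
        Queue(Queue parent, Boolean disablePreemption) {
            this.parent = parent;
            this.disablePreemption = disablePreemption;
        }
    }

    static boolean preemptionDisabled(Queue q, boolean systemWideEnabled) {
        if (!systemWideEnabled) {
            return true; // global switch off: nothing is preemptable
        }
        // Walk up toward the root; an explicit "off" anywhere wins.
        for (Queue cur = q; cur != null; cur = cur.parent) {
            if (Boolean.TRUE.equals(cur.disablePreemption)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Queue root = new Queue(null, null);
        Queue a = new Queue(root, true);  // explicitly disabled
        Queue a1 = new Queue(a, null);    // inherits the disable
        Queue b = new Queue(root, null);  // stays preemptable
        System.out.println(preemptionDisabled(a1, true)); // true
        System.out.println(preemptionDisabled(b, true));  // false
    }
}
```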
[GitHub] [hadoop] shuzirra commented on a change in pull request #3430: YARN-10942. Move AbstractCSQueue fields to separate objects that are tracking usage
shuzirra commented on a change in pull request #3430: URL: https://github.com/apache/hadoop/pull/3430#discussion_r710667362 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java ## @@ -2352,17 +2352,17 @@ private void updateQueuePreemptionMetrics(RMContainer rmc) { final long usedMillis = rmc.getFinishTime() - rmc.getCreationTime(); final long usedSeconds = usedMillis / DateUtils.MILLIS_PER_SECOND; Resource containerResource = rmc.getAllocatedResource(); -metrics.preemptContainer(); +usageTracker.getMetrics().preemptContainer(); long mbSeconds = (containerResource.getMemorySize() * usedMillis) / DateUtils.MILLIS_PER_SECOND; long vcSeconds = (containerResource.getVirtualCores() * usedMillis) / DateUtils.MILLIS_PER_SECOND; -metrics.updatePreemptedMemoryMBSeconds(mbSeconds); -metrics.updatePreemptedVcoreSeconds(vcSeconds); -metrics.updatePreemptedResources(containerResource); -metrics.updatePreemptedSecondsForCustomResources(containerResource, +usageTracker.getMetrics().updatePreemptedMemoryMBSeconds(mbSeconds); +usageTracker.getMetrics().updatePreemptedVcoreSeconds(vcSeconds); +usageTracker.getMetrics().updatePreemptedResources(containerResource); + usageTracker.getMetrics().updatePreemptedSecondsForCustomResources(containerResource, usedSeconds); -metrics.updatePreemptedForCustomResources(containerResource); + usageTracker.getMetrics().updatePreemptedForCustomResources(containerResource); Review comment: Wouldn't it be simpler if we'd introduce a local metrics = usageTracker.getMetrics(); variable? 
It would reduce the change, and it would also be a little bit faster (it doesn't matter much, but there is no need to call getMetrics this many times). ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueueUsageTracker.java ## @@ -0,0 +1,78 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage; + +public class AbstractCSQueueUsageTracker { Review comment: This class is not abstract, so the name is quite misleading; I think CSQueueUsageTracker would be better, or something else, but this name is a bit confusing. ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueueUsageTracker.java ## @@ -0,0 +1,78 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements.
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity; + +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueResourceQuotas; +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage; + +public class AbstractCSQueueUsageTracker { + private final CSQueueMetrics metrics; + private volatile int numContainers; + + /** + * The timestamp of the last submitted application to
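The reviewer's first point above — there is no need to call getMetrics() repeatedly when the tracker can hold the reference in a field — can be sketched as follows. This is a minimal, self-contained illustration with hypothetical stand-in types (QueueMetrics, QueueUsageTracker), not Hadoop's actual CSQueueMetrics/AbstractCSQueueUsageTracker classes:

```java
// Hypothetical stand-in for CSQueueMetrics, for illustration only.
class QueueMetrics {
    static int lookups = 0; // counts how often the metrics object is fetched

    static QueueMetrics getMetrics() {
        lookups++; // each call to the lookup is counted
        return new QueueMetrics();
    }
}

// Usage tracker that is handed the metrics object once and caches it,
// instead of calling getMetrics() on every update (the reviewer's suggestion).
class QueueUsageTracker {
    private final QueueMetrics metrics;

    QueueUsageTracker(QueueMetrics metrics) {
        this.metrics = metrics; // fetched once by the caller, reused afterwards
    }

    QueueMetrics getMetricsRef() {
        return metrics; // no repeated lookup on the hot path
    }
}
```

With the reference cached, every subsequent metrics update touches the stored field rather than going through the lookup again, which is the small speed-up the review comment mentions.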
[GitHub] [hadoop] jianghuazhu commented on a change in pull request #3170: HDFS-16107.Split RPC configuration to isolate RPC.
jianghuazhu commented on a change in pull request #3170: URL: https://github.com/apache/hadoop/pull/3170#discussion_r710656957 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java ## @@ -3192,23 +3198,47 @@ protected Server(String bindAddress, int port, if (queueSizePerHandler != -1) { this.maxQueueSize = handlerCount * queueSizePerHandler; } else { - this.maxQueueSize = handlerCount * conf.getInt( - CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY, - CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT); + String handlerQueueSizeStr = conf.get(getQueueClassPrefix() + "." + Review comment: Thanks @tomscut for the suggestion. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jianghuazhu commented on a change in pull request #3170: HDFS-16107.Split RPC configuration to isolate RPC.
jianghuazhu commented on a change in pull request #3170: URL: https://github.com/apache/hadoop/pull/3170#discussion_r710654807 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java ## @@ -830,9 +830,15 @@ private String getQueueClassPrefix() { public synchronized void refreshCallQueue(Configuration conf) { // Create the next queue String prefix = getQueueClassPrefix(); -this.maxQueueSize = handlerCount * conf.getInt( -CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY, -CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT); +String handlerQueueSizeStr = conf.get(prefix + "." + Review comment: Thanks @tomscut for the comment. This is my neglect, I will update it later. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17914) Print RPC response length in the exception message
[ https://issues.apache.org/jira/browse/HADOOP-17914?focusedWorklogId=652057&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652057 ] ASF GitHub Bot logged work on HADOOP-17914: --- Author: ASF GitHub Bot Created on: 17/Sep/21 01:35 Start Date: 17/Sep/21 01:35 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3436: URL: https://github.com/apache/hadoop/pull/3436#issuecomment-921380831 > Could you please push an empty commit to trigger CI again? > Could you please push an empty commit to trigger CI again? Ok, I'm going to push now. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652057) Time Spent: 1h 10m (was: 1h) > Print RPC response length in the exception message > -- > > Key: HADOOP-17914 > URL: https://issues.apache.org/jira/browse/HADOOP-17914 > Project: Hadoop Common > Issue Type: Improvement >Reporter: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > To facilitate problem tracking, we can print RPC Response Length in the > exception message. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
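The improvement HADOOP-17914 describes — carrying the RPC response length in the exception message so problems are easier to trace — can be illustrated with a small, self-contained sketch. The helper name and message wording here are hypothetical, not the actual Hadoop patch:

```java
import java.io.IOException;

class RpcResponseCheck {
    // Hypothetical helper: builds an exception whose message includes the
    // observed response length, so an oversized or truncated RPC response
    // can be diagnosed directly from the log line.
    static IOException responseTooLarge(int responseLength, int maxLength) {
        return new IOException("RPC response exceeds maximum data length"
                + ": response length = " + responseLength
                + ", max allowed = " + maxLength);
    }
}
```

The point of the change is purely diagnostic: the exception already fired before, but without the length the operator could not tell how far over the limit the response was.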
[jira] [Work logged] (HADOOP-17914) Print RPC response length in the exception message
[ https://issues.apache.org/jira/browse/HADOOP-17914?focusedWorklogId=652056&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652056 ] ASF GitHub Bot logged work on HADOOP-17914: --- Author: ASF GitHub Bot Created on: 17/Sep/21 01:34 Start Date: 17/Sep/21 01:34 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3436: URL: https://github.com/apache/hadoop/pull/3436#issuecomment-921380541 Thanks @tasanuma @ferhui for your review. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652056) Time Spent: 1h (was: 50m) > Print RPC response length in the exception message > -- > > Key: HADOOP-17914 > URL: https://issues.apache.org/jira/browse/HADOOP-17914 > Project: Hadoop Common > Issue Type: Improvement >Reporter: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > To facilitate problem tracking, we can print RPC Response Length in the > exception message. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416375#comment-17416375 ] Hadoop QA commented on HADOOP-16254:

-1 overall

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| 0 | reexec | 19m 40s | | Docker mode activated. |
_ Prechecks _
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| 0 | buf | 0m 0s | | buf was not available. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
_ trunk Compile Tests _
| 0 | mvndep | 13m 12s | | Maven dependency ordering for branch |
| +1 | mvninstall | 25m 8s | | trunk passed |
| +1 | compile | 23m 42s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | compile | 20m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 | checkstyle | 3m 46s | | trunk passed |
| +1 | mvnsite | 3m 51s | | trunk passed |
| +1 | shadedclient | 24m 23s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 3m 3s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | javadoc | 4m 17s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| 0 | spotbugs | 39m 27s | | Both FindBugs and SpotBugs are enabled, using SpotBugs. |
| +1 | spotbugs | 7m 46s | | trunk passed |
_ Patch Compile Tests _
| 0 | mvndep | 0m 25s | | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 46s | | the patch passed |
| +1 | compile | 23m 45s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| -1 | cc | 23m 45s | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/234/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 15 new + 308 unchanged - 15 fixed = 323 total (was 323) |
| +1 | javac | 23m 45s | | the patch passed |
| +1 | compile | 21m 53s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 | cc | 21m 53s | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/234/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt | root-jdk
[jira] [Work logged] (HADOOP-17902) Fix Hadoop build on Debian 10
[ https://issues.apache.org/jira/browse/HADOOP-17902?focusedWorklogId=652045&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652045 ] ASF GitHub Bot logged work on HADOOP-17902: --- Author: ASF GitHub Bot Created on: 17/Sep/21 00:49 Start Date: 17/Sep/21 00:49 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3408: URL: https://github.com/apache/hadoop/pull/3408#issuecomment-921364728 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | docker | 6m 32s | | Docker failed to build yetus/hadoop:ef5dbc7283a. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/3408 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652045) Time Spent: 5h 10m (was: 5h) > Fix Hadoop build on Debian 10 > - > > Key: HADOOP-17902 > URL: https://issues.apache.org/jira/browse/HADOOP-17902 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Blocker > Labels: pull-request-available > Time Spent: 5h 10m > Remaining Estimate: 0h > > We're using *Debian testing* as one of the package sources to get the latest > packages. It seems to be broken at the moment. 
The CI fails to create the > build environment for the Debian 10 platform - > {code} > [2021-09-08T00:21:11.596Z] #13 [ 8/14] RUN apt-get -q update && apt-get > -q install -y --no-install-recommends python3 && apt-get -q install -y > --no-install-recommends $(pkg-resolver/resolve.py debian:10) && apt-get > clean && rm -rf /var/lib/apt/lists/* > ... > [2021-09-08T00:21:22.744Z] #13 11.28 Preparing to unpack > .../libc6_2.31-17_amd64.deb ... > [2021-09-08T00:21:23.260Z] #13 11.46 Checking for services that may need to > be restarted... > [2021-09-08T00:21:23.260Z] #13 11.48 Checking init scripts... > [2021-09-08T00:21:23.260Z] #13 11.50 Unpacking libc6:amd64 (2.31-17) over > (2.28-10) ... > [2021-09-08T00:21:26.290Z] #13 14.38 Setting up libc6:amd64 (2.31-17) ... > [2021-09-08T00:21:26.290Z] #13 14.42 /usr/bin/perl: error while loading > shared libraries: libcrypt.so.1: cannot open shared object file: No such file > or directory > [2021-09-08T00:21:26.290Z] #13 14.42 dpkg: error processing package > libc6:amd64 (--configure): > [2021-09-08T00:21:26.290Z] #13 14.42 installed libc6:amd64 package > post-installation script subprocess returned error exit status 127 > [2021-09-08T00:21:26.291Z] #13 14.43 Errors were encountered while processing: > [2021-09-08T00:21:26.291Z] #13 14.43 libc6:amd64 > [2021-09-08T00:21:26.291Z] #13 14.46 E: Sub-process /usr/bin/dpkg returned an > error code (1) > [2021-09-08T00:21:27.867Z] #13 ERROR: executor failed running [/bin/bash -o > pipefail -c apt-get -q update && apt-get -q install -y > --no-install-recommends python3 && apt-get -q install -y > --no-install-recommends $(pkg-resolver/resolve.py debian:10) && apt-get > clean && rm -rf /var/lib/apt/lists/*]: exit code: 100 > [2021-09-08T00:21:27.867Z] -- > [2021-09-08T00:21:27.867Z] > [ 8/14] RUN apt-get -q update && apt-get -q > install -y --no-install-recommends python3 && apt-get -q install -y > --no-install-recommends $(pkg-resolver/resolve.py debian:10) && apt-get > clean && rm -rf 
/var/lib/apt/lists/*: > [2021-09-08T00:21:27.867Z] -- > [2021-09-08T00:21:27.867Z] executor failed running [/bin/bash -o pipefail -c > apt-get -q update && apt-get -q install -y --no-install-recommends > python3 && apt-get -q install -y --no-install-recommends > $(pkg-resolver/resolve.py debian:10) && apt-get clean && rm -rf > /var/lib/apt/lists/*]: exit code: 100 > [2021-09-08T00:21:27.867Z] ERROR: Docker failed to build > yetus/hadoop:ef5dbc7283a. > [2021-09-08T00:21:27.867Z] > {code} > The above log lines are copied from - > h
[jira] [Work logged] (HADOOP-17902) Fix Hadoop build on Debian 10
[ https://issues.apache.org/jira/browse/HADOOP-17902?focusedWorklogId=652043&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652043 ] ASF GitHub Bot logged work on HADOOP-17902: --- Author: ASF GitHub Bot Created on: 17/Sep/21 00:42 Start Date: 17/Sep/21 00:42 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3408: URL: https://github.com/apache/hadoop/pull/3408#issuecomment-921362538 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/console in case of problems. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652043) Time Spent: 5h (was: 4h 50m) > Fix Hadoop build on Debian 10 > - > > Key: HADOOP-17902 > URL: https://issues.apache.org/jira/browse/HADOOP-17902 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Blocker > Labels: pull-request-available > Time Spent: 5h > Remaining Estimate: 0h > > We're using *Debian testing* as one of the package sources to get the latest > packages. It seems to be broken at the moment. The CI fails to create the > build environment for the Debian 10 platform - > {code} > [2021-09-08T00:21:11.596Z] #13 [ 8/14] RUN apt-get -q update && apt-get > -q install -y --no-install-recommends python3 && apt-get -q install -y > --no-install-recommends $(pkg-resolver/resolve.py debian:10) && apt-get > clean && rm -rf /var/lib/apt/lists/* > ... 
> [2021-09-08T00:21:22.744Z] #13 11.28 Preparing to unpack > .../libc6_2.31-17_amd64.deb ... > [2021-09-08T00:21:23.260Z] #13 11.46 Checking for services that may need to > be restarted... > [2021-09-08T00:21:23.260Z] #13 11.48 Checking init scripts... > [2021-09-08T00:21:23.260Z] #13 11.50 Unpacking libc6:amd64 (2.31-17) over > (2.28-10) ... > [2021-09-08T00:21:26.290Z] #13 14.38 Setting up libc6:amd64 (2.31-17) ... > [2021-09-08T00:21:26.290Z] #13 14.42 /usr/bin/perl: error while loading > shared libraries: libcrypt.so.1: cannot open shared object file: No such file > or directory > [2021-09-08T00:21:26.290Z] #13 14.42 dpkg: error processing package > libc6:amd64 (--configure): > [2021-09-08T00:21:26.290Z] #13 14.42 installed libc6:amd64 package > post-installation script subprocess returned error exit status 127 > [2021-09-08T00:21:26.291Z] #13 14.43 Errors were encountered while processing: > [2021-09-08T00:21:26.291Z] #13 14.43 libc6:amd64 > [2021-09-08T00:21:26.291Z] #13 14.46 E: Sub-process /usr/bin/dpkg returned an > error code (1) > [2021-09-08T00:21:27.867Z] #13 ERROR: executor failed running [/bin/bash -o > pipefail -c apt-get -q update && apt-get -q install -y > --no-install-recommends python3 && apt-get -q install -y > --no-install-recommends $(pkg-resolver/resolve.py debian:10) && apt-get > clean && rm -rf /var/lib/apt/lists/*]: exit code: 100 > [2021-09-08T00:21:27.867Z] -- > [2021-09-08T00:21:27.867Z] > [ 8/14] RUN apt-get -q update && apt-get -q > install -y --no-install-recommends python3 && apt-get -q install -y > --no-install-recommends $(pkg-resolver/resolve.py debian:10) && apt-get > clean && rm -rf /var/lib/apt/lists/*: > [2021-09-08T00:21:27.867Z] -- > [2021-09-08T00:21:27.867Z] executor failed running [/bin/bash -o pipefail -c > apt-get -q update && apt-get -q install -y --no-install-recommends > python3 && apt-get -q install -y --no-install-recommends > $(pkg-resolver/resolve.py debian:10) && apt-get clean && rm -rf > 
/var/lib/apt/lists/*]: exit code: 100 > [2021-09-08T00:21:27.867Z] ERROR: Docker failed to build > yetus/hadoop:ef5dbc7283a. > [2021-09-08T00:21:27.867Z] > {code} > The above log lines are copied from - > https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3388/3/pipeline -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17902) Fix Hadoop build on Debian 10
[ https://issues.apache.org/jira/browse/HADOOP-17902?focusedWorklogId=652042&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652042 ] ASF GitHub Bot logged work on HADOOP-17902: --- Author: ASF GitHub Bot Created on: 17/Sep/21 00:40 Start Date: 17/Sep/21 00:40 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3408: URL: https://github.com/apache/hadoop/pull/3408#issuecomment-921361827 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 24m 15s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | hadolint | 0m 0s | | hadolint was not available. | | +0 :ok: | shellcheck | 0m 0s | | Shellcheck was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 23m 40s | | trunk passed | | +1 :green_heart: | compile | 2m 42s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | | trunk passed | | +1 :green_heart: | shadedclient | 47m 57s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 2m 32s | | the patch passed | | +1 :green_heart: | cc | 2m 32s | | the patch passed | | +1 :green_heart: | golang | 2m 32s | | the patch passed | | +1 :green_heart: | javac | 2m 32s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | mvnsite | 0m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 41s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 84m 42s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. | | | | 183m 54s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3408 | | Optional Tests | dupname asflicense codespell hadolint shellcheck shelldocs compile cc mvnsite javac unit golang | | uname | Linux a71d82182a18 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f1f803de116b35d93fcb48fb3b1ab0c29d94a2e1 | | Default Java | Red Hat, Inc.-1.8.0_302-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/testReport/ | | Max. process+thread count | 520 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/console | | versions | git=2.27.0 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652042) Time Spent: 4h 50m (was: 4h 40m) > Fix Hadoop build on Debian 10 > - > > Key: HADOOP-17902 > URL: https://issues.apache.org/jira/browse/HADOOP-17902 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Blocker > Labels: pull-request-available > Time Spent: 4h 50m > Remaining Estimate: 0h > > We're using *Debian testing* as one of th
[GitHub] [hadoop] tomscut commented on a change in pull request #3170: HDFS-16107. Split RPC configuration to isolate RPC.
tomscut commented on a change in pull request #3170: URL: https://github.com/apache/hadoop/pull/3170#discussion_r710602685

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
## @@ -3192,23 +3198,47 @@ protected Server(String bindAddress, int port,
       if (queueSizePerHandler != -1) {
         this.maxQueueSize = handlerCount * queueSizePerHandler;
       } else {
-        this.maxQueueSize = handlerCount * conf.getInt(
-            CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY,
-            CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT);
+        String handlerQueueSizeStr = conf.get(getQueueClassPrefix() + "." +

Review comment: Could we also use ```conf.getInt``` here, as well as in the next few places?

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
## @@ -830,9 +830,15 @@ private String getQueueClassPrefix() {
   public synchronized void refreshCallQueue(Configuration conf) {
     // Create the next queue
     String prefix = getQueueClassPrefix();
-    this.maxQueueSize = handlerCount * conf.getInt(
-        CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY,
-        CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT);
+    String handlerQueueSizeStr = conf.get(prefix + "." +

Review comment: Would it be better to use ```conf.getInt``` here?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
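The reviewer's point — prefer `Configuration.getInt(key, default)` over `conf.get(key)` plus manual parsing — can be sketched outside Hadoop with a tiny stand-in for `Configuration`. Everything here (`MiniConf`, `QueueSizeExample`, the default of 100) is illustrative, not the actual Hadoop patch; the shape of the lookup is the point.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for Hadoop's Configuration, just enough to show the
// getInt(key, default) pattern the reviewer suggested: the fallback logic
// lives in one call instead of a get() followed by Integer.parseInt.
class MiniConf {
    private final Map<String, String> props = new HashMap<>();

    void set(String key, String value) { props.put(key, value); }

    int getInt(String key, int defaultValue) {
        String v = props.get(key);
        return (v == null) ? defaultValue : Integer.parseInt(v.trim());
    }
}

public class QueueSizeExample {
    // Key name mirrors CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY;
    // the default value here is illustrative.
    static final String QUEUE_SIZE_KEY = "ipc.server.handler.queue.size";
    static final int QUEUE_SIZE_DEFAULT = 100;

    static int maxQueueSize(MiniConf conf, String prefix, int handlerCount) {
        // The prefixed (per-queue) key falls back to the global key, which
        // falls back to the compiled-in default -- no string handling needed.
        int perHandler = conf.getInt(prefix + "." + QUEUE_SIZE_KEY,
                conf.getInt(QUEUE_SIZE_KEY, QUEUE_SIZE_DEFAULT));
        return handlerCount * perHandler;
    }

    public static void main(String[] args) {
        MiniConf conf = new MiniConf();
        System.out.println(maxQueueSize(conf, "ipc.8020", 10)); // global default: 10 * 100
        conf.set("ipc.8020." + QUEUE_SIZE_KEY, "250");
        System.out.println(maxQueueSize(conf, "ipc.8020", 10)); // prefixed override: 10 * 250
    }
}
```

Note that `getInt` still throws `NumberFormatException` on a malformed value, so this trades explicit parsing for the standard fail-fast behavior of `Configuration`.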
[GitHub] [hadoop] hadoop-yetus commented on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer
hadoop-yetus commented on pull request #2971: URL: https://github.com/apache/hadoop/pull/2971#issuecomment-921337032 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 19m 44s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 27 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 57s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 27m 22s | | trunk passed | | -1 :x: | compile | 20m 1s | [/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/37/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | +1 :green_heart: | compile | 24m 35s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 4m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 20s | | trunk passed | | +1 :green_heart: | javadoc | 3m 11s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 36s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 25m 19s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 25m 47s | | Used diff version of patch file. Binary files and potentially other changes not applied. 
Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 47s | | the patch passed | | +1 :green_heart: | compile | 28m 25s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 28m 25s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/37/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 201 new + 1714 unchanged - 0 fixed = 1915 total (was 1714) | | +1 :green_heart: | compile | 24m 30s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 24m 30s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/37/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 1790 unchanged - 0 fixed = 1792 total (was 1790) | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/37/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 4m 29s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/37/artifact/out/results-checkstyle-root.txt) | root: The patch generated 24 new + 8 unchanged - 0 fixed = 32 total (was 8) | | +1 :green_heart: | mvnsite | 4m 16s | | the patch passed | | +1 :green_heart: | xml | 0m 15s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 3m 6s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 37s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 32s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 25m 11s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 30s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 1
[jira] [Work logged] (HADOOP-17902) Fix Hadoop build on Debian 10
[ https://issues.apache.org/jira/browse/HADOOP-17902?focusedWorklogId=652009&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-652009 ] ASF GitHub Bot logged work on HADOOP-17902: --- Author: ASF GitHub Bot Created on: 16/Sep/21 21:36 Start Date: 16/Sep/21 21:36 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3408: URL: https://github.com/apache/hadoop/pull/3408#issuecomment-921272435 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 43m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | hadolint | 0m 0s | | hadolint was not available. | | +0 :ok: | shellcheck | 0m 0s | | Shellcheck was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 45s | | trunk passed | | +1 :green_heart: | compile | 2m 32s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 29s | | trunk passed | | +1 :green_heart: | shadedclient | 58m 40s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 2m 29s | | the patch passed | | +1 :green_heart: | cc | 2m 29s | | the patch passed | | +1 :green_heart: | golang | 2m 29s | | the patch passed | | +1 :green_heart: | javac | 2m 29s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | mvnsite | 0m 16s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 47s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 78m 2s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. | | | | 206m 18s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3408 | | Optional Tests | dupname asflicense codespell hadolint shellcheck shelldocs compile cc mvnsite javac unit golang | | uname | Linux 9de2b9889d53 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f1f803de116b35d93fcb48fb3b1ab0c29d94a2e1 | | Default Java | Red Hat, Inc.-1.8.0_302-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/testReport/ | | Max. process+thread count | 593 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/console | | versions | git=2.9.5 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 652009) Time Spent: 4h 40m (was: 4.5h) > Fix Hadoop build on Debian 10 > - > > Key: HADOOP-17902 > URL: https://issues.apache.org/jira/browse/HADOOP-17902 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Blocker > Labels: pull-request-available > Time Spent: 4h 40m > Remaining Estimate: 0h > > We're using *Debian testing* as one of the p
[GitHub] [hadoop] hadoop-yetus commented on pull request #3408: HADOOP-17902. Fix Hadoop build on Debian 10
hadoop-yetus commented on pull request #3408: URL: https://github.com/apache/hadoop/pull/3408#issuecomment-921272435 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 43m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | hadolint | 0m 0s | | hadolint was not available. | | +0 :ok: | shellcheck | 0m 0s | | Shellcheck was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 45s | | trunk passed | | +1 :green_heart: | compile | 2m 32s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 29s | | trunk passed | | +1 :green_heart: | shadedclient | 58m 40s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 2m 29s | | the patch passed | | +1 :green_heart: | cc | 2m 29s | | the patch passed | | +1 :green_heart: | golang | 2m 29s | | the patch passed | | +1 :green_heart: | javac | 2m 29s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 16s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 47s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 78m 2s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 206m 18s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3408 | | Optional Tests | dupname asflicense codespell hadolint shellcheck shelldocs compile cc mvnsite javac unit golang | | uname | Linux 9de2b9889d53 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f1f803de116b35d93fcb48fb3b1ab0c29d94a2e1 | | Default Java | Red Hat, Inc.-1.8.0_302-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/testReport/ | | Max. process+thread count | 593 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3408/6/console | | versions | git=2.9.5 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3440: ABFS: Support for Encryption Context
hadoop-yetus commented on pull request #3440: URL: https://github.com/apache/hadoop/pull/3440#issuecomment-921180882 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 16m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 26s | | trunk passed | | +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 42s | | trunk passed | | +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 4s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 32s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 27s | [/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/3/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt) | hadoop-azure in the patch failed. 
| | -1 :x: | compile | 0m 32s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/3/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-azure in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javac | 0m 32s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/3/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-azure in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 28s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/3/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-azure in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | javac | 0m 28s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/3/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-azure in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 19s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 11 new + 9 unchanged - 0 fixed = 20 total (was 9) | | -1 :x: | mvnsite | 0m 29s | [/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/3/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt) | hadoop-azure in the patch failed. | | -1 :x: | javadoc | 0m 23s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/3/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) | | -1 :x: | javadoc | 0m 23s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/3/artifact/out/results-jav
[GitHub] [hadoop] hadoop-yetus commented on pull request #3448: HDFS-11045. Make TestDirectoryScanner#testThrottling() not based on timing
hadoop-yetus commented on pull request #3448: URL: https://github.com/apache/hadoop/pull/3448#issuecomment-921108962 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 43s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 29s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 5s | | trunk passed | | +1 :green_heart: | shadedclient | 30m 41s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 10s | | the patch passed | | +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 6s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 6s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 51s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 23 unchanged - 0 fixed = 27 total (was 23) | | +1 :green_heart: | mvnsite | 1m 11s | | the patch passed | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 4s | | the patch passed | | +1 :green_heart: | shadedclient | 30m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 226m 27s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. 
| | | | 339m 47s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3448 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux bdad6b7041db 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9c5c5bc2e63dc2b57d98df55675f7d7de0c3ac64 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/2/testReport/ | | Max. process+thread count | 3160 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3
[jira] [Work logged] (HADOOP-17891) lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
[ https://issues.apache.org/jira/browse/HADOOP-17891?focusedWorklogId=651892&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651892 ] ASF GitHub Bot logged work on HADOOP-17891: --- Author: ASF GitHub Bot Created on: 16/Sep/21 17:13 Start Date: 16/Sep/21 17:13 Worklog Time Spent: 10m Work Description: hadoop-yetus removed a comment on pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#issuecomment-920497973 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 51s | | trunk passed | | +1 :green_heart: | compile | 0m 19s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 18s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | mvnsite | 0m 21s | | trunk passed | | +1 :green_heart: | javadoc | 0m 20s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 19s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | shadedclient | 21m 25s | | branch has errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 0m 11s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 11s | | the patch passed | | +1 :green_heart: | compile | 0m 11s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 14s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 1s | | No new issues. | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | shadedclient | 22m 2s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 15s | | hadoop-client-integration-tests in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. 
| | | | 85m 24s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3441/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3441 | | Optional Tests | dupname asflicense mvnsite unit codespell shellcheck shelldocs compile javac javadoc mvninstall shadedclient xml | | uname | Linux 4a733f0f6333 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / bf80325b20263c58cc79e7112594f6acb38528e6 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3441/2/testReport/ | | Max. process+thread count | 572 (vs. ulimit of 5500) | | modules | C: hadoop-client-modules/hadoop-client-integration-tests U: hadoop-client-modules/hadoop-client-integration-tests | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3441/2/console
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3441: HADOOP-17891. Fix compilation error under skipShade (ADDENDUM)
hadoop-yetus removed a comment on pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#issuecomment-920497973 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 51s | | trunk passed | | +1 :green_heart: | compile | 0m 19s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 18s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | mvnsite | 0m 21s | | trunk passed | | +1 :green_heart: | javadoc | 0m 20s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 19s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | shadedclient | 21m 25s | | branch has errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 0m 11s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 11s | | the patch passed | | +1 :green_heart: | compile | 0m 11s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 14s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 1s | | No new issues. | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | shadedclient | 22m 2s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 15s | | hadoop-client-integration-tests in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. 
| | | | 85m 24s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3441/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3441 | | Optional Tests | dupname asflicense mvnsite unit codespell shellcheck shelldocs compile javac javadoc mvninstall shadedclient xml | | uname | Linux 4a733f0f6333 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / bf80325b20263c58cc79e7112594f6acb38528e6 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3441/2/testReport/ | | Max. process+thread count | 572 (vs. ulimit of 5500) | | modules | C: hadoop-client-modules/hadoop-client-integration-tests U: hadoop-client-modules/hadoop-client-integration-tests | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3441/2/console | | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry
[ https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=651891&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651891 ] ASF GitHub Bot logged work on HADOOP-15566: --- Author: ASF GitHub Bot Created on: 16/Sep/21 17:12 Start Date: 16/Sep/21 17:12 Worklog Time Spent: 10m Work Description: ArkenKiran edited a comment on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-921081392

How to test locally (instructions for Linux):

1. Add the following Maven dependency to hadoop-common/pom.xml:
```
<dependency>
  <groupId>io.opentelemetry.javaagent</groupId>
  <artifactId>opentelemetry-javaagent</artifactId>
  <version>${opentelemetry-instrumentation.version}</version>
  <classifier>all</classifier>
</dependency>
```
2. Build using the following command:
```
mvn install -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true
```
3. Extract the tar from `hadoop-dist/target/` (e.g. `tar -xvf hadoop-dist/target/hadoop-3.4.0-SNAPSHOT.tar.gz -C ~/`)
4. `cd ~/hadoop-3.4.0-SNAPSHOT/`
5. Uncomment the following lines in hadoop-env.sh (location: etc/hadoop/hadoop-env.sh):
```
# export OPENTELEMETRY_JAVAAGENT_PATH="$(find $HADOOP_HOME/share/hadoop/common/lib/ -name opentelemetry-javaagent*)"
# export HADOOP_TRACE_OPTS="-javaagent:$OPENTELEMETRY_JAVAAGENT_PATH -Dotel.traces.exporter=jaeger -Dotel.metrics.exporter=none"
# export HDFS_NAMENODE_OPTS="$HDFS_NAMENODE_OPTS $HADOOP_TRACE_OPTS -Dotel.resource.attributes=service.name=HDFS_NAMENODE"
# export HDFS_DATANODE_OPTS="$HDFS_DATANODE_OPTS $HADOOP_TRACE_OPTS -Dotel.resource.attributes=service.name=HDFS_DATANODE"
# export HDFS_SECONDARYNAMENODE_OPTS="$HDFS_SECONDARYNAMENODE_OPTS $HADOOP_TRACE_OPTS -Dotel.resource.attributes=service.name=HDFS_SECONDARYNAMENODE"
```
6. Make sure `share/hadoop/common/lib/` contains the dependency added in step 1 (opentelemetry-javaagent-1.3.0-all.jar).
7. Start Jaeger following the instructions at https://www.jaegertracing.io/docs/1.26/getting-started/
8. 
Set up a single-node cluster following the instructions at https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation 9. You should be able to see the traces in the Jaeger UI (http://localhost:16686/search) Issue Time Tracking --- Worklog Id: (was: 651891) Time Spent: 3h 50m (was: 3h 40m) > Support OpenTelemetry > - > > Key: HADOOP-15566 > URL: https://issues.apache.org/jira/browse/HADOOP-15566 > Project: Hadoop Common > Issue Type: New Feature > Components: metrics, tracing >Affects Versions: 3.1.0 >Reporter: Todd Lipcon >Assignee: Siyao Meng >Priority: Major > Labels: pull-request-available, security > Attachments: HADOOP-15566-WIP.1.patch, HADOOP-15566.000.WIP.patch, > OpenTelemetry Support Scope Doc v2.pdf, OpenTracing Support Scope Doc.pdf, > Screen Shot 2018-06-29 at 11.59.16 AM.png, ss-trace-s3a.png > > Time Spent: 3h 50m > Remaining Estimate: 0h > > The HTrace incubator project has voted to retire itself and won't be making > further releases. The Hadoop project currently has various hooks with HTrace. > It seems in some cases (eg HDFS-13702) these hooks have had measurable > performance overhead. Given these two factors, I think we should consider > removing the HTrace integration. If there is someone willing to do the work, > replacing it with OpenTracing might be a better choice since there is an > active community. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
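Step 5 above can also be scripted instead of edited by hand. A minimal sketch, assuming the target lines are commented exactly as `# export ...` in a stock etc/hadoop/hadoop-env.sh (the function name here is illustrative, not part of Hadoop):

```shell
# enable_hadoop_tracing: uncomment the OpenTelemetry-related exports in the
# hadoop-env.sh passed as $1. Assumes each target line begins with "# export".
enable_hadoop_tracing() {
  local env_file="${1:-etc/hadoop/hadoop-env.sh}"
  sed -i \
    's/^# export \(OPENTELEMETRY_JAVAAGENT_PATH\|HADOOP_TRACE_OPTS\|HDFS_[A-Z]*_OPTS\)/export \1/' \
    "$env_file"
}
```

Other commented lines in the file are left untouched, since the pattern only matches the variable names used by the tracing setup.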
[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry
[ https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=651890&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651890 ] ASF GitHub Bot logged work on HADOOP-15566: --- Author: ASF GitHub Bot Created on: 16/Sep/21 17:12 Start Date: 16/Sep/21 17:12 Worklog Time Spent: 10m Work Description: ArkenKiran commented on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-921081736 @minni31 please follow the above mentioned instructions to setup locally Issue Time Tracking --- Worklog Id: (was: 651890) Time Spent: 3h 40m (was: 3.5h)
[GitHub] [hadoop] ArkenKiran edited a comment on pull request #3445: HADOOP-15566 Opentelemtery changes using java agent
ArkenKiran edited a comment on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-921081392
[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry
[ https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=651889&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651889 ] ASF GitHub Bot logged work on HADOOP-15566: --- Author: ASF GitHub Bot Created on: 16/Sep/21 17:11 Start Date: 16/Sep/21 17:11 Worklog Time Spent: 10m Work Description: ArkenKiran commented on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-921081392 Issue Time Tracking --- Worklog Id: (was: 651889) Time Spent: 3.5h (was: 3h 20m)
[GitHub] [hadoop] ArkenKiran commented on pull request #3445: HADOOP-15566 Opentelemtery changes using java agent
ArkenKiran commented on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-921081736 @minni31 please follow the above mentioned instructions to setup locally
[GitHub] [hadoop] ArkenKiran commented on pull request #3445: HADOOP-15566 Opentelemtery changes using java agent
ArkenKiran commented on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-921081392
[jira] [Work logged] (HADOOP-17891) lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
[ https://issues.apache.org/jira/browse/HADOOP-17891?focusedWorklogId=651885&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651885 ] ASF GitHub Bot logged work on HADOOP-17891: --- Author: ASF GitHub Bot Created on: 16/Sep/21 17:06 Start Date: 16/Sep/21 17:06 Worklog Time Spent: 10m Work Description: viirya commented on pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#issuecomment-921077732 Thank you @sunchao @ayushtkn @iwasakims ! Issue Time Tracking --- Worklog Id: (was: 651885) Time Spent: 17.5h (was: 17h 20m) > lz4-java and snappy-java should be excluded from relocation in shaded Hadoop > libraries > -- > > Key: HADOOP-17891 > URL: https://issues.apache.org/jira/browse/HADOOP-17891 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.1 >Reporter: L. C. Hsieh >Assignee: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2 > > Attachments: HADOOP-17891-Addendum-01.patch > > Time Spent: 17.5h > Remaining Estimate: 0h > > lz4-java is a provided dependency. So in the shaded Hadoop libraries, e.g. > hadoop-client-api, if we don't exclude lz4 dependency, the downstream will > still see the exception even they include lz4 dependency. 
> {code:java} > [info] Cause: java.lang.ClassNotFoundException: > org.apache.hadoop.shaded.net.jpountz.lz4.LZ4Factory > [info] at java.net.URLClassLoader.findClass(URLClassLoader.java:382) > [info] at java.lang.ClassLoader.loadClass(ClassLoader.java:418) > [info] at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) > [info] at java.lang.ClassLoader.loadClass(ClassLoader.java:351) > [info] at > org.apache.hadoop.io.compress.lz4.Lz4Compressor.(Lz4Compressor.java:66) > [info] at > org.apache.hadoop.io.compress.Lz4Codec.createCompressor(Lz4Codec.java:119) > [info] at > org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:152) > [info] at > org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:168) > {code} > Currently snappy-java is included and relocated in Hadoop shaded client > libraries. But as it includes native methods, it should not be relocated too > due to JNI method resolution. The downstream will see the exception: > {code} > [info] Cause: java.lang.UnsatisfiedLinkError: > org.apache.hadoop.shaded.org.xerial.snappy.SnappyNative.rawCompress(Ljava/nio/ByteBuffer;IILjava/nio/ByteBuffer;I)I > [info] at > org.apache.hadoop.shaded.org.xerial.snappy.SnappyNative.rawCompress(Native > Method) > > [info] at > org.apache.hadoop.shaded.org.xerial.snappy.Snappy.compress(Snappy.java:151) > > > [info] at > org.apache.hadoop.io.compress.snappy.SnappyCompressor.compressDirectBuf(SnappyCompressor.java:282) > [info] at > org.apache.hadoop.io.compress.snappy.SnappyCompressor.compress(SnappyCompressor.java:210) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
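The UnsatisfiedLinkError quoted above follows from how the JVM resolves native methods: the C symbol it looks up is derived from the fully qualified class name, so relocating a JNI-backed class changes the symbol while the native library still exports the original one. A small illustrative sketch (hypothetical class, simplified mangling that ignores JNI escape sequences for '_' and Unicode, which do not occur in these names):

```java
// Demonstrates why shading/relocating snappy-java's SnappyNative breaks:
// the JVM maps a native method to a C symbol built from the full class name.
public class JniSymbolDemo {
    static String jniSymbol(String fqcn, String method) {
        // Simplified JNI short-name mangling: Java_<class>_<method>,
        // with '.' replaced by '_'.
        return "Java_" + fqcn.replace('.', '_') + "_" + method;
    }

    public static void main(String[] args) {
        // Symbol actually exported by the bundled snappy native library:
        System.out.println(jniSymbol(
            "org.xerial.snappy.SnappyNative", "rawCompress"));
        // Symbol the relocated class would look up -- it does not exist,
        // hence the UnsatisfiedLinkError in the issue description:
        System.out.println(jniSymbol(
            "org.apache.hadoop.shaded.org.xerial.snappy.SnappyNative", "rawCompress"));
    }
}
```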
[GitHub] [hadoop] viirya commented on pull request #3441: HADOOP-17891. Fix compilation error under skipShade (ADDENDUM)
viirya commented on pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#issuecomment-921077732 Thank you @sunchao @ayushtkn @iwasakims !
[jira] [Work logged] (HADOOP-17891) lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
[ https://issues.apache.org/jira/browse/HADOOP-17891?focusedWorklogId=651879&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651879 ] ASF GitHub Bot logged work on HADOOP-17891: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:58 Start Date: 16/Sep/21 16:58 Worklog Time Spent: 10m Work Description: sunchao merged pull request #3441: URL: https://github.com/apache/hadoop/pull/3441 Issue Time Tracking --- Worklog Id: (was: 651879) Time Spent: 17h 10m (was: 17h)
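On the build side, keeping these two libraries out of relocation comes down to maven-shade-plugin configuration. A hypothetical sketch of what such an exclusion looks like — the patterns and surrounding structure are illustrative, not the actual hadoop-client-modules poms:

```xml
<!-- Hypothetical maven-shade-plugin fragment: relocate bundled third-party
     classes, but leave JNI-backed snappy-java and provided lz4-java at
     their original coordinates so native symbols and downstream-supplied
     jars still resolve. -->
<relocations>
  <relocation>
    <pattern>org</pattern>
    <shadedPattern>org.apache.hadoop.shaded.org</shadedPattern>
    <excludes>
      <!-- snappy-java: native symbols are bound to the original package -->
      <exclude>org.xerial.snappy.**</exclude>
      <!-- Hadoop's own classes keep their public names -->
      <exclude>org.apache.hadoop.**</exclude>
    </excludes>
  </relocation>
  <relocation>
    <pattern>net</pattern>
    <shadedPattern>org.apache.hadoop.shaded.net</shadedPattern>
    <excludes>
      <!-- lz4-java is provided; a relocated reference would never resolve -->
      <exclude>net.jpountz.**</exclude>
    </excludes>
  </relocation>
</relocations>
```

With this shape, classes under `org.xerial.snappy` and `net.jpountz` are copied into the shaded jar (or supplied downstream) unrenamed, which is exactly what the ClassNotFoundException and UnsatisfiedLinkError in the issue description require.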
[jira] [Work logged] (HADOOP-17891) lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
[ https://issues.apache.org/jira/browse/HADOOP-17891?focusedWorklogId=651880&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651880 ] ASF GitHub Bot logged work on HADOOP-17891: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:58 Start Date: 16/Sep/21 16:58 Worklog Time Spent: 10m Work Description: sunchao commented on pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#issuecomment-921072616 Merged to trunk. Thanks @viirya ! Issue Time Tracking --- Worklog Id: (was: 651880) Time Spent: 17h 20m (was: 17h 10m)
[GitHub] [hadoop] sunchao commented on pull request #3441: HADOOP-17891. Fix compilation error under skipShade (ADDENDUM)
sunchao commented on pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#issuecomment-921072616 Merged to trunk. Thanks @viirya !
[GitHub] [hadoop] sunchao merged pull request #3441: HADOOP-17891. Fix compilation error under skipShade (ADDENDUM)
sunchao merged pull request #3441: URL: https://github.com/apache/hadoop/pull/3441
[jira] [Work logged] (HADOOP-17891) lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
[ https://issues.apache.org/jira/browse/HADOOP-17891?focusedWorklogId=651877&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651877 ] ASF GitHub Bot logged work on HADOOP-17891: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:57 Start Date: 16/Sep/21 16:57 Worklog Time Spent: 10m Work Description: sunchao commented on a change in pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#discussion_r710305447

## File path: hadoop-client-modules/hadoop-client-integration-tests/pom.xml
##
@@ -191,6 +191,12 @@
       <scope>test</scope>
       <type>test-jar</type>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <scope>test</scope>
+      <type>test-jar</type>
+    </dependency>

Review comment: Yes it looks fine to me too.

Issue Time Tracking --- Worklog Id: (was: 651877) Time Spent: 17h (was: 16h 50m)
[GitHub] [hadoop] sunchao commented on a change in pull request #3441: HADOOP-17891. Fix compilation error under skipShade (ADDENDUM)
sunchao commented on a change in pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#discussion_r710305447 ## File path: hadoop-client-modules/hadoop-client-integration-tests/pom.xml ## Review comment: Yes it looks fine to me too.
[jira] [Updated] (HADOOP-17916) Fix compilation error of ITUseHadoopCodecs with -DskipShade
[ https://issues.apache.org/jira/browse/HADOOP-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-17916: -- Resolution: Duplicate Status: Resolved (was: Patch Available) closing this as duplicate of *addendum of* HADOOP-17891.

> Fix compilation error of ITUseHadoopCodecs with -DskipShade
> ---
>
> Key: HADOOP-17916
> URL: https://issues.apache.org/jira/browse/HADOOP-17916
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Masatake Iwasaki
> Assignee: Masatake Iwasaki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> {noformat}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile (default-testCompile) on project hadoop-client-integration-tests: Compilation failure: Compilation failure:
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[34,28] cannot find symbol
> [ERROR]   symbol:   class RandomDatum
> [ERROR]   location: package org.apache.hadoop.io
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[100,16] package RandomDatum does not exist
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[100,54] package RandomDatum does not exist
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[103,7] cannot find symbol
> [ERROR]   symbol:   class RandomDatum
> [ERROR]   location: class org.apache.hadoop.example.ITUseHadoopCodecs
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[104,7] cannot find symbol
> [ERROR]   symbol:   class RandomDatum
> [ERROR]   location: class org.apache.hadoop.example.ITUseHadoopCodecs
> {noformat}
[jira] [Work logged] (HADOOP-17916) Fix compilation error of ITUseHadoopCodecs with -DskipShade
[ https://issues.apache.org/jira/browse/HADOOP-17916?focusedWorklogId=651868&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651868 ] ASF GitHub Bot logged work on HADOOP-17916: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:45 Start Date: 16/Sep/21 16:45 Worklog Time Spent: 10m Work Description: iwasakims closed pull request #3447: URL: https://github.com/apache/hadoop/pull/3447 Issue Time Tracking --- Worklog Id: (was: 651868) Time Spent: 1h (was: 50m)
[GitHub] [hadoop] iwasakims closed pull request #3447: HADOOP-17916. Fix compilation error of ITUseHadoopCodecs with -DskipS…
iwasakims closed pull request #3447: URL: https://github.com/apache/hadoop/pull/3447
[jira] [Work logged] (HADOOP-17891) lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
[ https://issues.apache.org/jira/browse/HADOOP-17891?focusedWorklogId=651866&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651866 ] ASF GitHub Bot logged work on HADOOP-17891: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:42 Start Date: 16/Sep/21 16:42 Worklog Time Spent: 10m Work Description: iwasakims commented on a change in pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#discussion_r710287622

## File path: hadoop-client-modules/hadoop-client-integration-tests/pom.xml

@@ -191,6 +191,12 @@
       <scope>test</scope>
       <type>test-jar</type>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <scope>test</scope>
+      <type>test-jar</type>
+    </dependency>

Review comment: It worked as far as I tried on #3447, while I think it would be ok to have dedicated section for both jar and test-jar side-by-side.

Issue Time Tracking --- Worklog Id: (was: 651866) Time Spent: 16h 50m (was: 16h 40m)

> lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
> ---
>
> Key: HADOOP-17891
> URL: https://issues.apache.org/jira/browse/HADOOP-17891
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.3.1
> Reporter: L. C. Hsieh
> Assignee: L. C. Hsieh
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
> Attachments: HADOOP-17891-Addendum-01.patch
>
> Time Spent: 16h 50m
> Remaining Estimate: 0h
>
> lz4-java is a provided dependency. So in the shaded Hadoop libraries, e.g. hadoop-client-api, if we don't exclude lz4 dependency, the downstream will still see the exception even they include lz4 dependency.
[GitHub] [hadoop] iwasakims commented on a change in pull request #3441: HADOOP-17891. Fix compilation error under skipShade (ADDENDUM)
iwasakims commented on a change in pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#discussion_r710287622

## File path: hadoop-client-modules/hadoop-client-integration-tests/pom.xml

@@ -191,6 +191,12 @@
       <scope>test</scope>
       <type>test-jar</type>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <scope>test</scope>
+      <type>test-jar</type>
+    </dependency>

Review comment: It worked as far as I tried on #3447, while I think it would be ok to have dedicated section for both jar and test-jar side-by-side.
[jira] [Work logged] (HADOOP-17891) lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
[ https://issues.apache.org/jira/browse/HADOOP-17891?focusedWorklogId=651864&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651864 ] ASF GitHub Bot logged work on HADOOP-17891: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:39 Start Date: 16/Sep/21 16:39 Worklog Time Spent: 10m Work Description: iwasakims commented on a change in pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#discussion_r710287622

## File path: hadoop-client-modules/hadoop-client-integration-tests/pom.xml

@@ -191,6 +191,12 @@
       <scope>test</scope>
       <type>test-jar</type>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <scope>test</scope>
+      <type>test-jar</type>
+    </dependency>

Review comment: It worked as far as I tried on #3447.

Issue Time Tracking --- Worklog Id: (was: 651864) Time Spent: 16h 40m (was: 16.5h)

> lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
> ---
>
> Key: HADOOP-17891
> URL: https://issues.apache.org/jira/browse/HADOOP-17891
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.3.1
> Reporter: L. C. Hsieh
> Assignee: L. C. Hsieh
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
> Attachments: HADOOP-17891-Addendum-01.patch
>
> Time Spent: 16h 40m
> Remaining Estimate: 0h
>
> lz4-java is a provided dependency. So in the shaded Hadoop libraries, e.g. hadoop-client-api, if we don't exclude lz4 dependency, the downstream will still see the exception even they include lz4 dependency.
[GitHub] [hadoop] iwasakims commented on a change in pull request #3441: HADOOP-17891. Fix compilation error under skipShade (ADDENDUM)
iwasakims commented on a change in pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#discussion_r710287622

## File path: hadoop-client-modules/hadoop-client-integration-tests/pom.xml

@@ -191,6 +191,12 @@
       <scope>test</scope>
       <type>test-jar</type>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <scope>test</scope>
+      <type>test-jar</type>
+    </dependency>

Review comment: It worked as far as I tried on #3447.
[jira] [Work logged] (HADOOP-17126) implement non-guava Precondition checkNotNull
[ https://issues.apache.org/jira/browse/HADOOP-17126?focusedWorklogId=651863&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651863 ] ASF GitHub Bot logged work on HADOOP-17126: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:34 Start Date: 16/Sep/21 16:34 Worklog Time Spent: 10m Work Description: amahussein commented on pull request #3050: URL: https://github.com/apache/hadoop/pull/3050#issuecomment-921055717 @steveloughran This PR was one of the steps to replace the guava dependency in the common module. Can you please take a look? I remember you suggested in #2134 to change the package name. Issue Time Tracking --- Worklog Id: (was: 651863) Time Spent: 2.5h (was: 2h 20m)

> implement non-guava Precondition checkNotNull
> ---
>
> Key: HADOOP-17126
> URL: https://issues.apache.org/jira/browse/HADOOP-17126
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Ahmed Hussein
> Assignee: Ahmed Hussein
> Priority: Major
> Labels: pull-request-available
> Attachments: HADOOP-17126.001.patch, HADOOP-17126.002.patch
>
> Time Spent: 2.5h
> Remaining Estimate: 0h
>
> In order to replace Guava Preconditions, we need to implement our own versions of the API.
> This Jira is to create {{checkNotNull}} in a new package dubbed {{unguava}}.
> +The plan is as follows+
> * create a new package {{org.apache.hadoop.util.unguava}}
> * create class {{Validate}}
> * implement {{org.apache.hadoop.util.unguava.Validate}} with the following interface:
> ** {{checkNotNull(final T obj)}}
> ** {{checkNotNull(final T reference, final Object errorMessage)}}
> ** {{checkNotNull(final T obj, final String message, final Object... values)}}
> ** {{checkNotNull(final T obj, final Supplier<String> msgSupplier)}}
> * Guava's Preconditions used {{String.lenientFormat}}, which suppressed exceptions caused by string formatting of the exception message. So, in order to avoid changing the behavior, the implementation catches exceptions triggered by building the message (IllegalFormat, InsufficientArg, NullPointer, etc.)
> * After merging the new class, we can replace {{guava.Preconditions.checkNotNull}} with {{unguava.Validate.checkNotNull}}
> * We need the change to go into trunk, 3.1, 3.2, and 3.3
>
> Similar Jiras will be created to implement checkState, checkArgument, checkIndex
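Based on the plan quoted above, here is a minimal sketch of a lenient-formatting `checkNotNull`. The class and package names are taken from the issue description, and the fallback message format is an assumption of this sketch; it is an illustration, not the code committed to Hadoop:

```java
import java.util.function.Supplier;

// Sketch of a Guava-free Preconditions.checkNotNull following the HADOOP-17126
// plan: four overloads, with lenient message formatting so that a bad format
// string never masks the NullPointerException itself.
public final class Validate {

    private Validate() {}

    public static <T> T checkNotNull(final T obj) {
        if (obj == null) {
            throw new NullPointerException();
        }
        return obj;
    }

    public static <T> T checkNotNull(final T reference, final Object errorMessage) {
        if (reference == null) {
            throw new NullPointerException(String.valueOf(errorMessage));
        }
        return reference;
    }

    // Lenient formatting: exceptions raised while building the message
    // (IllegalFormatException, NullPointerException, ...) are swallowed and
    // folded into the NPE message instead of being thrown.
    public static <T> T checkNotNull(final T obj, final String message,
                                     final Object... values) {
        if (obj == null) {
            String msg;
            try {
                msg = String.format(message, values);
            } catch (Exception e) {
                // Fallback format is an assumption of this sketch.
                msg = message + " [failed to format message: " + e + "]";
            }
            throw new NullPointerException(msg);
        }
        return obj;
    }

    public static <T> T checkNotNull(final T obj, final Supplier<String> msgSupplier) {
        if (obj == null) {
            throw new NullPointerException(
                msgSupplier == null ? null : msgSupplier.get());
        }
        return obj;
    }

    public static void main(String[] args) {
        // Non-null values pass through unchanged.
        System.out.println(Validate.checkNotNull("ok")); // prints "ok"
        // A malformed format string still produces an NPE, not an
        // IllegalFormatException from String.format.
        try {
            Validate.checkNotNull(null, "bad %q format", 1);
        } catch (NullPointerException e) {
            System.out.println("NPE: " + e.getMessage());
        }
    }
}
```

The key design point matching the issue description is that message construction is best-effort: the caller's contract violation (a null argument) always surfaces as a NullPointerException, even if the error-message arguments themselves are broken.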
[GitHub] [hadoop] amahussein commented on pull request #3050: HADOOP-17126. implement non-guava Precondition checkNotNull
amahussein commented on pull request #3050: URL: https://github.com/apache/hadoop/pull/3050#issuecomment-921055717 @steveloughran This PR was one of the steps to replace the guava dependency in common module. Can you please take a look? I remember you suggested in #2134 to change the package name.
[jira] [Work logged] (HADOOP-17891) lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
[ https://issues.apache.org/jira/browse/HADOOP-17891?focusedWorklogId=651858&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651858 ] ASF GitHub Bot logged work on HADOOP-17891: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:27 Start Date: 16/Sep/21 16:27 Worklog Time Spent: 10m Work Description: viirya commented on a change in pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#discussion_r710278549

## File path: hadoop-client-modules/hadoop-client-integration-tests/pom.xml

@@ -191,6 +191,12 @@
       <scope>test</scope>
       <type>test-jar</type>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <scope>test</scope>
+      <type>test-jar</type>
+    </dependency>

Review comment: This is type test-jar. Can we merge with type jar (line 173)?

Issue Time Tracking --- Worklog Id: (was: 651858) Time Spent: 16.5h (was: 16h 20m)

> lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
> ---
>
> Key: HADOOP-17891
> URL: https://issues.apache.org/jira/browse/HADOOP-17891
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.3.1
> Reporter: L. C. Hsieh
> Assignee: L. C. Hsieh
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
> Attachments: HADOOP-17891-Addendum-01.patch
>
> Time Spent: 16.5h
> Remaining Estimate: 0h
>
> lz4-java is a provided dependency. So in the shaded Hadoop libraries, e.g. hadoop-client-api, if we don't exclude lz4 dependency, the downstream will still see the exception even they include lz4 dependency.
[GitHub] [hadoop] viirya commented on a change in pull request #3441: HADOOP-17891. Fix compilation error under skipShade (ADDENDUM)
viirya commented on a change in pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#discussion_r710278549

## File path: hadoop-client-modules/hadoop-client-integration-tests/pom.xml

@@ -191,6 +191,12 @@
       <scope>test</scope>
       <type>test-jar</type>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <scope>test</scope>
+      <type>test-jar</type>
+    </dependency>

Review comment: This is type test-jar. Can we merge with type jar (line 173)?
[jira] [Commented] (HADOOP-17913) Filter deps with release labels
[ https://issues.apache.org/jira/browse/HADOOP-17913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416214#comment-17416214 ] Íñigo Goiri commented on HADOOP-17913: -- Thanks [~gautham] for the improvement. Merged PR 3437 to trunk.

> Filter deps with release labels
> ---
>
> Key: HADOOP-17913
> URL: https://issues.apache.org/jira/browse/HADOOP-17913
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build
> Affects Versions: 3.4.0
> Reporter: Gautham Banasandra
> Assignee: Gautham Banasandra
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> We need to add the ability to filter the dependencies listed in packages.json file based on the specified release label. This is helpful for maintaining dependencies across different releases for a platform.
[jira] [Resolved] (HADOOP-17913) Filter deps with release labels
[ https://issues.apache.org/jira/browse/HADOOP-17913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri resolved HADOOP-17913. -- Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed
[jira] [Work logged] (HADOOP-17913) Filter deps with release labels
[ https://issues.apache.org/jira/browse/HADOOP-17913?focusedWorklogId=651850&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651850 ] ASF GitHub Bot logged work on HADOOP-17913: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:19 Start Date: 16/Sep/21 16:19 Worklog Time Spent: 10m Work Description: goiri merged pull request #3437: URL: https://github.com/apache/hadoop/pull/3437 Issue Time Tracking --- Worklog Id: (was: 651850) Time Spent: 0.5h (was: 20m)
[GitHub] [hadoop] goiri merged pull request #3437: HADOOP-17913. Filter deps with release labels
goiri merged pull request #3437: URL: https://github.com/apache/hadoop/pull/3437
[jira] [Work logged] (HADOOP-17123) remove guava Preconditions from Hadoop-common module
[ https://issues.apache.org/jira/browse/HADOOP-17123?focusedWorklogId=651846&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651846 ] ASF GitHub Bot logged work on HADOOP-17123: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:17 Start Date: 16/Sep/21 16:17 Worklog Time Spent: 10m Work Description: amahussein commented on pull request #2134: URL: https://github.com/apache/hadoop/pull/2134#issuecomment-921043775 Thanks @steveloughran Some time ago, I created HADOOP-17126 "implement non-guava Precondition checkNotNull" to address your feedback suggesting to break the PR into two separate phases. Back then, I got some initial feedback but then the PR got lost and stayed pending since then. I will close this PR. Issue Time Tracking --- Worklog Id: (was: 651846) Time Spent: 40m (was: 0.5h)

> remove guava Preconditions from Hadoop-common module
> ---
>
> Key: HADOOP-17123
> URL: https://issues.apache.org/jira/browse/HADOOP-17123
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: common
> Reporter: Ahmed Hussein
> Assignee: Ahmed Hussein
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> Replace guava Preconditions by internal implementations that rely on java8+ APIs in the hadoop
[jira] [Work logged] (HADOOP-17123) remove guava Preconditions from Hadoop-common module
[ https://issues.apache.org/jira/browse/HADOOP-17123?focusedWorklogId=651847&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651847 ] ASF GitHub Bot logged work on HADOOP-17123: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:17 Start Date: 16/Sep/21 16:17 Worklog Time Spent: 10m Work Description: amahussein closed pull request #2134: URL: https://github.com/apache/hadoop/pull/2134 Issue Time Tracking --- Worklog Id: (was: 651847) Time Spent: 50m (was: 40m)
[GitHub] [hadoop] amahussein closed pull request #2134: HADOOP-17123. remove guava Preconditions from Hadoop-common module
amahussein closed pull request #2134: URL: https://github.com/apache/hadoop/pull/2134
[GitHub] [hadoop] amahussein commented on pull request #2134: HADOOP-17123. remove guava Preconditions from Hadoop-common module
amahussein commented on pull request #2134: URL: https://github.com/apache/hadoop/pull/2134#issuecomment-921043775

Thanks @steveloughran. Some time ago I created HADOOP-17126, "implement non-guava Precondition checkNotNull", to address your feedback suggesting that the PR be broken into two separate phases. I got some initial feedback back then, but the PR was then lost and has stayed pending ever since. I will close this PR.
[jira] [Work logged] (HADOOP-17916) Fix compilation error of ITUseHadoopCodecs with -DskipShade
[ https://issues.apache.org/jira/browse/HADOOP-17916?focusedWorklogId=651844&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651844 ]

ASF GitHub Bot logged work on HADOOP-17916:
---
Author: ASF GitHub Bot
Created on: 16/Sep/21 16:10
Start Date: 16/Sep/21 16:10
Worklog Time Spent: 10m

Work Description: iwasakims commented on pull request #3447: URL: https://github.com/apache/hadoop/pull/3447#issuecomment-921038209

@sunchao Oops. I did not notice the PR since it does not have a dedicated JIRA issue. I will close this as a duplicate of #3441.

Issue Time Tracking
---
Worklog Id: (was: 651844)
Time Spent: 50m (was: 40m)

> Fix compilation error of ITUseHadoopCodecs with -DskipShade
> ---
>
> Key: HADOOP-17916
> URL: https://issues.apache.org/jira/browse/HADOOP-17916
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Masatake Iwasaki
> Assignee: Masatake Iwasaki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
>
> {noformat}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile (default-testCompile) on project hadoop-client-integration-tests: Compilation failure: Compilation failure:
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[34,28] cannot find symbol
> [ERROR]   symbol: class RandomDatum
> [ERROR]   location: package org.apache.hadoop.io
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[100,16] package RandomDatum does not exist
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[100,54] package RandomDatum does not exist
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[103,7] cannot find symbol
> [ERROR]   symbol: class RandomDatum
> [ERROR]   location: class org.apache.hadoop.example.ITUseHadoopCodecs
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[104,7] cannot find symbol
> [ERROR]   symbol: class RandomDatum
> [ERROR]   location: class org.apache.hadoop.example.ITUseHadoopCodecs
> {noformat}
[jira] [Work logged] (HADOOP-17195) Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs
[ https://issues.apache.org/jira/browse/HADOOP-17195?focusedWorklogId=651842&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651842 ] ASF GitHub Bot logged work on HADOOP-17195: --- Author: ASF GitHub Bot Created on: 16/Sep/21 16:07 Start Date: 16/Sep/21 16:07 Worklog Time Spent: 10m Work Description: mukund-thakur commented on a change in pull request #3446: URL: https://github.com/apache/hadoop/pull/3446#discussion_r710229393 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/store/TestDataBlocks.java ## @@ -0,0 +1,138 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.store; + +import java.io.IOException; +import java.util.Random; + +import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.commons.io.IOUtils; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.test.LambdaTestUtils; + +import static org.apache.hadoop.fs.store.DataBlocks.DATA_BLOCKS_BUFFER_ARRAY; +import static org.apache.hadoop.fs.store.DataBlocks.DATA_BLOCKS_BUFFER_DISK; +import static org.apache.hadoop.fs.store.DataBlocks.DATA_BLOCKS_BYTEBUFFER; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; + +/** + * UTs to test {@link DataBlocks} functionalities. + */ +public class TestDataBlocks { + private final Configuration configuration = new Configuration(); + private static final int ONE_KB = 1024; + private static final Logger LOG = + LoggerFactory.getLogger(TestDataBlocks.class); + + /** + * Test to verify different DataBlocks factories, different operations. + */ + @Test + public void testDataBlocksFactory() throws Exception { +testCreateFactory(DATA_BLOCKS_BUFFER_DISK); +testCreateFactory(DATA_BLOCKS_BUFFER_ARRAY); +testCreateFactory(DATA_BLOCKS_BYTEBUFFER); + } + + /** + * Verify creation of a data block factory and it's operations. + * + * @param nameOfFactory Name of the DataBlock factory to be created. + * @throws IOException Throw IOE in case of failure while creating a block. + */ + public void testCreateFactory(String nameOfFactory) throws Exception { +LOG.info("Testing: {}", nameOfFactory); +DataBlocks.BlockFactory diskFactory = Review comment: nit: change name to blockFactory as it is not always disk ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/DataBlocks.java ## @@ -0,0 +1,1123 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.store; + +import java.io.BufferedOutputStream; +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.Closeable; +import java.io.EOFException; +import java.io.File; +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.nio.ByteBuffer; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; + +import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.commons.io.FileUtils; +import org.apache.commons.io.IOUt
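For context, HADOOP-17195 moves a common `DataBlocks` block-buffering utility into hadoop-common (with the disk, array, and bytebuffer factories the quoted TestDataBlocks exercises) so that huge uploads to ABFS need not be held in memory all at once. The sketch below illustrates only the underlying idea, bounded block-at-a-time buffering keeping memory at O(block size) rather than O(file size); the class and method names are invented for the demo and are not the Hadoop DataBlocks API:

```java
// Illustrative demo only: bounded block-at-a-time buffering in the spirit of
// the DataBlocks factories above (disk/array/bytebuffer). Class and method
// names are invented for the demo; this is not the Hadoop DataBlocks API.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BlockUploadDemo {

    public static final int BLOCK_SIZE = 4 * 1024; // kept tiny for the demo

    /**
     * Consume the stream one bounded block at a time, as a block-buffered
     * uploader would: each full (or final partial) block is handed off and
     * its buffer reused, so memory stays O(BLOCK_SIZE), not O(file size).
     */
    public static int uploadInBlocks(InputStream in) throws IOException {
        byte[] block = new byte[BLOCK_SIZE];
        int blocks = 0;
        while (in.read(block) > 0) {
            blocks++; // a real uploader would queue this block for async upload here
        }
        return blocks;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10 * BLOCK_SIZE + 100]; // just over 10 blocks of input
        System.out.println(uploadInBlocks(new ByteArrayInputStream(data))); // prints 11
    }
}
```

The disk-backed factory trades throughput for an even smaller heap footprint by spilling each block to a temporary file, which is why the test iterates over all three factory names.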
[GitHub] [hadoop] hadoop-yetus commented on pull request #3170: HDFS-16107.Split RPC configuration to isolate RPC.
hadoop-yetus commented on pull request #3170: URL: https://github.com/apache/hadoop/pull/3170#issuecomment-921026362

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 41s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 9s | | trunk passed |
| +1 :green_heart: | compile | 23m 28s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 18m 35s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 8s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 36s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 7s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 38s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 24s | | trunk passed |
| +1 :green_heart: | shadedclient | 30m 27s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 55s | | the patch passed |
| +1 :green_heart: | compile | 20m 32s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 20m 32s | | the patch passed |
| +1 :green_heart: | compile | 18m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 18m 28s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 7s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 34s | | the patch passed |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 1m 7s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 41s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 32s | | the patch passed |
| +1 :green_heart: | shadedclient | 30m 44s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 17m 13s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. |
| | | 210m 49s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3170 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
| uname | Linux f855c563b991 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 8e4239bdd19cf77e7f1dcd3c3f67ac7692f3225e |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/7/testReport/ |
| Max. process+thread count | 1251 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3170/7/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Work logged] (HADOOP-17873) ABFS: Fix transient failures in ITestAbfsStreamStatistics and ITestAbfsRestOperationException
[ https://issues.apache.org/jira/browse/HADOOP-17873?focusedWorklogId=651826&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651826 ]

ASF GitHub Bot logged work on HADOOP-17873:
---
Author: ASF GitHub Bot
Created on: 16/Sep/21 15:47
Start Date: 16/Sep/21 15:47
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3341: URL: https://github.com/apache/hadoop/pull/3341#issuecomment-921020263

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 43s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 31m 48s | | trunk passed |
| +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 30s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 41s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 35s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 1s | | trunk passed |
| +1 :green_heart: | shadedclient | 28m 17s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 28m 36s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 19s | | hadoop-tools/hadoop-azure: The patch generated 0 new + 6 unchanged - 2 fixed = 6 total (was 8) |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 1s | | the patch passed |
| +1 :green_heart: | shadedclient | 28m 13s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 1s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | 101m 39s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3341/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3341 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml spotbugs checkstyle |
| uname | Linux d7f5716b8e52 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 5f5b3325fbbdfd33a6f058c1a521122a158b8d6b |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3341/8/testReport/ |
| Max. process+thread count | 693 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3341/8/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Work logged] (HADOOP-17123) remove guava Preconditions from Hadoop-common module
[ https://issues.apache.org/jira/browse/HADOOP-17123?focusedWorklogId=651795&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651795 ]

ASF GitHub Bot logged work on HADOOP-17123:
---
Author: ASF GitHub Bot
Created on: 16/Sep/21 15:11
Start Date: 16/Sep/21 15:11
Worklog Time Spent: 10m

Work Description: steveloughran commented on pull request #2134: URL: https://github.com/apache/hadoop/pull/2134#issuecomment-920990335

revisiting this...we should get this in so it can be backported and we can keep guava out of future patches. is the name 'noguava' the right one?

1. should we try and be clever and use a name a bit like guava but different (Remember: guava is derived from java)
2. or just stick in some package under util *and declare private*, e.g `o.a.hadoop.utils.internal`. We don't want anything external using these

Issue Time Tracking
---
Worklog Id: (was: 651795)
Time Spent: 20m (was: 10m)

> remove guava Preconditions from Hadoop-common module
> ---
>
> Key: HADOOP-17123
> URL: https://issues.apache.org/jira/browse/HADOOP-17123
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: common
> Reporter: Ahmed Hussein
> Assignee: Ahmed Hussein
> Priority: Major
> Labels: pull-request-available
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Replace guava Preconditions by internal implementations that rely on java8+ APIs in the hadoop
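The HADOOP-17123 goal, replacing guava Preconditions with internal implementations that rely only on Java 8+ APIs, can be sketched in a few lines. Everything below is hypothetical: the class name, package placement, and method set are exactly the points this thread is still debating, and this is not the code from PR #2134:

```java
// Hypothetical sketch only: an internal, guava-free Preconditions in the
// spirit of HADOOP-17123/HADOOP-17126. Class name and method set are
// illustrative, not the actual Hadoop implementation.
import java.util.function.Supplier;

public final class SimplePreconditions {

    private SimplePreconditions() {
    }

    /** Guava-style checkNotNull using only java.lang: throws NPE with the message. */
    public static <T> T checkNotNull(T obj, String message) {
        if (obj == null) {
            throw new NullPointerException(message);
        }
        return obj;
    }

    /** Lazy-message variant (Java 8 Supplier): the message is only built on failure. */
    public static <T> T checkNotNull(T obj, Supplier<String> messageSupplier) {
        if (obj == null) {
            throw new NullPointerException(messageSupplier.get());
        }
        return obj;
    }

    /** Guava-style checkArgument with a String.format template. */
    public static void checkArgument(boolean expression, String template, Object... args) {
        if (!expression) {
            throw new IllegalArgumentException(String.format(template, args));
        }
    }

    public static void main(String[] args) {
        String value = checkNotNull("hello", "value must not be null");
        checkArgument(value.length() == 5, "unexpected length %d", value.length());
        System.out.println(value); // prints "hello"
    }
}
```

Keeping such a class private to Hadoop, e.g. under an `*.internal` package as option 2 above suggests, avoids the third-party coupling that has made guava upgrades painful for downstream projects.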
[GitHub] [hadoop] sunchao commented on a change in pull request #3441: HADOOP-17891. Fix compilation error under skipShade (ADDENDUM)
sunchao commented on a change in pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#discussion_r710217368

## File path: hadoop-client-modules/hadoop-client-integration-tests/pom.xml ##
@@ -191,6 +191,12 @@
        test
        test-jar
+
+      org.apache.hadoop
+      hadoop-common
+      test
+      test-jar

Review comment: nit: actually we may merge this with line 173
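The dependency fragment quoted above lost its XML tags in the plain-text archive. Restored with standard Maven `<dependency>` element names (the tag structure is inferred from Maven conventions, not copied from the actual pom.xml), the addition under review in #3441 is roughly:

```xml
<!-- Inferred reconstruction: adds the hadoop-common test-jar so that
     test-only classes such as org.apache.hadoop.io.RandomDatum resolve
     when building with -DskipShade. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <scope>test</scope>
  <type>test-jar</type>
</dependency>
```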
[jira] [Work logged] (HADOOP-17916) Fix compilation error of ITUseHadoopCodecs with -DskipShade
[ https://issues.apache.org/jira/browse/HADOOP-17916?focusedWorklogId=651800&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651800 ]

ASF GitHub Bot logged work on HADOOP-17916:
---
Author: ASF GitHub Bot
Created on: 16/Sep/21 15:14
Start Date: 16/Sep/21 15:14
Worklog Time Spent: 10m

Work Description: sunchao commented on pull request #3447: URL: https://github.com/apache/hadoop/pull/3447#issuecomment-920992397

@iwasakims this is being fixed via #3441 - do you want to leave your comment there instead? thanks.

Issue Time Tracking
---
Worklog Id: (was: 651800)
Time Spent: 40m (was: 0.5h)

> Fix compilation error of ITUseHadoopCodecs with -DskipShade
> ---
>
> Key: HADOOP-17916
> URL: https://issues.apache.org/jira/browse/HADOOP-17916
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Masatake Iwasaki
> Assignee: Masatake Iwasaki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> {noformat}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile (default-testCompile) on project hadoop-client-integration-tests: Compilation failure: Compilation failure:
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[34,28] cannot find symbol
> [ERROR]   symbol: class RandomDatum
> [ERROR]   location: package org.apache.hadoop.io
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[100,16] package RandomDatum does not exist
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[100,54] package RandomDatum does not exist
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[103,7] cannot find symbol
> [ERROR]   symbol: class RandomDatum
> [ERROR]   location: class org.apache.hadoop.example.ITUseHadoopCodecs
> [ERROR] /home/centos/srcs/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseHadoopCodecs.java:[104,7] cannot find symbol
> [ERROR]   symbol: class RandomDatum
> [ERROR]   location: class org.apache.hadoop.example.ITUseHadoopCodecs
> {noformat}
[jira] [Work logged] (HADOOP-17891) lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries
[ https://issues.apache.org/jira/browse/HADOOP-17891?focusedWorklogId=651802&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651802 ] ASF GitHub Bot logged work on HADOOP-17891: --- Author: ASF GitHub Bot Created on: 16/Sep/21 15:14 Start Date: 16/Sep/21 15:14 Worklog Time Spent: 10m Work Description: sunchao commented on a change in pull request #3441: URL: https://github.com/apache/hadoop/pull/3441#discussion_r710217368 ## File path: hadoop-client-modules/hadoop-client-integration-tests/pom.xml ## @@ -191,6 +191,12 @@ test test-jar + + org.apache.hadoop + hadoop-common + test + test-jar Review comment: nit: actually we may merge this with line 173 Issue Time Tracking --- Worklog Id: (was: 651802) Time Spent: 16h 20m (was: 16h 10m) > lz4-java and snappy-java should be excluded from relocation in shaded Hadoop > libraries > -- > > Key: HADOOP-17891 > URL: https://issues.apache.org/jira/browse/HADOOP-17891 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.1 >Reporter: L. C. Hsieh >Assignee: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2 > > Attachments: HADOOP-17891-Addendum-01.patch > > Time Spent: 16h 20m > Remaining Estimate: 0h > > lz4-java is a provided dependency. So in the shaded Hadoop libraries, e.g. > hadoop-client-api, if we don't exclude the lz4 dependency, downstream users will > still see the exception even if they include the lz4 dependency.
> {code:java} > [info] Cause: java.lang.ClassNotFoundException: > org.apache.hadoop.shaded.net.jpountz.lz4.LZ4Factory > [info] at java.net.URLClassLoader.findClass(URLClassLoader.java:382) > [info] at java.lang.ClassLoader.loadClass(ClassLoader.java:418) > [info] at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) > [info] at java.lang.ClassLoader.loadClass(ClassLoader.java:351) > [info] at > org.apache.hadoop.io.compress.lz4.Lz4Compressor.(Lz4Compressor.java:66) > [info] at > org.apache.hadoop.io.compress.Lz4Codec.createCompressor(Lz4Codec.java:119) > [info] at > org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:152) > [info] at > org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:168) > {code} > Currently snappy-java is included and relocated in the Hadoop shaded client > libraries. But as it includes native methods, it should not be relocated either, > due to JNI method resolution. Downstream users will see the exception: > {code} > [info] Cause: java.lang.UnsatisfiedLinkError: > org.apache.hadoop.shaded.org.xerial.snappy.SnappyNative.rawCompress(Ljava/nio/ByteBuffer;IILjava/nio/ByteBuffer;I)I > [info] at > org.apache.hadoop.shaded.org.xerial.snappy.SnappyNative.rawCompress(Native > Method) > [info] at > org.apache.hadoop.shaded.org.xerial.snappy.Snappy.compress(Snappy.java:151) > [info] at > org.apache.hadoop.io.compress.snappy.SnappyCompressor.compressDirectBuf(SnappyCompressor.java:282) > [info] at > org.apache.hadoop.io.compress.snappy.SnappyCompressor.compress(SnappyCompressor.java:210) > {code}
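The UnsatisfiedLinkError quoted above is inherent to how JNI binds native methods: the symbol exported by the native library is derived from the class's original package name, so a class relocated by the shade plugin asks the JVM for a symbol that was never compiled into the shared library. A small illustrative sketch — the `JniSymbolDemo` class and its simplified mangling are assumptions for demonstration, not Hadoop or snappy-java code, and overload-signature suffixes are omitted:

```java
// Illustrative sketch (not Hadoop code): why relocating a class with
// native methods breaks JNI binding.
public class JniSymbolDemo {

    /**
     * Mangle a fully-qualified class + method name the way the JNI spec
     * does (simplified: underscores escaped to "_1", dots to "_",
     * no overload-signature suffix).
     */
    public static String jniSymbol(String fqcn, String method) {
        return "Java_"
            + fqcn.replace("_", "_1").replace('.', '_')
            + "_"
            + method.replace("_", "_1");
    }

    public static void main(String[] args) {
        // Symbol the snappy native library actually exports:
        System.out.println(jniSymbol("org.xerial.snappy.SnappyNative", "rawCompress"));
        // Symbol the *relocated* class would look for -- never exported,
        // hence the UnsatisfiedLinkError quoted above:
        System.out.println(jniSymbol(
            "org.apache.hadoop.shaded.org.xerial.snappy.SnappyNative", "rawCompress"));
    }
}
```

This is why excluding such libraries from relocation (rather than trying to relocate the native code too) is the practical fix.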
[jira] [Work logged] (HADOOP-17123) remove guava Preconditions from Hadoop-common module
[ https://issues.apache.org/jira/browse/HADOOP-17123?focusedWorklogId=651799&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651799 ] ASF GitHub Bot logged work on HADOOP-17123: --- Author: ASF GitHub Bot Created on: 16/Sep/21 15:13 Start Date: 16/Sep/21 15:13 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2134: URL: https://github.com/apache/hadoop/pull/2134#issuecomment-920991665 (oh, and for merging, I'd propose two commits: one to create the new Predicate class, one to use). Why so? Makes it straightforward to cherrypick the predicate class across all old versions so that other cherrypicked code can use it, without worrying about the big merge) Issue Time Tracking --- Worklog Id: (was: 651799) Time Spent: 0.5h (was: 20m) > remove guava Preconditions from Hadoop-common module > > > Key: HADOOP-17123 > URL: https://issues.apache.org/jira/browse/HADOOP-17123 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Replace guava Preconditions by internal implementations that rely on java8+ > APIs in the hadoop
[GitHub] [hadoop] steveloughran commented on pull request #2134: HADOOP-17123. remove guava Preconditions from Hadoop-common module
steveloughran commented on pull request #2134: URL: https://github.com/apache/hadoop/pull/2134#issuecomment-920990335 revisiting this...we should get this in so it can be backported and we can keep guava out of future patches. is the name 'noguava' the right one? 1. should we try and be clever and use a name a bit like guava but different (Remember: guava is derived from java) 2. or just stick in some package under util *and declare private*, e.g `o.a.hadoop.utils.internal`. We don't want anything external using these
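As a rough illustration of the internal, JDK-only replacement the thread discusses, something like the sketch below would cover the common Guava calls. The class name, package placement (e.g. a private util package, as suggested above), and method set are assumptions for illustration, not what the PR finally adopted:

```java
import java.util.function.Supplier;

// Hypothetical sketch of a JDK-only stand-in for the Guava precondition
// calls used in hadoop-common. Names here are illustrative assumptions.
public final class Preconditions2 {

    private Preconditions2() {
    }

    /** Like Guava's checkNotNull, but the message is built lazily. */
    public static <T> T checkNotNull(T ref, Supplier<String> msg) {
        if (ref == null) {
            throw new NullPointerException(msg.get());
        }
        return ref;
    }

    /** Like Guava's checkArgument, again with a lazily built message. */
    public static void checkArgument(boolean expression, Supplier<String> msg) {
        if (!expression) {
            throw new IllegalArgumentException(msg.get());
        }
    }
}
```

Because the message is a `Supplier`, nothing is allocated on the success path, which matters for checks sitting in hot RPC code.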
[GitHub] [hadoop] hadoop-yetus commented on pull request #3448: HDFS-11045. Make TestDirectoryScanner#testThrottling() not based on timing
hadoop-yetus commented on pull request #3448: URL: https://github.com/apache/hadoop/pull/3448#issuecomment-920990250 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 23s | | trunk passed | | +1 :green_heart: | compile | 1m 34s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 4s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 31s | | trunk passed | | +1 :green_heart: | javadoc | 1m 2s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 29s | | trunk passed | | +1 :green_heart: | shadedclient | 32m 3s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 20s | | the patch passed | | +1 :green_heart: | compile | 1m 22s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 22s | | the patch passed | | +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 12s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 53s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 23 unchanged - 0 fixed = 27 total (was 23) | | +1 :green_heart: | mvnsite | 1m 20s | | the patch passed | | +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 51s | | the patch passed | | +1 :green_heart: | shadedclient | 32m 9s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 233m 32s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. 
| | | | 353m 42s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestRollingUpgrade | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3448 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux ba2eec6f4cb6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 1b74df0030ca55a1266e8d5b42f5d2f71c665739 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/1/testReport/ | | Max. process+thread count | 2922 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3448/1/console | | versions | git=2
[jira] [Commented] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416169#comment-17416169 ] Janus Chow commented on HADOOP-16254: - Added a patch based on the implementation of [^HADOOP-16254.002.patch]. Using the config of "hadoop.proxyservers" to validate the qualified proxy servers (Routers). [~hexiaoqiao] [~daryn] Could you help to check? > Add proxy address in IPC connection > --- > > Key: HADOOP-16254 > URL: https://issues.apache.org/jira/browse/HADOOP-16254 > Project: Hadoop Common > Issue Type: New Feature > Components: ipc >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HADOOP-16254.001.patch, HADOOP-16254.002.patch, > HADOOP-16254.004.patch, HADOOP-16254.005.patch > > > In order to support data locality of RBF, we need to add a new field for the > client hostname in the RPC headers of Router protocol calls. > clientHostname represents the hostname of the client and is forwarded by the Router to the > Namenode to support data locality. See more in the [RBF Data Locality > Design|https://issues.apache.org/jira/secure/attachment/12965092/RBF%20Data%20Locality%20Design.pdf] > in HDFS-13248 and the [maillist > vote|http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201904.mbox/%3CCAF3Ajax7hGxvowg4K_HVTZeDqC5H=3bfb7mv5sz5mgvadhv...@mail.gmail.com%3E].
[jira] [Comment Edited] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416169#comment-17416169 ] Janus Chow edited comment on HADOOP-16254 at 9/16/21, 3:10 PM: --- Added a patch ([^HADOOP-16254.005.patch]) based on the implementation of [^HADOOP-16254.002.patch]. Using the config of "hadoop.proxyservers" to validate the qualified proxy servers (Routers). [~hexiaoqiao] [~daryn] Could you help to check? was (Author: symious): Added a patch based on the implementation of [^HADOOP-16254.002.patch]. Using the config of "hadoop.proxyservers" to validate the qualified proxy servers (Routers). [~hexiaoqiao] [~daryn] Could you help to check?
[jira] [Updated] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janus Chow updated HADOOP-16254: Attachment: HADOOP-16254.005.patch
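A hedged sketch of what the "hadoop.proxyservers"-based validation discussed in this thread could look like: only RPC callers listed in the config are treated as qualified proxy servers allowed to forward a client hostname. The class and method names below are illustrative assumptions, not the actual patch code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of a "hadoop.proxyservers"-style check; names are
// assumptions, the real patch may be structured differently.
public class ProxyServerCheck {

    private final Set<String> proxyServers;

    /** configValue: comma-separated host list, e.g. the Router addresses. */
    public ProxyServerCheck(String configValue) {
        this.proxyServers =
            new HashSet<>(Arrays.asList(configValue.trim().split("\\s*,\\s*")));
    }

    /** True if the RPC caller is a qualified proxy server (Router). */
    public boolean isQualified(String callerHost) {
        return proxyServers.contains(callerHost);
    }

    public static void main(String[] args) {
        ProxyServerCheck check =
            new ProxyServerCheck("router1.example.com, router2.example.com");
        System.out.println(check.isQualified("router1.example.com"));
        System.out.println(check.isQualified("client.example.com"));
    }
}
```

The point of the check is that a forwarded clientHostname is only trusted when the immediate caller is a configured Router; anything else is treated as an ordinary client.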
[jira] [Updated] (HADOOP-17917) Backport HADOOP-15993 to branch-3.2 which Address CVE-2014-4611
[ https://issues.apache.org/jira/browse/HADOOP-17917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-17917: -- Description: Now the version is 0.8.2.1 and it has net.jpountz.lz4:lz4:1.2.0 dependency, which is vulnerable. ([https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4611]) cc./ [~aajisaka] was:Now the version is 0.8.2.1 and it has net.jpountz.lz4:lz4:1.2.0 dependency, which is vulnerable. ([https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4611]) > Backport HADOOP-15993 to branch-3.2 which Address CVE-2014-4611 > --- > > Key: HADOOP-17917 > URL: https://issues.apache.org/jira/browse/HADOOP-17917 > Project: Hadoop Common > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Major > > Now the version is 0.8.2.1 and it has net.jpountz.lz4:lz4:1.2.0 dependency, > which is vulnerable. > ([https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4611]) > > cc./ [~aajisaka]
[jira] [Created] (HADOOP-17917) Backport HADOOP-15993 to branch-3.2 which Address CVE-2014-4611
Brahma Reddy Battula created HADOOP-17917: - Summary: Backport HADOOP-15993 to branch-3.2 which Address CVE-2014-4611 Key: HADOOP-17917 URL: https://issues.apache.org/jira/browse/HADOOP-17917 Project: Hadoop Common Issue Type: Bug Reporter: Brahma Reddy Battula Assignee: Brahma Reddy Battula Now the version is 0.8.2.1 and it has net.jpountz.lz4:lz4:1.2.0 dependency, which is vulnerable. ([https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4611])
[jira] [Work logged] (HADOOP-17195) Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs
[ https://issues.apache.org/jira/browse/HADOOP-17195?focusedWorklogId=651731&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651731 ] ASF GitHub Bot logged work on HADOOP-17195: --- Author: ASF GitHub Bot Created on: 16/Sep/21 14:19 Start Date: 16/Sep/21 14:19 Worklog Time Spent: 10m Work Description: mehakmeet commented on pull request #3446: URL: https://github.com/apache/hadoop/pull/3446#issuecomment-920945749 CC: @steveloughran @mukund-thakur Issue Time Tracking --- Worklog Id: (was: 651731) Time Spent: 4h 40m (was: 4.5h) > Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs > --- > > Key: HADOOP-17195 > URL: https://issues.apache.org/jira/browse/HADOOP-17195 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > Labels: abfsactive, pull-request-available > Time Spent: 4h 40m > Remaining Estimate: 0h > > OutOfMemory error caused by a new ThreadPool being created each time an > AbfsOutputStream is created. Since the thread pools aren't bounded, a large > amount of data is loaded into buffers, which causes the OutOfMemory error. > Possible fixes: > - Limit the thread count while performing hdfs copyFromLocal (using the > -t option). > - Reduce OUTPUT_BUFFER_SIZE significantly, which would limit the amount of > data buffered in threads. > - Don't create a new ThreadPool each time an AbfsOutputStream is created, and > limit the number of ThreadPools each AbfsOutputStream can create.
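The last fix listed in the HADOOP-17195 description — sharing one bounded pool instead of creating an unbounded pool per output stream — can be sketched as follows. Pool sizes and class names are illustrative assumptions, not the values the ABFS patch uses:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: all output streams share one bounded pool instead
// of each creating its own. Sizes (8 threads, 32 queued tasks) are
// assumptions for demonstration only.
public class SharedUploadPool {

    // CallerRunsPolicy makes the submitting thread do the work when the
    // queue is full, applying back-pressure instead of buffering data
    // without limit -- which is what drove the OutOfMemoryError.
    private static final ExecutorService POOL = new ThreadPoolExecutor(
        8, 8, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(32),
        new ThreadPoolExecutor.CallerRunsPolicy());

    public static Future<?> submitUpload(Runnable upload) {
        return POOL.submit(upload);
    }

    public static void shutdown() {
        POOL.shutdown();
    }

    public static void main(String[] args) {
        submitUpload(() -> System.out.println("uploading block"));
        shutdown();
    }
}
```

The bounded queue is the key: once it fills, upload buffers can no longer pile up in memory faster than threads drain them.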
[jira] [Work logged] (HADOOP-13887) Encrypt S3A data client-side with AWS SDK (S3-CSE)
[ https://issues.apache.org/jira/browse/HADOOP-13887?focusedWorklogId=651724&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651724 ] ASF GitHub Bot logged work on HADOOP-13887: --- Author: ASF GitHub Bot Created on: 16/Sep/21 14:17 Start Date: 16/Sep/21 14:17 Worklog Time Spent: 10m Work Description: mehakmeet commented on pull request #3292: URL: https://github.com/apache/hadoop/pull/3292#issuecomment-920943468 javac error seems unrelated to the patch and check styles is as discussed indentations. Issue Time Tracking --- Worklog Id: (was: 651724) Time Spent: 13h 20m (was: 13h 10m) > Encrypt S3A data client-side with AWS SDK (S3-CSE) > -- > > Key: HADOOP-13887 > URL: https://issues.apache.org/jira/browse/HADOOP-13887 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Jeeyoung Kim >Assignee: Mehakmeet Singh >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, > HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, > HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, > HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, > HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, > HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, > HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch, S3-CSE Proposal.pdf > > Time Spent: 13h 20m > Remaining Estimate: 0h > > Expose the client-side encryption option documented in Amazon S3 > documentation - > http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html > When
backporting, include HADOOP-17817
[GitHub] [hadoop] hadoop-yetus commented on pull request #3429: HDFS-16227. De-flake TestMover#testMoverWithStripedFile
hadoop-yetus commented on pull request #3429: URL: https://github.com/apache/hadoop/pull/3429#issuecomment-920943127 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 9s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 43s | | trunk passed | | +1 :green_heart: | compile | 1m 26s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 59s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed | | +1 :green_heart: | javadoc | 0m 59s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 23s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 36s | | trunk passed | | +1 :green_heart: | shadedclient | 38m 38s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 23s | | the patch passed | | +1 :green_heart: | compile | 1m 27s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 27s | | the patch passed | | +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 13s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 52s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 33 unchanged - 1 fixed = 33 total (was 34) | | +1 :green_heart: | mvnsite | 1m 32s | | the patch passed | | +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 30s | | the patch passed | | +1 :green_heart: | shadedclient | 36m 30s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 356m 45s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. | | | | 491m 53s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3429 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 69d7c950d9db 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 3ddebe03089a1a01d32dc89ecce42e9c02809206 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test 
Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/5/testReport/ | | Max. process+thread count | 2050 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3429/5/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[jira] [Work logged] (HADOOP-17873) ABFS: Fix transient failures in ITestAbfsStreamStatistics and ITestAbfsRestOperationException
[ https://issues.apache.org/jira/browse/HADOOP-17873?focusedWorklogId=651707&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-651707 ]

ASF GitHub Bot logged work on HADOOP-17873:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 16/Sep/21 14:06
Start Date: 16/Sep/21 14:06
Worklog Time Spent: 10m

Work Description: sumangala-patki commented on a change in pull request #3341:
URL: https://github.com/apache/hadoop/pull/3341#discussion_r710153843

File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/oauth2/RetryTestTokenProvider.java

@@ -30,30 +30,34 @@
  */
 public class RetryTestTokenProvider implements CustomTokenProviderAdaptee {

-  // Need to track first token fetch otherwise will get counted as a retry too.
-  private static boolean isThisFirstTokenFetch = true;
-  public static int reTryCount = 0;
+  private static final Logger LOG = LoggerFactory.getLogger(
+      RetryTestTokenProvider.class);

-  private static final Logger LOG = LoggerFactory
-      .getLogger(RetryTestTokenProvider.class);
+  // Need to track first token fetch otherwise will get counted as a retry too.
+  private boolean isThisFirstTokenFetch = true;
+  private int retryCount = 0;

   @Override
   public void initialize(Configuration configuration, String accountName)
       throws IOException {
   }

-  public static void ResetStatusToFirstTokenFetch() {
+  /**
+   * Clear earlier retry details and reset RetryTestTokenProvider instance to
+   * state of first access token fetch call

Review comment: added

When parallel is set to "classes" (instead of "both"), the StreamStats test passes even with the failure scenario induced by the dummy test, as the two tests run sequentially in the same process. However, there might be occasional failures if a different class reads/writes simultaneously in a different process; could not reproduce that failure, though.

Have made a minor correction to the pom file; the test will be excluded from the parallel run (currently classesAndMethods/both) and executed separately along with a group of other integration tests that need to be run sequentially.

Issue Time Tracking
-------------------
Worklog Id: (was: 651707)
Time Spent: 5h (was: 4h 50m)

> ABFS: Fix transient failures in ITestAbfsStreamStatistics and
> ITestAbfsRestOperationException
> ----------------------------------------------------------------
>
> Key: HADOOP-17873
> URL: https://issues.apache.org/jira/browse/HADOOP-17873
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.1
> Reporter: Sumangala Patki
> Assignee: Sumangala Patki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 5h
> Remaining Estimate: 0h
>
> To address transient failures in the following test classes:
> * ITestAbfsStreamStatistics: Uses a filesystem-level instance to record
> read/write statistics, which also tracks these operations in other tests
> running in parallel. To be marked for sequential run only to avoid
> transient failures.
> * ITestAbfsRestOperationException: The use of a static member to track retry
> count causes transient failures when two tests of this class happen to run
> together. Switch to a non-static variable for assertions on retry count.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
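The static-to-instance change discussed above can be illustrated with a minimal, self-contained sketch. The class and method names below are hypothetical, not the actual hadoop-azure test code: the point is that with instance fields, two tests that each construct their own provider can no longer corrupt each other's retry counts the way a shared static counter allows.

```java
// Sketch of the instance-field fix mirroring the RetryTestTokenProvider
// change. Every fetch after the first one counts as a retry; because the
// state is per-instance, parallel tests using separate instances do not
// interfere with one another.
class RetryCountingProvider {
    // Need to track the first token fetch, otherwise it would be
    // counted as a retry too (same rationale as the original comment).
    private boolean firstFetch = true;
    private int retryCount = 0;

    void fetchToken() {
        if (firstFetch) {
            firstFetch = false;
        } else {
            retryCount++;
        }
    }

    int getRetryCount() {
        return retryCount;
    }

    public static void main(String[] args) {
        RetryCountingProvider testA = new RetryCountingProvider();
        RetryCountingProvider testB = new RetryCountingProvider();
        // Two "tests" fetch tokens concurrently; with static fields,
        // testB's fetch would have inflated testA's retry count.
        testA.fetchToken();
        testA.fetchToken(); // one retry for A
        testB.fetchToken(); // first fetch for B, zero retries
        System.out.println(testA.getRetryCount() + " " + testB.getRetryCount());
        // prints "1 0"
    }
}
```

With the original static `reTryCount`, the printed result would depend on how many provider calls any other concurrently running test had made, which is exactly the transient failure the patch removes.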
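The pom correction described above (pulling one test out of the parallel run and executing it sequentially) is typically expressed as two Surefire/Failsafe executions. The fragment below is a sketch under assumed execution ids and thread counts, not the actual hadoop-azure pom:

```xml
<!-- Sketch only: most integration tests run in parallel, while the
     excluded test runs in a separate, sequential execution. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <id>integration-test-parallel</id>
      <goals><goal>integration-test</goal></goals>
      <configuration>
        <parallel>classesAndMethods</parallel>
        <threadCount>8</threadCount>
        <excludes>
          <exclude>**/ITestAbfsStreamStatistics.java</exclude>
        </excludes>
      </configuration>
    </execution>
    <execution>
      <id>integration-test-sequential</id>
      <goals><goal>integration-test</goal></goals>
      <configuration>
        <includes>
          <include>**/ITestAbfsStreamStatistics.java</include>
        </includes>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Keeping the sequential tests in their own execution preserves the speed of the parallel run while guaranteeing that tests relying on filesystem-level statistics see no concurrent reads or writes from the same JVM.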
[GitHub] [hadoop] hadoop-yetus commented on pull request #3440: ABFS: Support for Encryption Context
hadoop-yetus commented on pull request #3440:
URL: https://github.com/apache/hadoop/pull/3440#issuecomment-920926715

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 1m 6s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 34m 36s | | trunk passed |
| +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 24s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 37s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 0s | | trunk passed |
| +1 :green_heart: | shadedclient | 34m 18s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| -1 :x: | mvninstall | 0m 17s | [/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/2/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt) | hadoop-azure in the patch failed. |
| -1 :x: | compile | 0m 18s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-azure in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. |
| -1 :x: | javac | 0m 18s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-azure in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. |
| -1 :x: | compile | 0m 16s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-azure in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. |
| -1 :x: | javac | 0m 16s | [/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-azure in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 16s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 8 new + 7 unchanged - 0 fixed = 15 total (was 7) |
| -1 :x: | mvnsite | 0m 18s | [/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/2/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt) | hadoop-azure in the patch failed. |
| -1 :x: | javadoc | 0m 21s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/2/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) |
| -1 :x: | javadoc | 0m 19s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3440/2/artifact/out/results-java