[jira] [Resolved] (HDFS-17059) Optimize chooseReplicaToDelete method to prevent from calling pickupReplicaSet repeatedly
[ https://issues.apache.org/jira/browse/HDFS-17059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

farmmamba resolved HDFS-17059.
------------------------------
    Resolution: Not A Problem

> Optimize chooseReplicaToDelete method to prevent from calling pickupReplicaSet repeatedly
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-17059
>                 URL: https://issues.apache.org/jira/browse/HDFS-17059
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Trivial
>              Labels: pull-request-available
>
> We should optimize the chooseReplicaToDelete method to prevent it from calling pickupReplicaSet repeatedly, just like HDFS-17053.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Re: [VOTE] Release Apache Hadoop 3.3.6 RC1
Thanks all! The vote passed with 6 binding +1 votes, no +0 or -1 votes, and 4 non-binding +1 votes. Publishing the release bits and updating the webpage and user docs now.

Thanks to the binding votes from Ayush, Xiaoqiao, Sammi, Mukund, Masatake and the non-binding votes from Nilotpal, Viraj, Stephen, George and Ahmar.

On Fri, Jun 23, 2023 at 11:48 PM Ayush Saxena wrote:
> +1 (Binding)
>
> * Built from source (x86 & Arm)
> * Successful native build on ubuntu 18.04 (x86) & ubuntu 20.04 (Arm)
> * Verified checksums (x86 & Arm)
> * Verified signatures (x86 & Arm)
> * Successful RAT check (x86 & Arm)
> * Verified the diff b/w the tag & the source tar
> * Built Ozone with 3.3.6, green build after a retrigger due to some OOM issues [1]
> * Built Tez with 3.3.6, green build [2]
> * Ran basic HDFS shell commands (Fs Operations/EC/RBF/StoragePolicy/Snapshots) (x86 & Arm)
> * Ran some basic Yarn shell commands
> * Browsed through the UI (NN, DN, RM, NM, JHS) (x86 & Arm)
> * Ran some example jobs (TeraGen, TeraSort, TeraValidate, WordCount, WordMean, Pi) (x86 & Arm)
> * Verified the output of `hadoop version` (x86 & Arm)
> * Ran some HDFS unit tests around FsOperations/EC/Observer Read/RBF/SPS
> * Skimmed over the contents of the site jar
> * Skimmed over the staging repo
> * Checked the NOTICE & Licence files
>
> Thanx Wei-Chiu for driving the release, Good Luck!!!
>
> -Ayush
>
> [1] https://github.com/ayushtkn/hadoop-ozone/actions/runs/5282707769
> [2] https://github.com/apache/tez/pull/285#issuecomment-1590962978
>
> On Sat, 24 Jun 2023 at 09:43, Nilotpal Nandi wrote:
>
>> +1 (Non-binding).
>> Thanks a lot Wei-Chiu for driving it.
>>
>> Thanks,
>> Nilotpal Nandi
>>
>> On 2023/06/23 21:51:56 Wei-Chiu Chuang wrote:
>> > +1 (binding)
>> >
>> > Note: according to the Hadoop bylaws, a release vote is open for 5 days, not 7 days. So technically the time is almost up.
>> > https://hadoop.apache.org/bylaws#Decision+Making
>> >
>> > If you plan to cast a vote, please do so soon.
>> > In the meantime, I'll start to prepare to wrap up the release work.
>> >
>> > On Fri, Jun 23, 2023 at 6:09 AM Xiaoqiao He wrote:
>> >
>> > > +1 (binding)
>> > >
>> > > * Verified signature and checksum of all source tarballs.
>> > > * Built source code on Ubuntu and OpenJDK 11 by `mvn clean package -DskipTests -Pnative -Pdist -Dtar`.
>> > > * Set up a pseudo cluster with HDFS and YARN.
>> > > * Ran simple FsShell commands - mkdir/put/get/mv/rm - and checked the results.
>> > > * Ran example MR applications and checked the results - Pi & wordcount.
>> > > * Checked the web UI of NameNode/DataNode/ResourceManager/NodeManager etc.
>> > > * Checked git and JIRA using the dev-support tool `git_jira_fix_version_check.py`.
>> > >
>> > > Thanks Wei-Chiu for your work.
>> > >
>> > > NOTE: I believe the build fatal error I reported above is only related to my own environment.
>> > >
>> > > Best Regards,
>> > > - He Xiaoqiao
>> > >
>> > > On Thu, Jun 22, 2023 at 4:17 PM Chen Yi wrote:
>> > >
>> > > > Thanks Wei-Chiu for leading this effort!
>> > > >
>> > > > +1 (Binding)
>> > > >
>> > > > + Verified the signature and checksum of all tarballs.
>> > > > + Started a web server and viewed the documentation site.
>> > > > + Built from the source tarball on macOS 12.3 and OpenJDK 8.
>> > > > + Launched a pseudo-distributed cluster using the released binary packages and performed some basic HDFS dir/file operations.
>> > > > + Ran grep, pi and wordcount MR tasks on the pseudo cluster.
>> > > >
>> > > > Bests,
>> > > > Sammi Chen
>> > > >
>> > > > From: Wei-Chiu Chuang
>> > > > Sent: June 19, 2023, 8:52
>> > > > To: Hadoop Common; Hdfs-dev <hdfs-dev@hadoop.apache.org>; yarn-dev; mapreduce-dev
>> > > > Subject: [VOTE] Release Apache Hadoop 3.3.6 RC1
>> > > >
>> > > > I am inviting anyone to try and vote on this release candidate.
>> > > >
>> > > > Note:
>> > > > This is exactly the same as RC0, except for the CHANGELOG.
>> > > >
>> > > > The RC is available at:
>> > > > https://home.apache.org/~weichiu/hadoop-3.3.6-RC1-amd64/ (for amd64)
>> > > > https://home.apache.org/~weichiu/hadoop-3.3.6-RC1-arm64/ (for arm64)
>> > > >
>> > > > Git tag: release-3.3.6-RC1
>> > > > https://github.com/apache/hadoop/releases/tag/release-3.3.6-RC1
>> > > >
>> > > > Maven artifacts are built by an x86 machine and are staged at
>> > > > https://repository.apache.org/content/repositories/orgapachehadoop-1380/
>> > > >
>> > > > My public key:
>> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> > > >
>> > > > Changelog:
>> > > > https://home.apache.org/~weichiu/hadoop-3.3.6-RC1-amd64/CHANGELOG.md
>> > > >
>> > > > Release notes:
>> > > > https://home.apache.org/~weichiu/hadoop-3.3.6-RC1-amd64/RELEASENOTES.md
>> > > >
>> > > > This is a relatively small release (by Hadoop standards) containing about 120 commits.
>> > > > Please give it a try, this RC vote will run for 7
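Several voters above report verifying the checksums of the release tarballs. As a rough illustration (not any official Hadoop or Apache tooling; the class and method names here are made up for the example), that check amounts to recomputing the SHA-512 digest of the downloaded file and comparing it with the published hex value:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Hypothetical sketch of release-checksum verification: recompute SHA-512
// over the artifact bytes and compare against the published .sha512 value.
public class ChecksumVerifier {
    static String sha512Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-512");
            return HexFormat.of().formatHex(md.digest(data));
        } catch (NoSuchAlgorithmException e) {
            // SHA-512 is mandated by the JDK spec, so this is unreachable in practice.
            throw new IllegalStateException("SHA-512 unavailable", e);
        }
    }

    // Published checksum files may use upper- or lower-case hex, so compare case-insensitively.
    static boolean matches(byte[] data, String expectedHex) {
        return sha512Hex(data).equalsIgnoreCase(expectedHex.trim());
    }
}
```

For a real tarball one would stream the file through the digest rather than load it into memory, but the comparison logic is the same.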
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/510/

[Jun 23, 2023, 11:09:23 AM] (github) YARN-11513: Applications submitted to ambiguous queue fail during recovery if "Specified" Placement Rule is used (#5748)
[Jun 23, 2023, 2:45:51 PM] (github) YARN-11498. Exclude jettison from jersey-json artifact as an older version is being pulled (#5623)
[Jun 23, 2023, 6:40:03 PM] (Szilard Nemeth) MAPREDUCE-7441. Fix race condition in closing FadvisedFileRegion. Contributed by Benjamin Teke

-1 overall

The following subsystems voted -1:
    blanks hadolint mvnsite pathlen spotbugs unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML - Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs - module:hadoop-hdfs-project/hadoop-hdfs:
        Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) - at DataStorage.java:[line 695]
        Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) - at MappableBlockLoader.java:[line 138]
        Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) - at MemoryMappableBlockLoader.java:[line 75]
        Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) - at NativePmemMappableBlockLoader.java:[line 85]
        Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String) - at NativePmemMappableBlockLoader.java:[line 130]
        org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) - at RollingWindowManager.java:[line 1]

    spotbugs - module:hadoop-yarn-project/hadoop-yarn:
        Redundant nullcheck of it, which is known to be non-null in
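The "redundant nullcheck of a value known to be non-null" findings above are a standard SpotBugs pattern (the RCN family of warnings). As an illustrative example only (this is not the actual Hadoop code being flagged), the shape of the offending code is a null check on a variable the compiler can already prove non-null, e.g. inside try-with-resources:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Illustrative sketch of the pattern SpotBugs reports above.
public class RedundantNullCheck {
    static int readFirstByte(byte[] data) {
        try (InputStream in = new ByteArrayInputStream(data)) {
            // A flagged variant would guard with `if (in != null)` here.
            // try-with-resources only enters the body after `in` has been
            // initialized to a non-null value, so such a check is dead code
            // and SpotBugs reports it as a redundant nullcheck.
            return in.read();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The fix is simply to delete the dead null check (or restructure so the variable really can be null where it is tested).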
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1069/

No changes

ERROR: File 'out/email-report.txt' does not exist
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/

No changes

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML - Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    Failed junit tests:
        hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
        hadoop.mapreduce.v2.TestUberAM
        hadoop.mapreduce.v2.TestMRJobsWithProfiler
        hadoop.mapreduce.v2.TestMRJobs

    cc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/results-compile-cc-root.txt [96K]
    javac: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/results-compile-javac-root.txt [12K]
    blanks: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/blanks-eol.txt [14M]
            https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/blanks-tabs.txt [2.0M]
    checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/results-checkstyle-root.txt [13M]
    hadolint: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/results-hadolint.txt [20K]
    pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/results-pathlen.txt [16K]
    pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/results-pylint.txt [20K]
    shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/results-shellcheck.txt [24K]
    xml: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/xml.txt [24K]
    javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/results-javadoc-javadoc-root.txt [244K]
    unit: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [24K]
          https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1268/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [72K]

Powered by Apache Yetus 0.14.0-SNAPSHOT
https://yetus.apache.org
[jira] [Created] (HDFS-17060) BlockPlacementPolicyDefault#chooseReplicaToDelete should consider datanode load
farmmamba created HDFS-17060:
--------------------------------

             Summary: BlockPlacementPolicyDefault#chooseReplicaToDelete should consider datanode load
                 Key: HDFS-17060
                 URL: https://issues.apache.org/jira/browse/HDFS-17060
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: farmmamba

When choosing extra replicas for deletion, we should consider datanode load as well.
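One plausible reading of this proposal is to break ties among over-replicated candidates by deleting the replica on the busiest datanode. As a minimal, hypothetical sketch (the class, method, and the use of an xceiver count as the load signal are illustrative assumptions, not the actual BlockPlacementPolicyDefault code):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: among candidate excess replicas, prefer deleting the
// one hosted on the most loaded datanode, so the deletion relieves a hot node.
public class LoadAwareDeletion {
    static String chooseReplicaToDelete(List<String> candidates,
                                        Map<String, Integer> xceiverCount) {
        // Highest load first; nodes missing from the map count as idle.
        return candidates.stream()
                .max(Comparator.comparingInt(d -> xceiverCount.getOrDefault(d, 0)))
                .orElse(null);
    }
}
```

A real policy would weigh load against the existing criteria (least free space, stale storage, rack spread) rather than using it alone.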
[jira] [Created] (HDFS-17059) Optimize chooseReplicaToDelete method to prevent from calling pickupReplicaSet repeatedly
farmmamba created HDFS-17059:
--------------------------------

             Summary: Optimize chooseReplicaToDelete method to prevent from calling pickupReplicaSet repeatedly
                 Key: HDFS-17059
                 URL: https://issues.apache.org/jira/browse/HDFS-17059
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: farmmamba

We should optimize the chooseReplicaToDelete method to prevent it from calling pickupReplicaSet repeatedly, just like HDFS-17053.
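The optimization being proposed is the generic pattern of hoisting a loop-invariant call out of a loop. As a minimal sketch under that assumption (the class and methods below are made-up stand-ins, not the actual BlockPlacementPolicyDefault code), the expensive pickupReplicaSet-style selection is computed once per round instead of once per candidate:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of hoisting an expensive, loop-invariant
// pickupReplicaSet-style call out of the per-replica loop.
public class ReplicaChooser {
    static int pickupCalls = 0; // counts how often the expensive step runs

    // Stand-in for an expensive candidate-set computation.
    static List<String> pickupReplicaSet(List<String> replicas) {
        pickupCalls++;
        return new ArrayList<>(replicas);
    }

    // Before: recomputes the candidate set on every iteration.
    static String chooseNaive(List<String> replicas) {
        String chosen = null;
        for (String r : replicas) {
            List<String> candidates = pickupReplicaSet(replicas); // repeated call
            if (candidates.contains(r)) chosen = r;
        }
        return chosen;
    }

    // After: the invariant call is hoisted and runs exactly once.
    static String chooseOptimized(List<String> replicas) {
        List<String> candidates = pickupReplicaSet(replicas); // computed once
        String chosen = null;
        for (String r : replicas) {
            if (candidates.contains(r)) chosen = r;
        }
        return chosen;
    }
}
```

Both variants pick the same replica; only the number of calls to the expensive step changes (n versus 1 for n candidates).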
[jira] [Created] (HDFS-17058) Some statements in testChooseReplicaToDelete method seems useless
farmmamba created HDFS-17058:
--------------------------------

             Summary: Some statements in testChooseReplicaToDelete method seems useless
                 Key: HDFS-17058
                 URL: https://issues.apache.org/jira/browse/HDFS-17058
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: farmmamba

The snippet below in testChooseReplicaToDelete() seems to have no effect; we can drop it.

{code:java}
storages[4].setRemainingForTests(100 * 1024 * 1024);
{code}