[jira] [Resolved] (HDFS-16540) Data locality is lost when DataNode pod restarts in kubernetes
[ https://issues.apache.org/jira/browse/HDFS-16540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack resolved HDFS-16540.
----------------------------------
    Hadoop Flags: Reviewed
      Resolution: Fixed

Merged to branch-3.3 and to trunk.

> Data locality is lost when DataNode pod restarts in kubernetes
> ---------------------------------------------------------------
>
>                 Key: HDFS-16540
>                 URL: https://issues.apache.org/jira/browse/HDFS-16540
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.3.2
>            Reporter: Huaxiang Sun
>            Assignee: Huaxiang Sun
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.3.4
>
>          Time Spent: 7h
>  Remaining Estimate: 0h
>
> We have an HBase RegionServer and an HDFS DataNode running in one pod. When the pod
> restarts, we found that data locality is lost after we do a major compaction of the
> HBase regions. After some debugging, we found that upon pod restart, the pod's IP
> changes. In DatanodeManager, maps such as the network topology are updated with the
> new information, but host2DatanodeMap is not updated accordingly. When an HDFS client
> with the new IP tries to find a local DataNode, the lookup fails.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
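The crux of the issue above is that DatanodeManager keeps an IP-keyed lookup map (host2DatanodeMap) that must be refreshed, together with the network topology, when a DataNode re-registers under a new address. What follows is a minimal, self-contained Java sketch of that idea; the class, record and field names are invented for illustration, and this is not the actual DatanodeManager/Host2NodesMap code nor the committed patch.

import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of the stale-mapping problem behind HDFS-16540: when a
 * DataNode re-registers with a new IP, every structure keyed by the old IP has
 * to be refreshed, or lookups by the new address miss and locality is lost.
 */
public class Host2DatanodeMapSketch {

  /** Simplified stand-in for a DatanodeDescriptor: identified by UUID, addressed by IP. */
  record DatanodeInfo(String uuid, String ipAddr) {}

  /** Simplified stand-in for DatanodeManager's host2DatanodeMap (IP -> datanode). */
  private final Map<String, DatanodeInfo> host2DatanodeMap = new HashMap<>();
  /** Simplified stand-in for the UUID-keyed registration map. */
  private final Map<String, DatanodeInfo> datanodeMap = new HashMap<>();

  /** Registration that refreshes the IP-keyed map as well as the UUID-keyed map. */
  void registerDatanode(DatanodeInfo node) {
    DatanodeInfo previous = datanodeMap.put(node.uuid(), node);
    if (previous != null && !previous.ipAddr().equals(node.ipAddr())) {
      // Without this removal the entry for the old pod IP lingers and the new
      // IP is never resolvable, so clients on the restarted pod lose locality.
      host2DatanodeMap.remove(previous.ipAddr());
    }
    host2DatanodeMap.put(node.ipAddr(), node);
  }

  /** What a local-DataNode lookup by client address boils down to. */
  DatanodeInfo getDatanodeByHost(String ipAddr) {
    return host2DatanodeMap.get(ipAddr);
  }

  public static void main(String[] args) {
    Host2DatanodeMapSketch dm = new Host2DatanodeMapSketch();
    dm.registerDatanode(new DatanodeInfo("dn-uuid-1", "10.0.0.5"));   // first start
    dm.registerDatanode(new DatanodeInfo("dn-uuid-1", "10.0.0.42"));  // pod restart, new IP
    System.out.println(dm.getDatanodeByHost("10.0.0.42")); // DatanodeInfo[uuid=dn-uuid-1, ipAddr=10.0.0.42]
    System.out.println(dm.getDatanodeByHost("10.0.0.5"));  // null once the stale entry is removed
  }
}

Running main prints the descriptor for the new IP and null for the old one, which is the lookup behaviour the fix restores for local reads.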
Re: [VOTE] Release Apache Hadoop 3.3.3 (RC1)
+1 (binding)

* Verified signature and checksum of the source tarball.
* Built the source code on Ubuntu and OpenJDK 11 by `mvn clean package -DskipTests -Pnative -Pdist -Dtar`.
* Set up a pseudo cluster with HDFS and YARN.
* Ran simple FsShell commands - mkdir/put/get/mv/rm - and checked the results.
* Ran example MR applications - Pi & wordcount - and checked the results.
* Checked the Web UI of NameNode/DataNode/ResourceManager/NodeManager etc.

Thanks Steve for your work.

- He Xiaoqiao

On Mon, May 16, 2022 at 4:25 AM Viraj Jasani wrote:
>
> +1 (non-binding)
>
> * Signature: ok
> * Checksum : ok
> * Rat check (1.8.0_301): ok
>  - mvn clean apache-rat:check
> * Built from source (1.8.0_301): ok
>  - mvn clean install -DskipTests
> * Built tar from source (1.8.0_301): ok
>  - mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true
>
> HDFS, MapReduce and HBase (2.5) CRUD functional testing on
> pseudo-distributed mode looks good.
>
>
> On Wed, May 11, 2022 at 10:26 AM Steve Loughran wrote:
>
> > I have put together a release candidate (RC1) for Hadoop 3.3.3
> >
> > The RC is available at:
> > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/
> >
> > The git tag is release-3.3.3-RC1, commit d37586cbda3
> >
> > The maven artifacts are staged at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1349/
> >
> > You can find my public key at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > Change log
> > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/CHANGELOG.md
> >
> > Release notes
> > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/RELEASENOTES.md
> >
> > There's a very small number of changes, primarily critical code/packaging
> > issues and security fixes.
> >
> > * The critical fixes which shipped in the 3.2.3 release.
> > * CVEs in our code and dependencies
> > * Shaded client packaging issues.
> > * A switch from log4j to reload4j
> >
> > reload4j is an active fork of the log4j 1.2.17 library with the classes
> > which contain CVEs removed. Even though hadoop never used those classes,
> > they regularly raised alerts on security scans and concern from users.
> > Switching to the forked project allows us to ship a secure logging
> > framework. It will complicate the builds of downstream
> > maven/ivy/gradle projects which exclude our log4j artifacts, as they
> > need to cut the new dependency instead/as well.
> >
> > See the release notes for details.
> >
> > This is the second release attempt. It is the same git commit as before, but
> > fully recompiled with another republish to maven staging, which has been
> > verified by building spark, as well as a minimal test project.
> >
> > Please try the release and vote. The vote will run for 5 days.
> >
> > -Steve
> >

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
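Since reload4j keeps the org.apache.log4j package and class names, a downstream build that adjusts its exclusions can quickly confirm which jar it actually resolved. Below is a minimal, hypothetical Java probe (the class name LoggingJarProbe is invented for illustration and is not part of Hadoop); it simply reports the location of whichever artifact provides the log4j 1.x API on the classpath.

import java.security.CodeSource;

/**
 * Hypothetical classpath probe: prints which jar provides the log4j 1.x API
 * classes, so a downstream project can check whether it resolved reload4j or
 * the original log4j after the dependency switch described above.
 */
public class LoggingJarProbe {
  public static void main(String[] args) throws Exception {
    // org.apache.log4j.Logger is present in both log4j 1.2.x and reload4j,
    // because reload4j is a drop-in fork that keeps the same class names.
    Class<?> loggerClass = Class.forName("org.apache.log4j.Logger");
    CodeSource source = loggerClass.getProtectionDomain().getCodeSource();
    // e.g. file:/.../ch/qos/reload4j/reload4j/<version>/reload4j-<version>.jar
    System.out.println(source == null ? "bootstrap classpath" : source.getLocation());
  }
}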
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/303/ [May 13, 2022 6:28:53 AM] (noreply) HDFS-14750. RBF: Support dynamic handler allocation in routers (#4199) [May 13, 2022 6:44:41 AM] (noreply) Revert "HDFS-14750. RBF: Support dynamic handler allocation in routers (#4199)" (#4306) [May 13, 2022 11:16:12 AM] (Benjamin Teke) YARN-11123. ResourceManager webapps test failures due to org.apache.hadoop.metrics2.MetricsException and subsequent java.net.BindException: Address already in use. Contributed by Szilard Nemeth [May 13, 2022 4:11:42 PM] (noreply) YARN-11073. Avoid unnecessary preemption for tiny queues under certain corner cases (#4110) [May 13, 2022 4:34:19 PM] (noreply) MAPREDUCE-7377. Remove unused imports in MapReduce project (#4299) [May 13, 2022 4:41:06 PM] (noreply) YARN-10080. Support show app id on localizer thread pool (#4283) -1 overall The following subsystems voted -1: blanks pathlen spotbugs unit xml The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit Specific tests: XML : Parsing Error(s): hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml spotbugs : module:hadoop-hdfs-project/hadoop-hdfs Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) Redundant null check at DataStorage.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) Redundant null check at DataStorage.java:[line 695] Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) Redundant null check at MappableBlockLoader.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) Redundant null check at MappableBlockLoader.java:[line 138] Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at MemoryMappableBlockLoader.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at MemoryMappableBlockLoader.java:[line 75] Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at NativePmemMappableBlockLoader.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at NativePmemMappableBlockLoader.java:[line 85] Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String) Redundant null check at NativePmemMappableBlockLoader.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String) Redundant null check at NativePmemMappableBlockLoader.java:[line 130] org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) At RollingWindowManager.java:At RollingWindowManager.java:[line
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/ No changes -1 overall The following subsystems voted -1: blanks pathlen unit xml The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit Specific tests: XML : Parsing Error(s): hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml Failed junit tests : hadoop.mapred.TestLocalDistributedCacheManager hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor cc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/results-compile-cc-root.txt [96K] javac: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/results-compile-javac-root.txt [340K] blanks: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/blanks-eol.txt [13M] https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/blanks-tabs.txt [2.0M] checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/results-checkstyle-root.txt [14M] pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/results-pathlen.txt [16K] pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/results-pylint.txt [20K] shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/results-shellcheck.txt [28K] xml: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/xml.txt [24K] javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/results-javadoc-javadoc-root.txt [400K] unit: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-common.txt [48K] https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/870/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt [20K] Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Re: [VOTE] Release Apache Hadoop 3.3.3 (RC1)
+1 (non-binding)

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_301): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_301): ok
 - mvn clean install -DskipTests
* Built tar from source (1.8.0_301): ok
 - mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

HDFS, MapReduce and HBase (2.5) CRUD functional testing on
pseudo-distributed mode looks good.


On Wed, May 11, 2022 at 10:26 AM Steve Loughran wrote:

> I have put together a release candidate (RC1) for Hadoop 3.3.3
>
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/
>
> The git tag is release-3.3.3-RC1, commit d37586cbda3
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1349/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Change log
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/CHANGELOG.md
>
> Release notes
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/RELEASENOTES.md
>
> There's a very small number of changes, primarily critical code/packaging
> issues and security fixes.
>
> * The critical fixes which shipped in the 3.2.3 release.
> * CVEs in our code and dependencies
> * Shaded client packaging issues.
> * A switch from log4j to reload4j
>
> reload4j is an active fork of the log4j 1.2.17 library with the classes
> which contain CVEs removed. Even though hadoop never used those classes,
> they regularly raised alerts on security scans and concern from users.
> Switching to the forked project allows us to ship a secure logging
> framework. It will complicate the builds of downstream
> maven/ivy/gradle projects which exclude our log4j artifacts, as they
> need to cut the new dependency instead/as well.
>
> See the release notes for details.
>
> This is the second release attempt. It is the same git commit as before, but
> fully recompiled with another republish to maven staging, which has been
> verified by building spark, as well as a minimal test project.
>
> Please try the release and vote. The vote will run for 5 days.
>
> -Steve
>
Re: [VOTE] Release Apache Hadoop 3.3.3 (RC1)
+1 (binding)

* verified signature and checksum of the source tarball.
* built the source code on Rocky Linux 8 (x86_64) and OpenJDK 8 by `mvn install -DskipTests -Pnative -Pdist`.
* launched a pseudo-distributed cluster with Kerberos security enabled and ran sample MR jobs.
* launched an HA-enabled 3-node docker cluster and ran sample MR jobs.
* built the site documentation by `mvn site site:stage -Preleasedocs` and skimmed the contents.
* built Spark 3.2.1 against 3.3.3 RC1 using the staging repository (after `rm -rf ~/.m2/repository/org/apache/hadoop`).

Thanks,
Masatake Iwasaki

On 2022/05/12 2:25, Steve Loughran wrote:

I have put together a release candidate (RC1) for Hadoop 3.3.3

The RC is available at:
https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/

The git tag is release-3.3.3-RC1, commit d37586cbda3

The maven artifacts are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1349/

You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Change log
https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/CHANGELOG.md

Release notes
https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC1/RELEASENOTES.md

There's a very small number of changes, primarily critical code/packaging
issues and security fixes.

* The critical fixes which shipped in the 3.2.3 release.
* CVEs in our code and dependencies
* Shaded client packaging issues.
* A switch from log4j to reload4j

reload4j is an active fork of the log4j 1.2.17 library with the classes
which contain CVEs removed. Even though hadoop never used those classes,
they regularly raised alerts on security scans and concern from users.
Switching to the forked project allows us to ship a secure logging
framework. It will complicate the builds of downstream
maven/ivy/gradle projects which exclude our log4j artifacts, as they
need to cut the new dependency instead/as well.

See the release notes for details.

This is the second release attempt. It is the same git commit as before, but
fully recompiled with another republish to maven staging, which has been
verified by building spark, as well as a minimal test project.

Please try the release and vote. The vote will run for 5 days.

-Steve

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/ No changes -1 overall The following subsystems voted -1: asflicense hadolint mvnsite pathlen unit The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck whitespace The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit Specific tests: Failed junit tests : hadoop.fs.TestFileUtil hadoop.hdfs.server.datanode.TestDirectoryScanner hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat hadoop.hdfs.server.federation.router.TestRouterQuota hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver hadoop.hdfs.server.federation.resolver.order.TestLocalResolver hadoop.yarn.server.resourcemanager.TestClientRMService hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter hadoop.mapreduce.lib.input.TestLineRecordReader hadoop.mapred.TestLineRecordReader hadoop.tools.TestDistCpSystem hadoop.yarn.sls.TestSLSRunner hadoop.resourceestimator.solver.impl.TestLpSolver hadoop.resourceestimator.service.TestResourceEstimatorService cc: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/diff-compile-cc-root.txt [4.0K] javac: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/diff-compile-javac-root.txt [476K] checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/diff-checkstyle-root.txt [14M] hadolint: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/diff-patch-hadolint.txt [4.0K] mvnsite: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-mvnsite-root.txt [560K] pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/pathlen.txt [12K] pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/diff-patch-pylint.txt [20K] shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/diff-patch-shellcheck.txt [72K] whitespace: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/whitespace-eol.txt [12M] https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/whitespace-tabs.txt [1.3M] javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-javadoc-root.txt [40K] unit: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [216K] https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [432K] https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K] 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K] https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [20K] https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [116K] https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K] https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt [20K] https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K] https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/662/artifact/out/patch-unit-hadoop-tools_hadoop-sls.t
[jira] [Resolved] (HDFS-16579) Fix build failure for TestBlockManager on branch-3.2
[ https://issues.apache.org/jira/browse/HDFS-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma resolved HDFS-16579.
-------------------------------------
    Fix Version/s: 3.2.4
       Resolution: Fixed

> Fix build failure for TestBlockManager on branch-3.2
> -----------------------------------------------------
>
>                 Key: HDFS-16579
>                 URL: https://issues.apache.org/jira/browse/HDFS-16579
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Tao Li
>            Assignee: Tao Li
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.2.4
>
>       Time Spent: 50m
>  Remaining Estimate: 0h
>
> Fix build failure for TestBlockManager on branch-3.2. See HDFS-16552.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org