[jira] [Created] (HDFS-17125) Method checkAndUpdate should also resolve duplicate replicas when memBlockInfo.metadataExists() return false
farmmamba created HDFS-17125:

             Summary: Method checkAndUpdate should also resolve duplicate replicas when memBlockInfo.metadataExists() return false
                 Key: HDFS-17125
                 URL: https://issues.apache.org/jira/browse/HDFS-17125
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: datanode
    Affects Versions: 3.4.0
            Reporter: farmmamba
            Assignee: farmmamba

In method FsDatasetImpl#checkAndUpdate, there is the following code snippet:

{code:java}
if (memBlockInfo.blockDataExists()) {
  if (memBlockInfo.getBlockURI().compareTo(diskFile.toURI()) != 0) {
    if (diskMetaFileExists) {
      if (memBlockInfo.metadataExists()) {
        // We have two sets of block+meta files. Decide which one to
        // keep.
        ReplicaInfo diskBlockInfo = new ReplicaBuilder(ReplicaState.FINALIZED)
            .setBlockId(blockId)
            .setLength(diskFile.length())
            .setGenerationStamp(diskGS)
            .setFsVolume(vol)
            .setDirectoryToUse(diskFile.getParentFile())
            .build();
        ((FsVolumeImpl) vol).resolveDuplicateReplicas(
            bpid, memBlockInfo, diskBlockInfo, volumeMap);
      }
    } else {
      // .
    }
    if (!fileIoProvider.delete(vol, diskFile)) {
      LOG.warn("Failed to delete " + diskFile);
    }
  }
}
{code}

It calls resolveDuplicateReplicas only when memBlockInfo.metadataExists() returns true. Should we add similar handling for the case where memBlockInfo.metadataExists() returns false?

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
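To make the open question concrete, here is a minimal, self-contained sketch of one way the missing branch could decide which replica to keep when the in-memory replica has no meta file. The names below (ReplicaSnapshot, chooseReplicaToKeep) are hypothetical illustrations, not the FsDatasetImpl/FsVolumeImpl API, and the preference order (meta file present, then higher generation stamp, then longer block file) is an assumption rather than the documented resolveDuplicateReplicas contract.

```java
// Hypothetical sketch: choosing between two duplicate replicas when the
// in-memory replica may lack a meta file. Not HDFS code.
public class DuplicateReplicaSketch {

    // Minimal stand-in for the fields checkAndUpdate would compare.
    record ReplicaSnapshot(long genStamp, long length, boolean metaExists) {}

    // Assumed preference order: a replica with a meta file beats one
    // without; otherwise higher generation stamp wins, then longer length.
    static ReplicaSnapshot chooseReplicaToKeep(ReplicaSnapshot mem,
                                               ReplicaSnapshot disk) {
        if (mem.metaExists() != disk.metaExists()) {
            return mem.metaExists() ? mem : disk;
        }
        if (mem.genStamp() != disk.genStamp()) {
            return mem.genStamp() > disk.genStamp() ? mem : disk;
        }
        return mem.length() >= disk.length() ? mem : disk;
    }

    public static void main(String[] args) {
        // In-memory replica has no meta file; the on-disk copy does.
        ReplicaSnapshot mem = new ReplicaSnapshot(100, 1024, false);
        ReplicaSnapshot disk = new ReplicaSnapshot(100, 1024, true);
        System.out.println(chooseReplicaToKeep(mem, disk) == disk); // true
    }
}
```

The point of the sketch is only that the metadataExists()==false branch has enough information (generation stamp, length, meta-file presence) to make the same kind of keep/discard decision the true branch already makes.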
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/525/

[Jul 24, 2023, 6:38:00 AM] (github) HDFS-17112. Show decommission duration in JMX and HTML. (#5866). Contributed by Shuyan Zhang.
[Jul 24, 2023, 11:56:23 AM] (github) HDFS-17119. RBF: Logger fix for StateStoreMySQLImpl. (#5882). Contributed by Zhaohui Wang.
[Jul 24, 2023, 1:40:36 PM] (github) MAPREDUCE-7442. Exception message is not intusive when accessing the job configuration web UI (#5848)
[Jul 24, 2023, 6:36:57 PM] (github) HADOOP-18805. S3A prefetch tests to work with small files (#5851)
[Jul 24, 2023, 9:34:49 PM] (github) HADOOP-18823. Add Labeler Github Action. (#5874). Contributed by Ayush Saxena.

-1 overall

The following subsystems voted -1:
    blanks hadolint mvnsite pathlen spotbugs unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

spotbugs :
module:hadoop-hdfs-project/hadoop-hdfs
    Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) At DataStorage.java:[line 695]
    Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) At MappableBlockLoader.java:[line 138]
    Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) At MemoryMappableBlockLoader.java:[line 75]
    Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) At NativePmemMappableBlockLoader.java:[line 85]
    Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String) At NativePmemMappableBlockLoader.java:[line 130]
    org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) At RollingWindowManager.java:[line 1]

spotbugs :

module:hadoop-yarn-project/hadoop-yarn
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1099/

No changes

ERROR: File 'out/email-report.txt' does not exist
Re: Signing releases using automated release infra
Yep, thirdparty could be a good candidate to try; building a thirdparty release is relatively easy as well.

-Ayush

On Thu, 20 Jul 2023 at 15:25, Steve Loughran wrote:
>
> could be good.
>
> why not set it up for the third-party module first to see how well it works?
>
> On Tue, 18 Jul 2023 at 21:05, Ayush Saxena wrote:
>>
>> Something we can explore as well!!
>>
>> -Ayush
>>
>> Begin forwarded message:
>>
>> > From: Volkan Yazıcı
>> > Date: 19 July 2023 at 1:24:49 AM IST
>> > To: d...@community.apache.org
>> > Subject: Signing releases using automated release infra
>> > Reply-To: d...@community.apache.org
>> >
>> > Abstract: Signing release artifacts using an automated release
>> > infrastructure has been officially approved by LEGAL. This enables
>> > projects to sign artifacts using, say, GitHub Actions.
>> >
>> > I have been trying to overhaul the Log4j release process and make it
>> > as frictionless as possible since last year. As a part of that effort,
>> > I wanted to sign artifacts in CI during deployment, and in a
>> > `members@a.o` thread[0] I explained how one can do that securely with
>> > the help of Infra. That was in December 2022. It has been a long,
>> > rough journey, but we succeeded. In this PR[1], Legal has updated the
>> > release policy to reflect that this process is officially allowed.
>> > Further, Infra put together guides[2][3] to assist projects. The Logging
>> > Services PMC has already successfully performed 4 Log4j Tools releases
>> > using this approach; see its release process[4] for a demonstration.
>> >
>> > [0] (members only!) https://lists.apache.org/thread/1o12mkjrhyl45f9pof94pskg55vhs61n
>> > [1] https://github.com/apache/www-site/pull/235
>> > [2] https://infra.apache.org/release-publishing.html#signing
>> > [3] https://infra.apache.org/release-signing.html#automated-release-signing
>> > [4] https://github.com/apache/logging-log4j-tools/blob/master/RELEASING.adoc
>> >
>> > # F.A.Q.
>> >
>> > ## Why shall a project be interested in this?
>> >
>> > It greatly simplifies the release process. See the Log4j Tools release
>> > process[4], probably the simplest among all Java-based ASF projects.
>> >
>> > ## How can a project get started?
>> >
>> > 1. Make sure your project builds are reproducible (otherwise there is
>> > no way the PMC can verify the integrity of CI-produced and -signed
>> > artifacts)
>> > 2. Clone and adapt INFRA-23996 (GPG keys in GitHub secrets)
>> > 3. Clone and adapt INFRA-23974 (Nexus creds. in GitHub secrets for
>> > snapshot deployments)
>> > 4. Clone and adapt INFRA-24051 (Nexus creds. in GitHub secrets for
>> > staging deployments)
>> >
>> > You might also want to check this[5] GitHub Actions workflow for
>> > inspiration.
>> >
>> > [5] https://github.com/apache/logging-log4j-tools/blob/master/.github/workflows/build.yml
>> >
>> > ## Does the "automated release infrastructure" (CI) perform the full
>> > release?
>> >
>> > No. CI *only* uploads signed artifacts to Nexus. The release manager
>> > (RM) still needs to copy the CI-generated files to SVN, the PMC needs
>> > to vote, and, upon consensus, the RM needs to "close" the release in
>> > Nexus, and so on.
>> >
>> > ---------------------------------------------------------------------
>> > To unsubscribe, e-mail: dev-unsubscr...@community.apache.org
>> > For additional commands, e-mail: dev-h...@community.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/

[Jul 24, 2023, 6:38:00 AM] (github) HDFS-17112. Show decommission duration in JMX and HTML. (#5866). Contributed by Shuyan Zhang.
[Jul 24, 2023, 11:56:23 AM] (github) HDFS-17119. RBF: Logger fix for StateStoreMySQLImpl. (#5882). Contributed by Zhaohui Wang.
[Jul 24, 2023, 1:40:36 PM] (github) MAPREDUCE-7442. Exception message is not intusive when accessing the job configuration web UI (#5848)
[Jul 24, 2023, 6:36:57 PM] (github) HADOOP-18805. S3A prefetch tests to work with small files (#5851)
[Jul 24, 2023, 9:34:49 PM] (github) HADOOP-18823. Add Labeler Github Action. (#5874). Contributed by Ayush Saxena.

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

Failed junit tests :
    hadoop.hdfs.server.namenode.ha.TestObserverNode
    hadoop.mapreduce.v2.TestUberAM
    hadoop.mapreduce.v2.TestMRJobsWithProfiler
    hadoop.mapreduce.v2.TestMRJobs

cc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/results-compile-cc-root.txt [96K]
javac: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/results-compile-javac-root.txt [12K]
blanks: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/blanks-eol.txt [15M]
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/blanks-tabs.txt [2.0M]
checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/results-checkstyle-root.txt [13M]
hadolint: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/results-hadolint.txt [20K]
pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/results-pathlen.txt [16K]
pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/results-pylint.txt [20K]
shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/results-shellcheck.txt [24K]
xml: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/xml.txt [24K]
javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/results-javadoc-javadoc-root.txt [244K]
unit: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [232K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1298/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [72K]

Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org
[jira] [Created] (HDFS-17123) Sort datanodeStorages when generating StorageBlockReport[] in method BPServiceActor#blockReport for future convenience
farmmamba created HDFS-17123:

             Summary: Sort datanodeStorages when generating StorageBlockReport[] in method BPServiceActor#blockReport for future convenience
                 Key: HDFS-17123
                 URL: https://issues.apache.org/jira/browse/HDFS-17123
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: datanode
    Affects Versions: 3.4.0
            Reporter: farmmamba
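The proposal in the summary can be illustrated with a small, hedged sketch: sort the per-storage entries by storage ID before building the StorageBlockReport[] array, so that downstream consumers see a deterministic order. The Storage record and sorted method below are stand-ins invented for the example, not the DatanodeStorage/BPServiceActor API.

```java
// Illustrative only: deterministic ordering of per-storage report entries.
import java.util.Arrays;
import java.util.Comparator;

public class SortedStorageReports {

    // Stand-in for DatanodeStorage, carrying only the field we sort on.
    record Storage(String storageId) {}

    // Return a copy sorted by storage ID; the input array is left untouched.
    static Storage[] sorted(Storage[] storages) {
        Storage[] copy = storages.clone();
        Arrays.sort(copy, Comparator.comparing(Storage::storageId));
        return copy;
    }

    public static void main(String[] args) {
        Storage[] in = {
            new Storage("DS-b"), new Storage("DS-a"), new Storage("DS-c")
        };
        Storage[] out = sorted(in);
        System.out.println(out[0].storageId()); // prints DS-a
    }
}
```

Sorting a copy (rather than the live array) keeps the change side-effect free for any caller that relies on the original iteration order.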
[jira] [Created] (HDFS-17122) Rectify the table length discrepancy in the DataNode UI.
Hualong Zhang created HDFS-17122:

             Summary: Rectify the table length discrepancy in the DataNode UI.
                 Key: HDFS-17122
                 URL: https://issues.apache.org/jira/browse/HDFS-17122
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 3.4.0
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang
         Attachments: image-2023-07-25-18-12-10-582.png

!image-2023-07-25-18-12-10-582.png|width=580,height=231!
[jira] [Created] (HDFS-17121) BPServiceActor to provide new thread to handle FBR
liuguanghua created HDFS-17121:
--

             Summary: BPServiceActor to provide new thread to handle FBR
                 Key: HDFS-17121
                 URL: https://issues.apache.org/jira/browse/HDFS-17121
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs
            Reporter: liuguanghua

# Since HDFS-16016, IBR runs in its own thread, so heartbeats are no longer blocked while the IBR waits for the read lock in the DataNode.
# FBR should now be handled the same way.
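As a rough sketch of the idea (mirroring what HDFS-16016 did for IBR), the heartbeat loop could hand the full block report off to a dedicated single-threaded executor so it never blocks while the report work runs. The class and method names below are illustrative only; this is not the actual BPServiceActor code.

```java
// Illustrative only: offloading FBR work to a dedicated daemon thread so
// the caller (the heartbeat loop, in the real DataNode) returns immediately.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FbrOffloadSketch {

    // Single-threaded executor keeps FBRs ordered while isolating them
    // from the heartbeat thread.
    private final ExecutorService fbrExecutor =
        Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r, "fbr-sender");
            t.setDaemon(true);
            return t;
        });

    // The caller submits the report and returns without waiting; the
    // report itself runs on the fbr-sender thread.
    void triggerFullBlockReport(Runnable sendReport) {
        fbrExecutor.submit(sendReport);
    }

    public static void main(String[] args) throws InterruptedException {
        FbrOffloadSketch actor = new FbrOffloadSketch();
        CountDownLatch done = new CountDownLatch(1);
        actor.triggerFullBlockReport(done::countDown);
        done.await(); // the report ran asynchronously
        System.out.println("fbr sent");
    }
}
```

In the real code the submitted task would build and send the StorageBlockReport[] while holding whatever dataset lock it needs, without stalling heartbeats.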