Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/207/

No changes.

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/584/

[Jul 29, 2021 3:19:17 AM] (Xiaoqiao He) HDFS-15175. Multiple CloseOp shared block instance causes the standby namenode to crash when rolling editlog. Contributed by Wan Chang.
[Jul 29, 2021 4:57:28 AM] (noreply) HDFS-15936. Solve SocketTimeoutException#sendPacket() does not record SocketTimeout exception. (#2836)
[Jul 29, 2021 9:25:39 AM] (noreply) YARN-10841. Fix token reset synchronization for UAM response token. (#3194)
[Jul 29, 2021 11:43:40 AM] (Szilard Nemeth) YARN-10628. Add node usage metrics in SLS. Contributed by Vadaga Ananyo Rao
[Jul 29, 2021 3:37:40 PM] (Szilard Nemeth) YARN-10663. Add runningApps stats in SLS. Contributed by Vadaga Ananyo Rao
[Jul 29, 2021 3:56:14 PM] (noreply) YARN-10869. CS considers only the default maximum-allocation-mb/vcore property as a maximum when it creates dynamic queues (#3225)
[Jul 29, 2021 5:15:27 PM] (noreply) YARN-10856. Prevent ATS v2 health check REST API call if the ATS service itself is disabled. (#3236)
[Jul 29, 2021 5:50:57 PM] (noreply) HADOOP-17815. Run CI for Centos 7 (#3231)
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/583/

[Jul 28, 2021 4:59:00 AM] (noreply) HDFS-16145. CopyListing fails with FNF exception with snapshot diff. (#3234)
[Jul 28, 2021 10:18:04 AM] (noreply) HDFS-16137. Improve the comments related to FairCallQueue#queues. (#3226)
[Jul 28, 2021 12:49:10 PM] (Szilard Nemeth) YARN-10727. ParentQueue does not validate the queue on removal. Contributed by Andras Gyori
[Jul 28, 2021 1:49:23 PM] (Stephen O'Donnell) HDFS-16144. Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions). Contributed by Stephen O'Donnell
[Jul 28, 2021 2:34:43 PM] (noreply) HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled (#3239)
[Jul 28, 2021 2:50:14 PM] (Szilard Nemeth) YARN-10790. CS Flexible AQC: Add separate parent and leaf template property. Contributed by Andras Gyori
[Jul 28, 2021 3:02:15 PM] (Szilard Nemeth) YARN-6272. TestAMRMClient#testAMRMClientWithContainerResourceChange fails intermittently. Contributed by Andras Gyory & Prabhu Joseph
[Jul 28, 2021 5:10:07 PM] (noreply) HADOOP-17814. Provide fallbacks for identity/cost providers and backoff enable (#3230)
[Jul 28, 2021 7:22:58 PM] (noreply) HADOOP-17811: ABFS ExponentialRetryPolicy doesn't pick up configuration values (#3221)
[Jul 28, 2021 10:37:56 PM] (Konstantin Shvachko) HADOOP-17819. Add extensions to ProtobufRpcEngine RequestHeaderProto. Contributed by Hector Sandoval Chaverri. (#3242)

-1 overall

The following subsystems voted -1:
    asflicense blanks pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    Failed junit tests :
       hadoop.hdfs.server.balancer.TestBalancerService
       hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
       hadoop.hdfs.server.balancer.TestBalancer
       hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
       hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped
       hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId
       hadoop.hdfs.server.mover.TestStorageMover
       hadoop.hdfs.server.mover.TestMover
       hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
       hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobConf
       hadoop.hdfs.server.federation.router.TestRouterFsck
       hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
       hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination
       hadoop.fs.contract.router.web.TestRouterWebHDFSContractConcat
       hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
       hadoop.hdfs.server.federation.router.TestRouterRPCClientRetries
       hadoop.hdfs.server.federation.router.TestRouterMountTableCacheRefresh
       hadoop.hdfs.server.federation.router.TestRouterFederationRename
       hadoop.hdfs.server.federation.router.TestRouterWebHdfsMethods
       hadoop.hdfs.server.federation.router.TestRouterMountTable
       hadoop.tools.dynamometer.TestDynamometerInfra
       hadoop.tools.dynamometer.TestDynamometerInfra

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/583/artifact/out/results-compile-cc-root.txt [96K]
   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/583/artifact/out/results-compile-javac-root.txt [364K]
   blanks:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/583/artifact/out/blanks-eol.txt [13M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/583/artifact/out/blanks-tabs.txt [2.0M]
   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/583/artifact/out/results-checkstyle-root.txt [14M]
   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/583/artifact/out/results-pathlen.txt [16K]
[jira] [Created] (HDFS-16147) load fsimage with parallelization and compression
liuyongpan created HDFS-16147:
---------------------------------

             Summary: load fsimage with parallelization and compression
                 Key: HDFS-16147
                 URL: https://issues.apache.org/jira/browse/HDFS-16147
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: namenode
    Affects Versions: 3.3.0
            Reporter: liuyongpan
             Fix For: 3.3.0

HDFS-14617 allows the inode and inode directory sections of the fsimage to be loaded in parallel, but parallelism and compression cannot be enabled at the same time. This issue fixes that defect.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
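The tension the report describes can be illustrated with a minimal, hypothetical Java sketch (this is not the FSImage format or the HDFS-16147 patch): parallel loading requires jumping to a recorded byte offset for each sub-section, which a single continuous compressed stream does not allow; one plausible way to reconcile the two is to compress each sub-section independently so workers can still seek to section boundaries.

```java
import java.io.*;
import java.util.zip.*;

public class CompressedSeekDemo {
    public static void main(String[] args) throws IOException {
        // Build an "image" of two sections, each independently GZIP-compressed,
        // and record the byte offset where the second section starts.
        ByteArrayOutputStream image = new ByteArrayOutputStream();
        byte[] section1 = gzip("inode-section".getBytes("UTF-8"));
        byte[] section2 = gzip("dir-section".getBytes("UTF-8"));
        image.write(section1);
        int section2Offset = image.size();
        image.write(section2);
        byte[] bytes = image.toByteArray();

        // A loader thread can jump straight to its section's offset and
        // decompress only that section -- impossible if the whole image
        // were one continuous compressed stream.
        InputStream in = new ByteArrayInputStream(
                bytes, section2Offset, bytes.length - section2Offset);
        byte[] out = new GZIPInputStream(in).readAllBytes();
        System.out.println(new String(out, "UTF-8"));
    }

    // Compress one section on its own, so it is decodable in isolation.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }
}
```

Whether the actual patch takes this per-section approach is not stated in the report; the sketch only shows why offsets into one monolithic compressed stream break parallel loading.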
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/

[Jul 28, 2021 11:04:46 PM] (Konstantin Shvachko) HADOOP-17819. Add extensions to ProtobufRpcEngine RequestHeaderProto. Contributed by Hector Sandoval Chaverri. (#3242)

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :
       hadoop.fs.TestTrash
       hadoop.fs.TestFileUtil
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.hdfs.TestDFSClientRetries
       hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
       hadoop.hdfs.server.datanode.TestDirectoryScanner
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.hdfs.server.federation.router.TestRouterQuota
       hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
       hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
       hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
       hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
       hadoop.yarn.server.resourcemanager.TestClientRMService
       hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
       hadoop.tools.TestDistCpSystem
       hadoop.yarn.sls.appmaster.TestAMSimulator
       hadoop.yarn.sls.TestSLSRunner
       hadoop.resourceestimator.service.TestResourceEstimatorService
       hadoop.resourceestimator.solver.impl.TestLpSolver

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/diff-compile-cc-root.txt [4.0K]
   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/diff-compile-javac-root.txt [496K]
   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/diff-checkstyle-root.txt [14M]
   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/diff-patch-hadolint.txt [4.0K]
   mvnsite:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/patch-mvnsite-root.txt [584K]
   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/pathlen.txt [12K]
   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/diff-patch-pylint.txt [48K]
   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/diff-patch-shellcheck.txt [56K]
   shelldocs:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/diff-patch-shelldocs.txt [48K]
   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/whitespace-tabs.txt [1.3M]
   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/diff-javadoc-javadoc-root.txt [20K]
   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [236K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [428K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [40K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [112K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [96K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/374/artifact/out/patch-unit-hadoop-to
[jira] [Created] (HDFS-16146) All three replicas are lost due to not adding a new DataNode in time
Shuyan Zhang created HDFS-16146:
-----------------------------------

             Summary: All three replicas are lost due to not adding a new DataNode in time
                 Key: HDFS-16146
                 URL: https://issues.apache.org/jira/browse/HDFS-16146
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode, hdfs
            Reporter: Shuyan Zhang
            Assignee: Shuyan Zhang

We have a three-replica file, and all replicas of a block were lost while the default datanode replacement strategy was in use. It happened like this:

1. addBlock() applies for a new block and successfully connects three datanodes (dn1, dn2 and dn3) to build a pipeline;
2. Data is written;
3. dn1 hits an error and is kicked out. At this point more than one datanode remains in the pipeline, so according to the replacement strategy there is no need to add a new datanode;
4. After writing completes, the pipeline enters PIPELINE_CLOSE;
5. dn2 hits an error and is kicked out. Because the pipeline is already in the close phase, addDatanode2ExistingPipeline() decides to hand the task of transferring the replica over to the NameNode. Only one datanode is left in the pipeline;
6. dn3 hits an error, and all replicas are lost.

If a new datanode were added in step 5, losing all replicas in this case could be avoided. An error during PIPELINE_CLOSE carries the same risk of losing replicas as an error during DATA_STREAMING, so we should not skip adding a new datanode during PIPELINE_CLOSE.
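The decision made in steps 3 and 5 above can be modeled as a tiny, hypothetical sketch (the real logic lives in the HDFS client's ReplaceDatanodeOnFailure policy and addDatanode2ExistingPipeline(); the method and threshold here are simplified for illustration only):

```java
public class PipelineReplacementSketch {
    enum Stage { DATA_STREAMING, PIPELINE_CLOSE }

    // Simplified, illustrative decision: should the client add a
    // replacement datanode after a pipeline node fails?
    static boolean shouldAddReplacement(Stage stage, int remainingNodes) {
        if (stage == Stage.PIPELINE_CLOSE) {
            // Behavior described in the report: during close, replica
            // transfer is deferred to the NameNode and no replacement is
            // added, even as the pipeline shrinks toward a single node.
            return false;
        }
        // While streaming, failures are tolerated as long as more than
        // one node remains in the pipeline.
        return remainingNodes <= 1;
    }

    public static void main(String[] args) {
        // Step 3: dn1 fails while streaming; 2 nodes remain -> no replacement.
        System.out.println(shouldAddReplacement(Stage.DATA_STREAMING, 2));
        // Step 5: dn2 fails during close; 1 node remains -> still no
        // replacement. This is exactly the gap the report highlights:
        // one more failure (dn3) now loses every replica.
        System.out.println(shouldAddReplacement(Stage.PIPELINE_CLOSE, 1));
    }
}
```

The proposed fix amounts to making the PIPELINE_CLOSE branch apply the same remaining-node check as DATA_STREAMING rather than returning false unconditionally.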
[jira] [Resolved] (HDFS-15936) Solve BlockSender#sendPacket() does not record SocketTimeout exception
[ https://issues.apache.org/jira/browse/HDFS-15936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HDFS-15936.
------------------------------------
    Fix Version/s: 3.3.2
                   3.4.0
       Resolution: Fixed

Thanks!

> Solve BlockSender#sendPacket() does not record SocketTimeout exception
> ----------------------------------------------------------------------
>
>                 Key: HDFS-15936
>                 URL: https://issues.apache.org/jira/browse/HDFS-15936
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: JiangHua Zhu
>            Assignee: JiangHua Zhu
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.3.2
>
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In BlockSender#sendPacket(), if a SocketTimeoutException occurs, no
> information is recorded:
>
>   try {
>     ...
>   } catch (IOException e) {
>     if (e instanceof SocketTimeoutException) {
>       /*
>        * writing to client timed out. This happens if the client reads
>        * part of a block and then decides not to read the rest (but leaves
>        * the socket open).
>        *
>        * Reporting of this case is done in DataXceiver#run
>        */
>     }
>   }
>
> Nothing is logged here, which is not conducive to troubleshooting. We
> should add a warning-level log line.
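A runnable sketch of the kind of change the issue asks for. The helper names (handle, warn) and the message text are hypothetical, not the actual BlockSender code or its logger; the point is simply that the previously silent catch branch now emits a warn-level line:

```java
import java.io.IOException;
import java.net.SocketTimeoutException;

public class SendPacketLogSketch {
    // Stand-in for a real logger's warn(); records and prints the message.
    static void warn(String msg) {
        System.out.println("WARN " + msg);
    }

    // Sketch of the catch branch with the added warning: the timeout is
    // still swallowed (reporting stays in DataXceiver#run), but it now
    // leaves a trace in the datanode log for troubleshooting.
    static void handle(IOException e) {
        if (e instanceof SocketTimeoutException) {
            warn("sendPacket: write to client timed out: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        handle(new SocketTimeoutException("Read timed out"));
    }
}
```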