[jira] [Resolved] (HDFS-16879) EC : Fsck -blockId shows number of redundant internal block replicas for EC Blocks
[ https://issues.apache.org/jira/browse/HDFS-16879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ZanderXu resolved HDFS-16879.
-----------------------------
    Fix Version/s: 3.4.0
     Hadoop Flags: Reviewed
       Resolution: Fixed

> EC : Fsck -blockId shows number of redundant internal block replicas for EC
> Blocks
> --
>
>                 Key: HDFS-16879
>                 URL: https://issues.apache.org/jira/browse/HDFS-16879
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>
> For a block of an EC file, running hdfs fsck -blockId xxx can additionally
> show the number of redundant internal block replicas.
> For example: the current block group has 10 live replicas, but fsck shows
> only 9 live replicas.
> In fact, one of the live replicas is in the redundant state, so fsck can
> additionally show "No. of redundant Replica: 1".
> {code:java}
> hdfs fsck -blockId blk_-xxx
> Block Id: blk_-xxx
> Block belongs to: /ec/file1
> No. of Expected Replica: 9
> No. of live Replica: 9
> No. of excess Replica: 0
> No. of stale Replica: 0
> No. of decommissioned Replica: 0
> No. of decommissioning Replica: 0
> No. of corrupted Replica: 0
> Block replica on datanode/rack: ip-xxx1 is HEALTHY
> Block replica on datanode/rack: ip-xxx2 is HEALTHY
> Block replica on datanode/rack: ip-xxx3 is HEALTHY
> Block replica on datanode/rack: ip-xxx4 is HEALTHY
> Block replica on datanode/rack: ip-xxx5 is HEALTHY
> Block replica on datanode/rack: ip-xxx6 is HEALTHY
> Block replica on datanode/rack: ip-xxx7 is HEALTHY
> Block replica on datanode/rack: ip-xxx8 is HEALTHY
> Block replica on datanode/rack: ip-xxx9 is HEALTHY
> Block replica on datanode/rack: ip-xxx10 is HEALTHY
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
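The improvement above amounts to reporting live internal replicas beyond the block group's expected count (data units plus parity units). A minimal sketch of that arithmetic, using a hypothetical helper name rather than the actual NamenodeFsck code:

```java
public class RedundantReplicaDemo {
    // Hypothetical helper, not the real NamenodeFsck logic: for an EC
    // block group, live internal replicas beyond the expected count
    // (data units + parity units) are redundant.
    static int redundantReplicas(int liveReplicas, int expectedReplicas) {
        return Math.max(0, liveReplicas - expectedReplicas);
    }
}
```

For the scenario in the report (a block group with 10 live internal replicas against 9 expected, e.g. under RS-6-3), this yields the proposed "No. of redundant Replica: 1".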
[jira] [Created] (HDFS-16882) Add cache hit rate metric in MountTableResolver#getDestinationForPath
ZhangHB created HDFS-16882:
---------------------------

             Summary: Add cache hit rate metric in MountTableResolver#getDestinationForPath
                 Key: HDFS-16882
                 URL: https://issues.apache.org/jira/browse/HDFS-16882
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: rbf
    Affects Versions: 3.3.4
            Reporter: ZhangHB

Currently, the default value of "dfs.federation.router.mount-table.cache.enable" is true, and the default value of "dfs.federation.router.mount-table.max-cache-size" is 1. But there is no metric that shows the cache hit rate. I think we can add a hit-rate metric to monitor cache performance and tune these parameters more effectively.
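The proposed metric boils down to counting lookups that hit the mount-table location cache versus those that miss and fall through to resolution, then reporting the ratio. A minimal sketch under assumed names (this is not the actual MountTableResolver code):

```java
import java.util.concurrent.atomic.LongAdder;

public class CacheHitRateDemo {
    // Hypothetical counters; a real implementation would hook these into
    // MountTableResolver#getDestinationForPath and expose the ratio via
    // the Hadoop metrics framework.
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();

    void recordHit() { hits.increment(); }
    void recordMiss() { misses.increment(); }

    // Hit rate in [0, 1]; 0.0 when no lookups have been recorded yet.
    double hitRate() {
        long h = hits.sum();
        long total = h + misses.sum();
        return total == 0 ? 0.0 : (double) h / total;
    }
}
```

LongAdder keeps the hot path cheap under concurrent lookups, at the cost of a slightly stale sum when read.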
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/422/

[Jan 1, 2023, 5:06:33 PM] (github) HADOOP-18586. Update the year to 2023. (#5265). Contributed by Ayush Saxena.
[Jan 2, 2023, 2:35:16 PM] (github) YARN-11393. Fs2cs could be extended to set ULF to -1 upon conversion (#5201)

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen spotbugs unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

spotbugs : module:hadoop-hdfs-project/hadoop-hdfs
    Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)) Redundant null check at DataStorage.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)) Redundant null check at DataStorage.java:[line 695]
    Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) Redundant null check at MappableBlockLoader.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) Redundant null check at MappableBlockLoader.java:[line 138]
    Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at MemoryMappableBlockLoader.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at MemoryMappableBlockLoader.java:[line 75]
    Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at NativePmemMappableBlockLoader.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null check at NativePmemMappableBlockLoader.java:[line 85]
    Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,, long, FileInputStream, FileChannel, String) Redundant null check at NativePmemMappableBlockLoader.java:is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,, long, FileInputStream, FileChannel, String) Redundant null check at NativePmemMappableBlockLoader.java:[line 130]
    org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) At RollingWindowManager.java:At RollingWindowManager.java:[line 1]

spotbugs : module:hadoop-yarn-project/hadoop-yarn
    Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState)) Redundant null check at ResourceLocalizationService.java:is known to be non-null in
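The "Redundant nullcheck ... which is known to be non-null" findings above are spotbugs RCN warnings: a null check on a variable the analyzer has already proven non-null, typically a try-with-resources variable. An illustrative sketch of the pattern (not the flagged Hadoop code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class RedundantNullCheckDemo {
    static int readFirstByte(byte[] data) throws IOException {
        try (InputStream in = new ByteArrayInputStream(data)) {
            // 'in' is assigned from a constructor call, so it can never be
            // null here; a guard such as "if (in != null)" at this point is
            // the dead code spotbugs reports as a redundant null check.
            return in.read();
        }
    }
}
```

The usual fix is to delete the redundant check, or to restructure the resource handling so the code and the analyzer agree about nullability.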
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1094/

[Jan 2, 2023, 2:35:16 PM] (github) YARN-11393. Fs2cs could be extended to set ULF to -1 upon conversion (#5201)

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen spotbugs unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

spotbugs : module:hadoop-mapreduce-project/hadoop-mapreduce-client
    Write to static field org.apache.hadoop.mapreduce.task.reduce.Fetcher.nextId from instance method new org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, ExceptionReporter, SecretKey) At Fetcher.java:from instance method new org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, ExceptionReporter, SecretKey) At Fetcher.java:[line 120]

spotbugs : module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core
    Write to static field org.apache.hadoop.mapreduce.task.reduce.Fetcher.nextId from instance method new org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, ExceptionReporter, SecretKey) At Fetcher.java:from instance method new org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, ExceptionReporter, SecretKey) At Fetcher.java:[line 120]

spotbugs : module:hadoop-mapreduce-project
    Write to static field org.apache.hadoop.mapreduce.task.reduce.Fetcher.nextId from instance method new org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, ExceptionReporter, SecretKey) At Fetcher.java:from instance method new org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, ExceptionReporter, SecretKey) At Fetcher.java:[line 120]

spotbugs : module:root
    Write to static field org.apache.hadoop.mapreduce.task.reduce.Fetcher.nextId from instance method new org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, ExceptionReporter, SecretKey) At Fetcher.java:from instance method new org.apache.hadoop.mapreduce.task.reduce.Fetcher(JobConf, TaskAttemptID, ShuffleSchedulerImpl, MergeManager, Reporter, ShuffleClientMetrics, ExceptionReporter, SecretKey) At Fetcher.java:[line 120]

Failed junit tests :
    hadoop.hdfs.server.balancer.TestBalancerService
    hadoop.hdfs.TestLeaseRecovery2

cc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1094/artifact/out/results-compile-cc-root.txt [96K]
javac: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1094/artifact/out/results-compile-javac-root.txt [528K]
blanks: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1094/artifact/out/blanks-eol.txt [14M]
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1094/artifact/out/blanks-tabs.txt [2.0M]
checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1094/artifact/out/results-checkstyle-root.txt [13M]
hadolint: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1094/artifact/out/results-hadolint.txt [8.0K]
pathlen:
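The repeated "Write to static field Fetcher.nextId from instance method" finding flags a constructor mutating shared static state without synchronization, which can hand out duplicate IDs when instances are created concurrently. A common remedy, sketched here with hypothetical names rather than the actual Fetcher code, is an atomic counter:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class FetcherIdDemo {
    // Instead of "private static int nextId;" updated with "++nextId" in
    // the constructor (the pattern spotbugs flags), use an atomic counter
    // so concurrent construction still yields unique ids.
    private static final AtomicInteger NEXT_ID = new AtomicInteger();

    final int id;

    FetcherIdDemo() {
        id = NEXT_ID.incrementAndGet();
    }
}
```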
Re: [VOTE] Release Apache Hadoop 3.3.5
-1, because if I'm understanding the potential impact of HDFS-16853 correctly, then it's serious enough to fix before a release. (I could change my vote if someone wants to make a case that it's not that serious.)

Otherwise, this RC was looking good:

* Verified all checksums.
* Verified all signatures.
* Built from source, including native code on Linux.
    * mvn clean package -Pnative -Psrc -Drequire.openssl -Drequire.snappy -Drequire.zstd -DskipTests
* Tests passed.
    * mvn --fail-never clean test -Pnative -Dparallel-tests -Drequire.snappy -Drequire.zstd -Drequire.openssl -Dsurefire.rerunFailingTestsCount=3 -DtestsThreadCount=8
* Checked dependency tree to make sure we have all of the expected library updates that are mentioned in the release notes.
    * mvn -o dependency:tree
* Farewell, S3Guard.
* Confirmed that hadoop-openstack is now just a stub placeholder artifact with no code.
* For ARM verification:
    * Ran "file " on all native binaries in the ARM tarball to confirm they actually came out with ARM as the architecture.
    * Output of hadoop checknative -a on ARM looks good.
    * Ran a MapReduce job with the native bzip2 codec for compression, and it worked fine.
    * Ran a MapReduce job with YARN configured to use LinuxContainerExecutor and verified launching the containers through container-executor worked.

My local setup didn't have the test failures mentioned by Viraj, though there was some flakiness with a few HDFS snapshot tests timing out.

Regarding Hive and Bouncy Castle, there is an existing issue and pull request tracking an upgrade attempt. It's looking like some amount of code changes are required:

https://issues.apache.org/jira/browse/HIVE-26648
https://github.com/apache/hive/pull/3744

Chris Nauroth

On Tue, Jan 3, 2023 at 8:57 AM Chao Sun wrote:
> Hmm I'm looking at HADOOP-11867 related stuff but couldn't find it
> mentioned anywhere in change log or release notes. Are they actually
> up-to-date?
> > On Mon, Jan 2, 2023 at 7:48 AM Masatake Iwasaki > wrote: > > > > >- building HBase 2.4.13 and Hive 3.1.3 against 3.3.5 failed due to > dependency change. > > > > For HBase, classes under com/sun/jersey/json/* and com/sun/xml/* are not > expected in hbase-shaded-with-hadoop-check-invariants. > > Updating hbase-shaded/pom.xml is expected to be the fix as done in > HBASE-27292. > > > https://github.com/apache/hbase/commit/00612106b5fa78a0dd198cbcaab610bd8b1be277 > > > >[INFO] --- exec-maven-plugin:1.6.0:exec > (check-jar-contents-for-stuff-with-hadoop) @ > hbase-shaded-with-hadoop-check-invariants --- > >[ERROR] Found artifact with unexpected contents: > '/home/rocky/srcs/bigtop/build/hbase/rpm/BUILD/hbase-2.4.13/hbase-shaded/hbase-shaded-client/target/hbase-shaded-client-2.4.13.jar' > >Please check the following and either correct the build or update > >the allowed list with reasoning. > > > >com/ > >com/sun/ > >com/sun/jersey/ > >com/sun/jersey/json/ > >... > > > > > > For Hive, classes belonging to org.bouncycastle:bcprov-jdk15on:1.68 seem > to be problematic. > > Excluding them on hive-jdbc might be the fix. > > > >[ERROR] Failed to execute goal > org.apache.maven.plugins:maven-shade-plugin:3.2.1:shade (default) on > project hive-jdbc: Error creating shaded jar: Problem shading JAR > /home/rocky/.m2/repository/org/bouncycastle/bcprov-jdk15on/1.68/bcprov-jdk15on-1.68.jar > entry > META-INF/versions/15/org/bouncycastle/jcajce/provider/asymmetric/edec/SignatureSpi$EdDSA.class: > java.lang.IllegalArgumentException: Unsupported class file major version 59 > -> [Help 1] > >... > > > > > > On 2023/01/02 22:02, Masatake Iwasaki wrote: > > > Thanks for your great effort for the new release, Steve and Mukund. > > > > > > +1 while it would be nice if we can address missed Javadocs. > > > > > > + verified the signature and checksum. > > > + built from source tarball on Rocky Linux 8 and OpenJDK 8 with native > profile enabled. 
> > >+ launched pseudo distributed cluster including kms and httpfs with > Kerberos and SSL enabled. > > >+ created encryption zone, put and read files via httpfs. > > >+ ran example MR wordcount over encryption zone. > > > + built rpm packages by Bigtop and ran smoke-tests on Rocky Linux 8 > (both x86_64 and aarch64). > > >- building HBase 2.4.13 and Hive 3.1.3 against 3.3.5 failed due to > dependency change. > > > # while building HBase 2.4.13 and Hive 3.1.3 against Hadoop 3.3.4 > worked. > > > + skimmed the site contents. > > >- Javadocs are not contained (under r3.3.5/api). > > > # The issue can be reproduced even if I built site docs from the > source. > > > > > > Masatake Iwasaki > > > > > > On 2022/12/22 4:28, Steve Loughran wrote: > > >> Mukund and I have put together a release candidate (RC0) for Hadoop > 3.3.5. > > >> > > >> Given the time of year it's a bit unrealistic to run a 5 day vote and > > >> expect people to be able to test it
Re: [VOTE] Release Apache Hadoop 3.3.5
Hmm I'm looking at HADOOP-11867 related stuff but couldn't find it mentioned anywhere in change log or release notes. Are they actually up-to-date? On Mon, Jan 2, 2023 at 7:48 AM Masatake Iwasaki wrote: > > >- building HBase 2.4.13 and Hive 3.1.3 against 3.3.5 failed due to > > dependency change. > > For HBase, classes under com/sun/jersey/json/* and com/sun/xml/* are not > expected in hbase-shaded-with-hadoop-check-invariants. > Updating hbase-shaded/pom.xml is expected to be the fix as done in > HBASE-27292. > https://github.com/apache/hbase/commit/00612106b5fa78a0dd198cbcaab610bd8b1be277 > >[INFO] --- exec-maven-plugin:1.6.0:exec > (check-jar-contents-for-stuff-with-hadoop) @ > hbase-shaded-with-hadoop-check-invariants --- >[ERROR] Found artifact with unexpected contents: > '/home/rocky/srcs/bigtop/build/hbase/rpm/BUILD/hbase-2.4.13/hbase-shaded/hbase-shaded-client/target/hbase-shaded-client-2.4.13.jar' >Please check the following and either correct the build or update >the allowed list with reasoning. > >com/ >com/sun/ >com/sun/jersey/ >com/sun/jersey/json/ >... > > > For Hive, classes belonging to org.bouncycastle:bcprov-jdk15on:1.68 seem to > be problematic. > Excluding them on hive-jdbc might be the fix. > >[ERROR] Failed to execute goal > org.apache.maven.plugins:maven-shade-plugin:3.2.1:shade (default) on project > hive-jdbc: Error creating shaded jar: Problem shading JAR > /home/rocky/.m2/repository/org/bouncycastle/bcprov-jdk15on/1.68/bcprov-jdk15on-1.68.jar > entry > META-INF/versions/15/org/bouncycastle/jcajce/provider/asymmetric/edec/SignatureSpi$EdDSA.class: > java.lang.IllegalArgumentException: Unsupported class file major version 59 > -> [Help 1] >... > > > On 2023/01/02 22:02, Masatake Iwasaki wrote: > > Thanks for your great effort for the new release, Steve and Mukund. > > > > +1 while it would be nice if we can address missed Javadocs. > > > > + verified the signature and checksum. 
> > + built from source tarball on Rocky Linux 8 and OpenJDK 8 with native > > profile enabled. > >+ launched pseudo distributed cluster including kms and httpfs with > > Kerberos and SSL enabled. > >+ created encryption zone, put and read files via httpfs. > >+ ran example MR wordcount over encryption zone. > > + built rpm packages by Bigtop and ran smoke-tests on Rocky Linux 8 (both > > x86_64 and aarch64). > >- building HBase 2.4.13 and Hive 3.1.3 against 3.3.5 failed due to > > dependency change. > > # while building HBase 2.4.13 and Hive 3.1.3 against Hadoop 3.3.4 > > worked. > > + skimmed the site contents. > >- Javadocs are not contained (under r3.3.5/api). > > # The issue can be reproduced even if I built site docs from the > > source. > > > > Masatake Iwasaki > > > > On 2022/12/22 4:28, Steve Loughran wrote: > >> Mukund and I have put together a release candidate (RC0) for Hadoop 3.3.5. > >> > >> Given the time of year it's a bit unrealistic to run a 5 day vote and > >> expect people to be able to test it thoroughly enough to make this the one > >> we can ship. > >> > >> What we would like is for anyone who can to verify the tarballs, and test > >> the binaries, especially anyone who can try the arm64 binaries. We've got > >> the building of those done and now the build file will incorporate them > >> into the release -but neither of us have actually tested it yet. Maybe I > >> should try it on my pi400 over xmas. > >> > >> The maven artifacts are up on the apache staging repo -they are the ones > >> from x86 build. Building and testing downstream apps will be incredibly > >> helpful. 
> >>
> >> The RC is available at:
> >> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC0/
> >>
> >> The git tag is release-3.3.5-RC0, commit 3262495904d
> >>
> >> The maven artifacts are staged at
> >> https://repository.apache.org/content/repositories/orgapachehadoop-1365/
> >>
> >> You can find my public key at:
> >> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >>
> >> Change log
> >> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC0/CHANGELOG.md
> >>
> >> Release notes
> >> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC0/RELEASENOTES.md
> >>
> >> This is off branch-3.3 and is the first big release since 3.3.2.
> >>
> >> Key changes include
> >>
> >> * Big update of dependencies to try and keep those reports of
> >>   transitive CVEs under control -both genuine and false positive.
> >> * HDFS RBF enhancements
> >> * Critical fix to ABFS input stream prefetching for correct reading.
> >> * Vectored IO API for all FSDataInputStream implementations, with
> >>   high-performance versions for file:// and s3a:// filesystems.
> >>   file:// through java native io
> >>   s3a:// parallel GET requests.
> >> * This release includes Arm64 binaries. Please can anyone with
> >>   compatible systems validate these.
> >>
> >> Please try
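The "verified all checksums" step mentioned in the votes above comes down to computing each tarball's SHA-512 digest and comparing it against the published .sha512 file. A minimal self-contained sketch of the digest side (hashing an in-memory byte array rather than a release artifact):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha512Demo {
    // Compute a lowercase hex SHA-512 digest, the same value that
    // "sha512sum" or "shasum -a 512" prints for the release tarballs.
    static String sha512Hex(byte[] data) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```

Signature checking is separate: it verifies the .asc files against the KEYS file with gpg, which this sketch does not cover.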
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

Failed junit tests :
    hadoop.fs.TestTrash
    hadoop.fs.TestFileUtil
    hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
    hadoop.hdfs.TestRollingUpgrade
    hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
    hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
    hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
    hadoop.hdfs.TestLeaseRecovery2
    hadoop.hdfs.TestDFSInotifyEventInputStream
    hadoop.hdfs.TestFileLengthOnClusterRestart
    hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
    hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
    hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
    hadoop.hdfs.server.federation.router.TestRouterQuota
    hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
    hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
    hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
    hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
    hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
    hadoop.yarn.server.resourcemanager.TestClientRMService
    hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
    hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
    hadoop.mapreduce.lib.input.TestLineRecordReader
    hadoop.mapred.TestLineRecordReader
    hadoop.tools.TestDistCpSystem
    hadoop.yarn.sls.TestSLSRunner
    hadoop.resourceestimator.solver.impl.TestLpSolver
    hadoop.resourceestimator.service.TestResourceEstimatorService

cc: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/diff-compile-cc-root.txt [4.0K]
javac: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/diff-compile-javac-root.txt [488K]
checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/diff-checkstyle-root.txt [14M]
hadolint: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/diff-patch-hadolint.txt [4.0K]
mvnsite: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/patch-mvnsite-root.txt [564K]
pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/pathlen.txt [12K]
pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/diff-patch-pylint.txt [20K]
shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/diff-patch-shellcheck.txt [72K]
whitespace: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/whitespace-eol.txt [12M]
            https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/whitespace-tabs.txt [1.3M]
javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/patch-javadoc-root.txt [40K]
unit: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [224K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [468K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/895/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [72K]