Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/33/

[Oct 18, 2020 1:07:46 PM] (Hemanth Boyina) HADOOP-17144. Update Hadoop's lz4 to v1.9.2. Contributed by Hemanth Boyina.
[Oct 19, 2020 1:47:49 AM] (noreply) HADOOP-17309. Javadoc warnings and errors are ignored in the precommit jobs. (#2391)
[Oct 19, 2020 5:04:17 AM] (Ayush Saxena) HDFS-15629. Add seqno when warning slow mirror/disk in BlockReceiver. Contributed by Haibin Huang.
[Oct 19, 2020 5:18:47 AM] (Ayush Saxena) HDFS-14383. Compute datanode load based on StoragePolicy. Contributed by Ayush Saxena.
[Oct 19, 2020 5:24:18 AM] (noreply) HADOOP-17310. Touch command with -c option is broken. (#2393). Contributed by Ayush Saxena.
[Oct 19, 2020 11:17:51 AM] (Szilard Nemeth) YARN-10460. Upgrading to JUnit 4.13 causes tests in TestNodeStatusUpdater to fail. Contributed by Peter Bacsko
[Oct 19, 2020 12:48:46 PM] (noreply) HADOOP-17302. Upgrade to jQuery 3.5.1 in hadoop-sls. (#2379)
[Oct 20, 2020 1:09:03 AM] (noreply) HADOOP-17298. Backslash in username causes build failure in the environment started by start-build-env.sh. (#2367)

-1 overall

The following subsystems voted -1:
    blanks findbugs mvnsite pathlen shadedclient unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

   XML :

      Parsing Error(s):
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

   findbugs :

      module:hadoop-hdfs-project/hadoop-hdfs
      Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) At DataStorage.java:[line 695]

      module:hadoop-hdfs-project
      Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) At DataStorage.java:[line 695]

      module:hadoop-yarn-project/hadoop-yarn
      Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) At ResourceLocalizationService.java:[line 343]
      Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) At ResourceLocalizationService.java:[line 356]
      Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:
[jira] [Created] (YARN-10468) TestNodeStatusUpdater does not handle early failure in threads
Ahmed Hussein created YARN-10468:

Summary: TestNodeStatusUpdater does not handle early failure in threads
Key: YARN-10468
URL: https://issues.apache.org/jira/browse/YARN-10468
Project: Hadoop YARN
Issue Type: Bug
Components: nodemanager
Reporter: Ahmed Hussein

While investigating HADOOP-17314, I found that:

* TestNodeStatusUpdater#testNMRegistration() keeps spinning in {{while (heartBeatID <= 3 && waitCount++ != 200) {}} even though the NM thread could already be dead. The unit test should detect that the NM has died and terminate sooner, releasing resources for other tests.
* TestNodeStatusUpdater#testNMRMConnectionConf() has the same problem as described above.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
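The fix suggested above amounts to adding a liveness check inside the poll loop. A minimal sketch of the pattern, assuming nothing about the actual Hadoop test code (class and method names here are hypothetical, and the `AtomicInteger` stands in for the test's `heartBeatID` field):

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch: poll for heartbeats the way the test does, but bail out as soon
 * as the worker (NM) thread has died instead of burning the whole poll
 * budget. Illustrative only; not the actual TestNodeStatusUpdater code.
 */
class HeartbeatWait {

    /**
     * Returns true once heartBeatID passes target; false if the worker died
     * first or the poll budget ran out.
     */
    static boolean waitForHeartbeats(Thread worker, AtomicInteger heartBeatID,
                                     int target, int maxPolls) {
        int waitCount = 0;
        // Original pattern: while (heartBeatID <= 3 && waitCount++ != 200)
        while (heartBeatID.get() <= target && waitCount++ != maxPolls) {
            if (!worker.isAlive()) {
                // Fail fast on a dead worker instead of waiting out the loop.
                return heartBeatID.get() > target;
            }
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return heartBeatID.get() > target;
    }
}
```

With this shape, a worker that dies before reaching the target heartbeat count makes the wait return false within one poll interval rather than after the full 200 iterations.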
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/

[Oct 18, 2020 1:07:46 PM] (Hemanth Boyina) HADOOP-17144. Update Hadoop's lz4 to v1.9.2. Contributed by Hemanth Boyina.
[Oct 19, 2020 1:47:49 AM] (noreply) HADOOP-17309. Javadoc warnings and errors are ignored in the precommit jobs. (#2391)
[Oct 19, 2020 5:04:17 AM] (Ayush Saxena) HDFS-15629. Add seqno when warning slow mirror/disk in BlockReceiver. Contributed by Haibin Huang.
[Oct 19, 2020 5:18:47 AM] (Ayush Saxena) HDFS-14383. Compute datanode load based on StoragePolicy. Contributed by Ayush Saxena.
[Oct 19, 2020 5:24:18 AM] (noreply) HADOOP-17310. Touch command with -c option is broken. (#2393). Contributed by Ayush Saxena.
[Oct 19, 2020 11:17:51 AM] (Szilard Nemeth) YARN-10460. Upgrading to JUnit 4.13 causes tests in TestNodeStatusUpdater to fail. Contributed by Peter Bacsko
[Oct 19, 2020 12:48:46 PM] (noreply) HADOOP-17302. Upgrade to jQuery 3.5.1 in hadoop-sls. (#2379)
[Oct 20, 2020 1:09:03 AM] (noreply) HADOOP-17298. Backslash in username causes build failure in the environment started by start-build-env.sh. (#2367)

-1 overall

The following subsystems voted -1:
    pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

   XML :

      Parsing Error(s):
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

   Failed junit tests :

      hadoop.hdfs.TestFileChecksum
      hadoop.hdfs.TestFileChecksumCompositeCrc
      hadoop.hdfs.server.datanode.TestBPOfferService
      hadoop.hdfs.server.namenode.ha.TestHAAppend
      hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
      hadoop.hdfs.server.namenode.ha.TestObserverNode
      hadoop.hdfs.TestDFSOutputStream
      hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
      hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination
      hadoop.yarn.client.api.impl.TestAMRMClient
      hadoop.yarn.applications.distributedshell.TestDistributedShell
      hadoop.mapreduce.TestJobResourceUploader
      hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
      hadoop.fs.azure.TestNativeAzureFileSystemMocked
      hadoop.fs.azure.TestBlobMetadata
      hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
      hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck
      hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
      hadoop.fs.azure.TestWasbFsck
      hadoop.fs.azure.TestOutOfBandAzureBlobOperations
      hadoop.yarn.sls.appmaster.TestAMSimulator

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/artifact/out/diff-compile-cc-root.txt [48K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/artifact/out/diff-compile-javac-root.txt [568K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/artifact/out/diff-checkstyle-root.txt [16M]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/artifact/out/diff-patch-pylint.txt [60K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/artifact/out/diff-patch-shelldocs.txt [44K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/artifact/out/whitespace-eol.txt [13M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/artifact/out/whitespace-tabs.txt [2.0M]

   xml:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/300/artifact/out/xml.txt [24K]

   javadoc:
Re: Wire compatibility between Hadoop 3.x client and 2.x server
Steve, yes, that is my understanding as well, although in my experience people usually prefer RPC in prod for better performance.

On Tue, Oct 20, 2020 at 8:51 AM Steve Loughran wrote:

> I have this belief that webhdfs:// is better for cross-version
> compatibility. That right?
>
> On Fri, 16 Oct 2020 at 19:59, Chao Sun wrote:
>
>> Thanks for the replies all!
>>
>> > But in the opposite case, it might have problem, because 2.x server may
>> > not understand new client calls added in 3.x
>>
>> Yes, I am not expecting this to work. I'm thinking about the case where
>> one upgrades existing 2.x clients to 3.x and expects them to still work
>> against a 2.x server, which should not involve those new APIs.
>>
>> > Another backcompat issue was HDFS-15191, in 3.2.1.
>>
>> Thanks for pointing this out! In fact this looks like a serious bug in
>> 3.2.1. Glad to see it is fixed in 3.2.2.
>>
>> > But aside from that, we've been using 3.x client libraries against both
>> > 2.x and 3.x clusters without issue.
>>
>> Great, thanks.
>>
>> IMO it would be great if the community maintained an official
>> compatibility doc w.r.t. 2.x/3.x, which would make migration easier.
>>
>> Chao
>>
>> On Tue, Oct 13, 2020 at 11:57 PM Wu,Jianliang(vip.com)
>> <jianliang...@vipshop.com> wrote:
>>
>> > Ok, I will file a HDFS jira to report this issue.
>> >
>> > > On Oct 13, 2020, at 20:43, Wei-Chiu Chuang wrote:
>> > >
>> > > Thanks Jianliang for reporting the issue.
>> > > That sounds bad and should not have happened. Could you file a HDFS
>> > > jira and fill in more details?
>> > >
>> > > On Mon, Oct 12, 2020 at 8:59 PM Wu,Jianliang(vip.com)
>> > > <jianliang...@vipshop.com> wrote:
>> > >
>> > >> In our case, when the NN had been upgraded to 3.1.3 and the DNs were
>> > >> still on 2.6, we found that when Hive called the getContentSummary
>> > >> method the client and server were incompatible, because Hadoop 3
>> > >> added the new PROVIDED storage type.
>> > >>
>> > >> > On Oct 13, 2020, at 06:41, Chao Sun <sunc...@apache.org> wrote:
>> > >>
>> > >> This communication is intended only for the addressee(s) and may
>> > >> contain information that is privileged and confidential. You are
>> > >> hereby notified that, if you are not an intended recipient listed
>> > >> above, or an authorized employee or agent of an addressee of this
>> > >> communication responsible for delivering e-mail messages to an
>> > >> intended recipient, any dissemination, distribution or reproduction
>> > >> of this communication (including any attachments hereto) is strictly
>> > >> prohibited. If you have received this communication in error, please
>> > >> notify us immediately by a reply e-mail addressed to the sender and
>> > >> permanently delete the original e-mail communication and any
>> > >> attachments from all storage devices without making or otherwise
>> > >> retaining a copy.
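On the webhdfs point above: WebHDFS is a plain HTTP REST protocol, which is a large part of why it tends to survive mixed-version deployments better than the native RPC. As a rough illustration (not an endorsement from the thread), a WebHDFS call is just a URL of the form `/webhdfs/v1/<path>?op=...` against the NameNode's HTTP port. The sketch below builds such a URL with nothing but the JDK; the host, port, and path are placeholders (9870 is the Hadoop 3.x default NameNode HTTP port; 2.x defaulted to 50070):

```java
/**
 * Sketch: building a WebHDFS v1 REST URL by hand. WebHDFS being plain HTTP
 * is why a 3.x-era client can often talk to an older NameNode. Host, port,
 * and path are placeholders, not values from the thread.
 */
class WebHdfsUrl {

    /** Builds the URL for a LISTSTATUS call against the given NameNode. */
    static String listStatusUrl(String host, int port, String path) {
        if (!path.startsWith("/")) {
            throw new IllegalArgumentException("path must be absolute: " + path);
        }
        // WebHDFS REST layout: http://<nn-host>:<http-port>/webhdfs/v1<path>?op=<OP>
        return "http://" + host + ":" + port + "/webhdfs/v1" + path + "?op=LISTSTATUS";
    }
}
```

Any HTTP client can then issue a GET against the returned URL; the response is JSON, so no protobuf or Hadoop RPC version negotiation is involved.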
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/

[Oct 19, 2020 2:04:28 AM] (Akira Ajisaka) HADOOP-17309. Javadoc warnings and errors are ignored in the precommit jobs. (#2391)

-1 overall

The following subsystems voted -1:
    asflicense hadolint jshint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

   XML :

      Parsing Error(s):
      hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
      hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml
      hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
      hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
      hadoop-tools/hadoop-azure/src/config/checkstyle.xml
      hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

   Failed junit tests :

      hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
      hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
      hadoop.hdfs.TestRollingUpgrade
      hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
      hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
      hadoop.hdfs.server.federation.router.TestRouterQuota
      hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
      hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
      hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
      hadoop.yarn.server.resourcemanager.TestClientRMService
      hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore
      hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
      hadoop.resourceestimator.service.TestResourceEstimatorService
      hadoop.resourceestimator.solver.impl.TestLpSolver

   jshint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/diff-patch-jshint.txt [208K]

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/diff-compile-javac-root.txt [456K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/diff-patch-pylint.txt [60K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/diff-patch-shellcheck.txt [56K]

   shelldocs:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/whitespace-tabs.txt [1.3M]

   xml:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/xml.txt [4.0K]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/diff-javadoc-javadoc-root.txt [20K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [292K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [108K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/91/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt