Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/246/

[Jan 19, 2022 6:10:39 AM] (noreply) HDFS-16399. Reconfig cache report parameters for datanode (#3841)
[Jan 19, 2022 8:59:42 AM] (noreply) HDFS-16423. Balancer should not get blocks on stale storages (#3883)
[Jan 19, 2022 10:13:13 AM] (noreply) HADOOP-18084. ABFS: Add testfilePath while verifying test contents are read correctly (#3903)
[Jan 20, 2022 12:44:10 PM] (noreply) YARN-11065. Bump follow-redirects from 1.13.3 to 1.14.7 in hadoop-yarn-ui (#3890)

-1 overall

The following subsystems voted -1:
    blanks mvnsite pathlen spotbugs unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML : Parsing Error(s):

       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    spotbugs : module:hadoop-hdfs-project/hadoop-hdfs

       Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory). Redundant null check at DataStorage.java:[line 695]
       Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String). Redundant null check at MappableBlockLoader.java:[line 138]
       Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId). Redundant null check at MemoryMappableBlockLoader.java:[line 75]
       Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId). Redundant null check at NativePmemMappableBlockLoader.java:[line 85]
       Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String). Redundant null check at NativePmemMappableBlockLoader.java:[line 130]
       org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) At RollingWindowManager.java:[line 1]

    spotbugs : module:hadoop-yarn-project/hadoop-yarn

       Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState)
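The "Redundant nullcheck" findings above all share one shape: a reference is dereferenced (or otherwise proven non-null) and then null-checked afterwards, so the check can never fail. As a minimal illustration of the pattern spotbugs flags, with hypothetical names rather than the actual Hadoop code:

```java
// Hypothetical sketch of spotbugs' "redundant nullcheck of X, which is
// known to be non-null" finding; not the Hadoop code the report refers to.
public class RedundantNullcheckExample {

    static String describe(StringBuilder sb) {
        sb.append("x");       // sb is dereferenced here, so it is provably non-null below
        if (sb != null) {     // spotbugs: redundant null check; the branch is always taken
            return sb.toString();
        }
        return "";            // dead code as far as the analyzer can tell
    }

    public static void main(String[] args) {
        System.out.println(describe(new StringBuilder("a"))); // prints "ax"
    }
}
```

The usual fix is to drop the check, or to move it before the first dereference if null really is possible.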
Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2
+1 (binding)

- Built from source
- Brought up a non-secure virtual cluster w/ NN, 1 DN, RM, AHS, JHS, and 3 NMs
- Validated inter- and intra-queue preemption
- Validated exclusive node labels

Thanks a lot Chao for your diligence and hard work on this release.

Eric

On Wednesday, January 19, 2022, 11:50:34 AM CST, Chao Sun wrote:

Hi all,

I've put together Hadoop 3.3.2 RC2 below:

The RC is available at: http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
The RC tag is at: https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
The Maven artifacts are staged at: https://repository.apache.org/content/repositories/orgapachehadoop-1332

You can find my public key at: https://downloads.apache.org/hadoop/common/KEYS

I've done the following tests and they look good:
- Ran all the unit tests
- Started a single node HDFS cluster and tested a few simple commands
- Ran all the tests in Spark using the RC2 artifacts

Please evaluate the RC and vote, thanks!

Best,
Chao

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/

[Jan 20, 2022 12:44:10 PM] (noreply) YARN-11065. Bump follow-redirects from 1.13.3 to 1.14.7 in hadoop-yarn-ui (#3890)

-1 overall

The following subsystems voted -1:
    blanks pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML : Parsing Error(s):

       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    Failed junit tests :

       hadoop.yarn.csi.client.TestCsiClient

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/results-compile-cc-root.txt [96K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/results-compile-javac-root.txt [340K]

   blanks:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/blanks-eol.txt [13M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/blanks-tabs.txt [2.0M]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/results-checkstyle-root.txt [14M]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/results-pathlen.txt [16K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/results-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/results-shellcheck.txt [28K]

   xml:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/xml.txt [24K]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/results-javadoc-javadoc-root.txt [404K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/757/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt [20K]

Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org
Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2
Hmm interesting. Let me check on this error. Thanks Mukund.

Chao

On Fri, Jan 21, 2022 at 4:42 AM Mukund Madhav Thakur wrote:
> Checked out the release tag. commit *6da346a358c *
> Seeing below error while compiling:
>
> Duplicate classes found:
>
> Found in:
>   org.apache.hadoop:hadoop-client-api:jar:3.3.2:compile
>   org.apache.hadoop:hadoop-client-minicluster:jar:3.3.2:compile
> Duplicate classes:
>   org/apache/hadoop/io/serializer/avro/AvroRecord.class
>   org/apache/hadoop/io/serializer/avro/AvroRecord$Builder.class
>   org/apache/hadoop/io/serializer/avro/AvroRecord$1.class
>
> [INFO] ------------------------------------------------------------------------
> [INFO] Reactor Summary for Apache Hadoop Client Test Minicluster 3.3.2:
> [INFO]
> [INFO] Apache Hadoop Client Test Minicluster .............. SUCCESS [02:17 min]
> [INFO] Apache Hadoop Client Packaging Invariants for Test . FAILURE [  0.221 s]
> [INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
> [INFO] Apache Hadoop Distribution ......................... SKIPPED
> [INFO] Apache Hadoop Client Modules ....................... SKIPPED
> [INFO] Apache Hadoop Tencent COS Support .................. SKIPPED
> [INFO] Apache Hadoop Cloud Storage ........................ SKIPPED
> [INFO] Apache Hadoop Cloud Storage Project ................ SKIPPED
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 02:18 min
> [INFO] Finished at: 2022-01-21T18:06:11+05:30
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce (enforce-banned-dependencies) on project hadoop-client-check-test-invariants: Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed. -> [Help 1]
>
> On Fri, Jan 21, 2022 at 9:38 AM Wei-Chiu Chuang wrote:
> > I'll find time to check out the RC bits.
> > I just feel bad that the tarball is now more than 600MB in size.
> >
> > On Fri, Jan 21, 2022 at 2:23 AM Steve Loughran wrote:
> > > *+1 binding.*
> > >
> > > reviewed binaries, source, artifacts in the staging maven repository in downstream builds. all good.
> > >
> > > *## test run*
> > >
> > > checked out the asf github repo at commit 6da346a358c into a location already set up with aws and azure test credentials
> > >
> > > ran the hadoop-aws tests with -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=delete -Dscale
> > > and hadoop-azure against azure cardiff with -Dparallel-tests=abfs -DtestsThreadCount=6
> > >
> > > all happy
> > >
> > > *## binary*
> > > downloaded KEYS and imported, so adding your key to my list (also signed this and updated the key servers)
> > >
> > > downloaded rc tar and verified
> > > ```
> > > > gpg2 --verify hadoop-3.3.2.tar.gz.asc hadoop-3.3.2.tar.gz
> > > gpg: Signature made Sat Jan 15 23:41:10 2022 GMT
> > > gpg:                using RSA key DE7FA241EB298D027C97B2A1D8F1A97BE51ECA98
> > > gpg: Good signature from "Chao Sun (CODE SIGNING KEY) <sunc...@apache.org>" [full]
> > >
> > > > cat hadoop-3.3.2.tar.gz.sha512
> > > SHA512 (hadoop-3.3.2.tar.gz) = cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
> > >
> > > > shasum -a 512 hadoop-3.3.2.tar.gz
> > > cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d  hadoop-3.3.2.tar.gz
> > > ```
> > >
> > > *# cloudstore against staged artifacts*
> > > ```
> > > cd ~/.m2/repository/org/apache/hadoop
> > > find . -name \*3.3.2\* -print | xargs rm -r
> > > ```
> > > ensures no local builds have tainted the repo.
> > >
> > > in cloudstore mvn build without tests
> > > ```
> > > mci -Pextra -Phadoop-3.3.2 -Psnapshots-and-staging
> > > ```
> > > this fetches all from asf staging
> > >
> > > ```
> > > Downloading from ASF Staging: https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
> > > Downloaded from ASF Staging: https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom (11 kB at 20 kB/s)
> > > ```
> > > there's no tests there, but it did audit the download process. FWIW, that project has switched to logback, so I now have all hadoop imports excluding slf4j and log4j. it takes too much effort right now.
> > >
> > > build works.
> > >
> > > tested abfs and s3a storediags, all h
Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2
Checked out the release tag. commit *6da346a358c *
Seeing below error while compiling:

Duplicate classes found:

Found in:
  org.apache.hadoop:hadoop-client-api:jar:3.3.2:compile
  org.apache.hadoop:hadoop-client-minicluster:jar:3.3.2:compile
Duplicate classes:
  org/apache/hadoop/io/serializer/avro/AvroRecord.class
  org/apache/hadoop/io/serializer/avro/AvroRecord$Builder.class
  org/apache/hadoop/io/serializer/avro/AvroRecord$1.class

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Apache Hadoop Client Test Minicluster 3.3.2:
[INFO]
[INFO] Apache Hadoop Client Test Minicluster .............. SUCCESS [02:17 min]
[INFO] Apache Hadoop Client Packaging Invariants for Test . FAILURE [  0.221 s]
[INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
[INFO] Apache Hadoop Distribution ......................... SKIPPED
[INFO] Apache Hadoop Client Modules ....................... SKIPPED
[INFO] Apache Hadoop Tencent COS Support .................. SKIPPED
[INFO] Apache Hadoop Cloud Storage ........................ SKIPPED
[INFO] Apache Hadoop Cloud Storage Project ................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:18 min
[INFO] Finished at: 2022-01-21T18:06:11+05:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce (enforce-banned-dependencies) on project hadoop-client-check-test-invariants: Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed. -> [Help 1]

On Fri, Jan 21, 2022 at 9:38 AM Wei-Chiu Chuang wrote:
> I'll find time to check out the RC bits.
> I just feel bad that the tarball is now more than 600MB in size.
>
> On Fri, Jan 21, 2022 at 2:23 AM Steve Loughran wrote:
> > *+1 binding.*
> >
> > reviewed binaries, source, artifacts in the staging maven repository in downstream builds. all good.
> >
> > *## test run*
> >
> > checked out the asf github repo at commit 6da346a358c into a location already set up with aws and azure test credentials
> >
> > ran the hadoop-aws tests with -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=delete -Dscale
> > and hadoop-azure against azure cardiff with -Dparallel-tests=abfs -DtestsThreadCount=6
> >
> > all happy
> >
> > *## binary*
> > downloaded KEYS and imported, so adding your key to my list (also signed this and updated the key servers)
> >
> > downloaded rc tar and verified
> > ```
> > > gpg2 --verify hadoop-3.3.2.tar.gz.asc hadoop-3.3.2.tar.gz
> > gpg: Signature made Sat Jan 15 23:41:10 2022 GMT
> > gpg:                using RSA key DE7FA241EB298D027C97B2A1D8F1A97BE51ECA98
> > gpg: Good signature from "Chao Sun (CODE SIGNING KEY) <sunc...@apache.org>" [full]
> >
> > > cat hadoop-3.3.2.tar.gz.sha512
> > SHA512 (hadoop-3.3.2.tar.gz) = cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
> >
> > > shasum -a 512 hadoop-3.3.2.tar.gz
> > cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d  hadoop-3.3.2.tar.gz
> > ```
> >
> > *# cloudstore against staged artifacts*
> > ```
> > cd ~/.m2/repository/org/apache/hadoop
> > find . -name \*3.3.2\* -print | xargs rm -r
> > ```
> > ensures no local builds have tainted the repo.
> >
> > in cloudstore mvn build without tests
> > ```
> > mci -Pextra -Phadoop-3.3.2 -Psnapshots-and-staging
> > ```
> > this fetches all from asf staging
> >
> > ```
> > Downloading from ASF Staging: https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
> > Downloaded from ASF Staging: https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom (11 kB at 20 kB/s)
> > ```
> > there's no tests there, but it did audit the download process. FWIW, that project has switched to logback, so I now have all hadoop imports excluding slf4j and log4j. it takes too much effort right now.
> >
> > build works.
> >
> > tested abfs and s3a storediags, all happy
> >
> > *### google GCS against staged artifacts*
> >
> > gcs is now java 11 only, so I had to switch JVMs here.
> >
> > had to add a snapshots and staging profile, after which I could build and test.
> >
> > ```
> > -Dhadoop.three.version=3.3.2 -Psnapshots-and-staging
> > ```
> > two test failures were related to auth failures where the tests were trying to raise exceptions but things failed differen
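The "Duplicate classes found" failure reported above is the enforcer plugin rejecting the same class file appearing in two artifacts on the classpath (here, hadoop-client-api and hadoop-client-minicluster both shading AvroRecord). As a rough sketch of how such a check is typically wired up (assuming the banDuplicateClasses rule from org.codehaus.mojo's extra-enforcer-rules; the exact Hadoop pom and versions may differ):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>3.0.0-M1</version>
  <dependencies>
    <dependency>
      <!-- supplies the banDuplicateClasses rule; version is illustrative -->
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>extra-enforcer-rules</artifactId>
      <version>1.5.1</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>enforce-banned-dependencies</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <banDuplicateClasses>
            <!-- report every duplicate rather than failing on the first -->
            <findAllDuplicates>true</findAllDuplicates>
          </banDuplicateClasses>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a rule like this in place, any class packaged into more than one of the checked jars fails the build, which is why the duplicated AvroRecord classes abort the reactor at the "Packaging Invariants" module.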
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    Failed junit tests :

       hadoop.fs.TestTrash
       hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
       hadoop.fs.TestFileUtil
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
       hadoop.hdfs.server.federation.router.TestRouterQuota
       hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
       hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
       hadoop.yarn.server.resourcemanager.TestClientRMService
       hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
       hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
       hadoop.mapreduce.lib.input.TestLineRecordReader
       hadoop.mapred.TestLineRecordReader
       hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
       hadoop.tools.TestDistCpSystem
       hadoop.yarn.sls.TestSLSRunner
       hadoop.resourceestimator.solver.impl.TestLpSolver
       hadoop.resourceestimator.service.TestResourceEstimatorService

   cc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/diff-compile-javac-root.txt [476K]

   checkstyle:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/diff-checkstyle-root.txt [14M]

   hadolint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-mvnsite-root.txt [560K]

   pathlen:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/pathlen.txt [12K]

   pylint:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/diff-patch-shellcheck.txt [72K]

   whitespace:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-javadoc-root.txt [40K]

   unit:
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [224K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [424K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [112K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/549/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt [28K]
      https://ci-hadoop.apache.org/job/hadoop-q
[jira] [Created] (HADOOP-18089) Test coverage for Async profiler servlets
Viraj Jasani created HADOOP-18089:
-------------------------------------

             Summary: Test coverage for Async profiler servlets
                 Key: HADOOP-18089
                 URL: https://issues.apache.org/jira/browse/HADOOP-18089
             Project: Hadoop Common
          Issue Type: Test
            Reporter: Viraj Jasani
            Assignee: Viraj Jasani

As discussed in HADOOP-18077, we should provide sufficient test coverage to discover any potential regression in async profiler servlets: ProfileServlet and ProfileOutputServlet.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)