Re: VOTE: Hadoop Ozone 0.4.0-alpha RC2

2019-05-07 Thread Hanisha Koneru
Thanks Ajay for putting up the RC.

+1 non-binding.
Verified the following:
- Built from source
- Deployed the binary to a 3-node Docker cluster and ran basic sanity checks (a 
rough sketch of such a check follows below)
- Ran smoke tests
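
For reference, a basic put/read-back sanity check of the kind described above could 
look roughly like the Java sketch below. The o3fs URI, volume and bucket names, and 
the fs.o3fs.impl setting are illustrative assumptions for a test cluster, not values 
taken from this thread.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OzoneSanityCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed mapping of the o3fs scheme to the Ozone FileSystem implementation.
        conf.set("fs.o3fs.impl", "org.apache.hadoop.fs.ozone.OzoneFileSystem");
        // Hypothetical bucket "bucket1" in volume "vol1" on an OM at om-host:9862.
        URI uri = URI.create("o3fs://bucket1.vol1.om-host:9862/");
        try (FileSystem fs = FileSystem.get(uri, conf)) {
          Path p = new Path("/sanity/hello.txt");
          try (OutputStream out = fs.create(p, true)) {
            out.write("hello ozone".getBytes("UTF-8"));   // write a small key
          }
          byte[] buf = new byte[32];
          int n;
          try (InputStream in = fs.open(p)) {
            n = in.read(buf);                              // read it back
          }
          System.out.println("read back: " + new String(buf, 0, n, "UTF-8"));
          fs.delete(p, false);                             // clean up
        }
      }
    }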

Thanks
Hanisha

> On Apr 29, 2019, at 9:04 PM, Ajay Kumar  wrote:
> 
> Hi All,
> 
> 
> 
> We have created the third release candidate (RC2) for Apache Hadoop Ozone 
> 0.4.0-alpha.
> 
> 
> 
> This release contains the security payload for Ozone. Below are some of the 
> important features in it:
> 
> 
> 
>  *   Hadoop Delegation Tokens and Block Tokens supported for Ozone.
>  *   Transparent Data Encryption (TDE) Support - Allows data blocks to be 
> encrypted-at-rest.
>  *   Kerberos support for Ozone.
>  *   Certificate Infrastructure for Ozone - Tokens use PKI instead of shared 
> secrets.
>  *   Datanode to Datanode communication secured via mutual TLS.
>  *   Ability to secure an Ozone cluster that works with YARN, Hive, and Spark.
>  *   Skaffold support to deploy Ozone clusters on K8s.
>  *   Support for S3 authentication mechanisms such as the S3 v4 authentication 
> protocol.
>  *   S3 Gateway supports multipart upload.
>  *   S3A file system is tested and supported (a short usage sketch follows this 
> list).
>  *   Support for Tracing and Profiling for all Ozone components.
>  *   Audit Support - including Audit Parser tools.
>  *   Apache Ranger Support in Ozone.
>  *   Extensive failure testing for Ozone.
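> 
> As an illustration of the S3 Gateway and S3A items above, a minimal S3A check 
> against a local gateway might look like the sketch below. The endpoint (the S3 
> Gateway default port is assumed to be 9878), bucket name, and credentials are 
> placeholder assumptions for a test setup, not values published with this RC.
> 
>     import java.io.OutputStream;
>     import java.net.URI;
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
> 
>     public class S3aGatewayCheck {
>       public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         // Point the S3A connector at a (hypothetical) local Ozone S3 Gateway.
>         conf.set("fs.s3a.endpoint", "http://localhost:9878");
>         conf.set("fs.s3a.path.style.access", "true");
>         // Placeholder credentials for an illustrative test setup.
>         conf.set("fs.s3a.access.key", "testuser");
>         conf.set("fs.s3a.secret.key", "testsecret");
>         try (FileSystem fs = FileSystem.get(URI.create("s3a://bucket1/"), conf)) {
>           Path p = new Path("s3a://bucket1/smoke/hello.txt");
>           try (OutputStream out = fs.create(p, true)) {
>             out.write("hello from s3a".getBytes("UTF-8"));
>           }
>           System.out.println("size = " + fs.getFileStatus(p).getLen());
>           fs.delete(p, false);
>         }
>       }
>     }
> 
> Writing a file larger than fs.s3a.multipart.size through this path would also 
> exercise the gateway's multipart upload support.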
> 
> The RC artifacts are available at 
> https://home.apache.org/~ajay/ozone-0.4.0-alpha-rc2/
> 
> 
> 
> The RC tag in git is ozone-0.4.0-alpha-RC2 (git hash 
> 4ea602c1ee7b5e1a5560c6cbd096de4b140f776b)
> 
> 
> 
> Please try it out 
> <https://cwiki.apache.org/confluence/display/HADOOP/Running+via+Apache+Release>,
> vote, or just give us feedback.
> 
> 
> 
> The vote will run for 5 days, ending on May 4, 2019, 04:00 UTC.
> 
> 
> 
> Thank you very much,
> 
> Ajay





Re: VOTE: Hadoop Ozone 0.4.0-alpha RC2

2019-05-07 Thread Ajay Kumar
Thanks for trying out the ozone-0.4.0 RC2 release artifacts and for your votes. 
(Re-sending this mail to make a minor correction in the list of binding +1s.)

The vote is PASSED with the following details:
*  3 binding +1s (thanks Arpit, Xiaoyu & Anu)
*  4 non-binding +1s (thanks Eric, Dinesh, Marton), together with my closing +1 
[*]
*  no -1/0

Thanks again, will publish the release now.

Note [*]: We have addressed the issue highlighted by Xiaoyu by renaming 
"hadoop-3.3.0-SNAPSHOT-src-with-hdds" to "hadoop-ozone-0.4.0-alpha-src". 
The git hash remains the same, and the MD5 checksum has been updated for the 
renamed file. The release documentation for Ozone has been updated to handle this 
in future releases.

Ajay

On 5/7/19, 9:13 AM, "Ajay Kumar"  wrote:

Thanks for trying out the ozone-0.4.0 RC2 release artifacts and for your votes.

The vote is PASSED with the following details:
*  3 binding +1s (thanks Arpit, Xiaoyu & Jitendra)
*  4 non-binding +1s (thanks Eric, Dinesh, Marton), together with my closing 
+1 [*]
*  no -1/0

Thanks again, will publish the release now.

Note [*]: We have addressed the issue highlighted by Xiaoyu by renaming 
"hadoop-3.3.0-SNAPSHOT-src-with-hdds" to "hadoop-ozone-0.4.0-alpha-src". 
The git hash remains the same, and the MD5 checksum has been updated for the 
renamed file. The release documentation for Ozone has been updated to handle this 
in future releases.

Ajay

On 5/5/19, 8:05 PM, "Xiaoyu Yao"  wrote:

+1 (binding). Thanks to all who contributed to the release. 

+ Downloaded the sources and verified the signature.
+ Built from source and ran docker-based ad-hoc security tests.
++ Scaled from 1 datanode to 3 datanodes and verified that certificates were 
correctly issued with security enabled.
++ Smoke tests for both non-secure and secure mode.
++ Put/Get/Delete/Rename key with:
+++ Kerberos testing
+++ Delegation token testing with the DTUtil CLI and MR jobs (a rough sketch of 
this flow follows the list).
+++ S3 token.
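
For context, the Kerberos login plus delegation-token fetch exercised above can be 
sketched roughly as below. The principal, keytab path, and o3fs URI are hypothetical 
values for a secure test cluster; the UserGroupInformation and FileSystem calls are 
standard Hadoop security APIs.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.Credentials;
    import org.apache.hadoop.security.UserGroupInformation;
    import org.apache.hadoop.security.token.Token;

    public class OzoneTokenCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Hypothetical principal and keytab for a secure test cluster.
        UserGroupInformation.loginUserFromKeytab(
            "testuser@EXAMPLE.COM", "/etc/security/keytabs/testuser.keytab");

        // Hypothetical Ozone filesystem URI (bucket "bucket1" in volume "vol1").
        conf.set("fs.o3fs.impl", "org.apache.hadoop.fs.ozone.OzoneFileSystem");
        FileSystem fs =
            FileSystem.get(URI.create("o3fs://bucket1.vol1.om-host:9862/"), conf);

        // Fetch delegation tokens for a renewer, much as an MR job submission would.
        Credentials creds = new Credentials();
        Token<?>[] tokens = fs.addDelegationTokens("yarn", creds);
        for (Token<?> t : tokens) {
          System.out.println("token kind=" + t.getKind() + " service=" + t.getService());
        }
        fs.close();
      }
    }

The same tokens can also be fetched and printed from the command line with the 
hadoop dtutil command mentioned above.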

I just have one minor question about the expanded source code, which points 
to hadoop-3.3.0-SNAPSHOT-src-with-hdds/hadoop-ozone. But in 
hadoop-ozone/pom.xml, we explicitly declare a dependency on Hadoop 3.2.0.
I understand we just take the trunk source code (3.3.0-SNAPSHOT up to 
the ozone-0.4 RC) here; should we fix this by giving the git hash of trunk, 
or clarify it to avoid confusion? 
This might be done by just updating the name of the binaries without 
resetting the release itself. 

-Xiaoyu
 

On 5/3/19, 4:07 PM, "Dinesh Chitlangia"  
wrote:

+1 (non-binding)

- Built from sources and ran smoke test
- Verified all checksums (a rough way to script this check is sketched below)
- Toggled audit log and verified audit parser tool
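
As a side note, one way the checksum verification mentioned above might be scripted 
is the small Java sketch below: it recomputes a digest for a downloaded artifact and 
compares it to the published value. The file name and expected digest are 
placeholders, and the algorithm can be switched to MD5 to match the .md5 checksum 
discussed elsewhere in this thread.

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class ChecksumCheck {
      public static void main(String[] args) throws Exception {
        // Illustrative usage: java ChecksumCheck <artifact-file> <expected-hex-digest>
        String file = args[0];
        String expected = args[1].toLowerCase();
        // SHA-512 here; "MD5" works the same way for an .md5 value.
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        try (InputStream in = Files.newInputStream(Paths.get(file))) {
          byte[] buf = new byte[8192];
          int n;
          while ((n = in.read(buf)) > 0) {
            md.update(buf, 0, n);   // stream the file through the digest
          }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
          hex.append(String.format("%02x", b));
        }
        System.out.println(hex.toString().equals(expected) ? "checksum OK" : "checksum MISMATCH");
      }
    }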

Thanks Ajay for organizing the release.

Cheers,
Dinesh



On 5/3/19, 5:42 PM, "Eric Yang"  wrote:

+1

On 4/29/19, 9:05 PM, "Ajay Kumar"  
wrote:

Hi All,



We have created the third release candidate (RC2) for 
Apache Hadoop Ozone 0.4.0-alpha.



This release contains the security payload for Ozone. Below are 
some of the important features in it:



  *   Hadoop Delegation Tokens and Block Tokens supported 
for Ozone.
  *   Transparent Data Encryption (TDE) Support - Allows 
data blocks to be encrypted-at-rest.
  *   Kerberos support for Ozone.
  *   Certificate Infrastructure for Ozone  - Tokens use 
PKI instead of shared secrets.
  *   Datanode to Datanode communication secured via mutual 
TLS.
  *   Ability to secure an Ozone cluster that works with YARN, 
Hive, and Spark.
  *   Skaffold support to deploy Ozone clusters on K8s.
  *   Support for S3 authentication mechanisms such as the S3 v4 
authentication protocol.
  *   S3 Gateway supports Multipart upload.
  *   S3A file system is tested and supported.
  *   Support for Tracing and Profiling for all Ozone 
components.
  *   Audit Support - including Audit Parser tools.
  *   Apache Ranger Support in Ozone.
  *   Extensive failure testing for Ozone.

The RC artifacts are available at 
https://home.apache.org/~ajay/ozone-0.4.0-alpha-rc2/



The RC tag in git is ozone-0.4.0-alpha-RC2 (git hash 
4ea602c1ee7b5e1a5560c6cbd096de4b140f776b)


Re: VOTE: Hadoop Ozone 0.4.0-alpha RC2

2019-05-07 Thread Anu Engineer
+1 (Binding)

-- Built from sources.
-- Ran smoke tests and verified them.

--Anu


On Sun, May 5, 2019 at 8:05 PM Xiaoyu Yao  wrote:

> +1 (binding). Thanks to all who contributed to the release.
>
> + Downloaded the sources and verified the signature.
> + Built from source and ran docker-based ad-hoc security tests.
> ++ Scaled from 1 datanode to 3 datanodes and verified that certificates were
> correctly issued with security enabled.
> ++ Smoke tests for both non-secure and secure mode.
> ++ Put/Get/Delete/Rename key with:
> +++ Kerberos testing
> +++ Delegation token testing with the DTUtil CLI and MR jobs.
> +++ S3 token.
>
> I just have one minor question about the expanded source code, which points to
> hadoop-3.3.0-SNAPSHOT-src-with-hdds/hadoop-ozone. But in
> hadoop-ozone/pom.xml, we explicitly declare a dependency on Hadoop 3.2.0.
> I understand we just take the trunk source code (3.3.0-SNAPSHOT up to the
> ozone-0.4 RC) here; should we fix this by giving the git hash of trunk,
> or clarify it to avoid confusion?
> This might be done by just updating the name of the binaries without resetting
> the release itself.
>
> -Xiaoyu
>
>
> On 5/3/19, 4:07 PM, "Dinesh Chitlangia" 
> wrote:
>
> +1 (non-binding)
>
> - Built from sources and ran smoke test
> - Verified all checksums
> - Toggled audit log and verified audit parser tool
>
> Thanks Ajay for organizing the release.
>
> Cheers,
> Dinesh
>
>
>
> On 5/3/19, 5:42 PM, "Eric Yang"  wrote:
>
> +1
>
> On 4/29/19, 9:05 PM, "Ajay Kumar" 
> wrote:
>
> Hi All,
>
>
>
> We have created the third release candidate (RC2) for Apache
> Hadoop Ozone 0.4.0-alpha.
>
>
>
> This release contains the security payload for Ozone. Below are
> some of the important features in it:
>
>
>
>   *   Hadoop Delegation Tokens and Block Tokens supported for
> Ozone.
>   *   Transparent Data Encryption (TDE) Support - Allows data
> blocks to be encrypted-at-rest.
>   *   Kerberos support for Ozone.
>   *   Certificate Infrastructure for Ozone  - Tokens use PKI
> instead of shared secrets.
>   *   Datanode to Datanode communication secured via mutual
> TLS.
>   *   Ability to secure an Ozone cluster that works with YARN, Hive,
> and Spark.
>   *   Skaffold support to deploy Ozone clusters on K8s.
>   *   Support for S3 authentication mechanisms such as the S3 v4
> authentication protocol.
>   *   S3 Gateway supports Multipart upload.
>   *   S3A file system is tested and supported.
>   *   Support for Tracing and Profiling for all Ozone
> components.
>   *   Audit Support - including Audit Parser tools.
>   *   Apache Ranger Support in Ozone.
>   *   Extensive failure testing for Ozone.
>
> The RC artifacts are available at
> https://home.apache.org/~ajay/ozone-0.4.0-alpha-rc2/
>
>
>
> The RC tag in git is ozone-0.4.0-alpha-RC2 (git hash
> 4ea602c1ee7b5e1a5560c6cbd096de4b140f776b)
>
>
>
> Please try it out
> <https://cwiki.apache.org/confluence/display/HADOOP/Running+via+Apache+Release>,
> vote, or just give us feedback.
>
>
>
> The vote will run for 5 days, ending on May 4, 2019, 04:00 UTC.
>
>
>
> Thank you very much,
>
> Ajay
>
>
>
>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-05-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/

[May 6, 2019 12:00:15 PM] (wwei) YARN-9440. Improve diagnostics for scheduler 
and app activities.
[May 6, 2019 6:17:00 PM] (elek) Revert "HDDS-1384. 
TestBlockOutputStreamWithFailures is failing"
[May 6, 2019 6:56:22 PM] (haibochen) YARN-9529. Log correct cpu controller path 
on error while initializing
[May 6, 2019 10:47:33 PM] (weichiu) HADOOP-16289. Allow extra jsvc startup 
option in
[May 6, 2019 11:48:45 PM] (eyang) YARN-9524.  Fixed TestAHSWebService and 
TestLogsCLI unit tests. 




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed junit tests :

   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 
   hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException 
   hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus 
   hadoop.fs.ozone.contract.ITestOzoneContractRename 
   hadoop.fs.ozone.contract.ITestOzoneContractRootDir 
   hadoop.fs.ozone.contract.ITestOzoneContractMkdir 
   hadoop.fs.ozone.contract.ITestOzoneContractSeek 
   hadoop.fs.ozone.contract.ITestOzoneContractOpen 
   hadoop.fs.ozone.contract.ITestOzoneContractDelete 
   hadoop.fs.ozone.contract.ITestOzoneContractCreate 
   hadoop.ozone.freon.TestFreonWithDatanodeFastRestart 
   hadoop.ozone.freon.TestRandomKeyGenerator 
   hadoop.ozone.freon.TestFreonWithPipelineDestroy 
   hadoop.ozone.freon.TestFreonWithDatanodeRestart 
   hadoop.ozone.freon.TestDataValidateWithUnsafeByteOperations 
   hadoop.ozone.fsck.TestContainerMapper 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/diff-patch-pylint.txt
  [84K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1129/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-05-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/

[May 6, 2019 7:01:26 PM] (haibochen) YARN-9529. Log correct cpu controller path 
on error while initializing




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.tools.TestDistCpSystem 
   hadoop.tools.TestFileBasedCopyListing 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/314/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-br