[jira] [Created] (HADOOP-16054) Upgrade Dockerfile to use Ubuntu bionic

2019-01-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16054:
--

 Summary: Upgrade Dockerfile to use Ubuntu bionic
 Key: HADOOP-16054
 URL: https://issues.apache.org/jira/browse/HADOOP-16054
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Reporter: Akira Ajisaka


Ubuntu Xenial reaches EoL in April 2021. Let's upgrade before that date.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-11636) Several tests are not stable (OpenJDK - Ubuntu - x86_64) V2.6.0

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-11636.

Resolution: Won't Fix

Hadoop 2.6.x is EoL. Closing.

> Several tests are not stable (OpenJDK - Ubuntu - x86_64) V2.6.0
> ---
>
> Key: HADOOP-11636
> URL: https://issues.apache.org/jira/browse/HADOOP-11636
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: OpenJDK 1.7 - Ubuntu - x86_64
>Reporter: Tony Reix
>Priority: Major
>
> I've run all the Hadoop 2.6.0 tests many times (16 so far).
> Using a tool, I can see that 30 tests are unstable.
> Unstable means that the result (number of failures and errors) varies between runs.






[jira] [Resolved] (HADOOP-14336) Cleanup findbugs warnings found by Spotbugs

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-14336.

  Resolution: Done
Target Version/s:   (was: 3.3.0)

All the sub-tasks were closed.

> Cleanup findbugs warnings found by Spotbugs
> ---
>
> Key: HADOOP-14336
> URL: https://issues.apache.org/jira/browse/HADOOP-14336
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
>
> HADOOP-14316 switched from Findbugs to Spotbugs and there are now about 60 
> warnings. Let's fix them.






[jira] [Created] (HADOOP-16053) Backport HADOOP-14816 to branch-2

2019-01-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16053:
--

 Summary: Backport HADOOP-14816 to branch-2
 Key: HADOOP-16053
 URL: https://issues.apache.org/jira/browse/HADOOP-16053
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


Ubuntu Trusty reaches EoL in April 2019; let's upgrade before then.






[jira] [Created] (HADOOP-16052) Remove Forrest from Dockerfile

2019-01-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16052:
--

 Summary: Remove Forrest from Dockerfile
 Key: HADOOP-16052
 URL: https://issues.apache.org/jira/browse/HADOOP-16052
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


After HADOOP-14613, the Apache Hadoop website is generated by Hugo, so Forrest
can be removed.






[jira] [Resolved] (HADOOP-16051) branch-2 build is failing by npm error

2019-01-17 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16051.

Resolution: Duplicate

Dup of HADOOP-15617. Closing.

> branch-2 build is failing by npm error
> --
>
> Key: HADOOP-16051
> URL: https://issues.apache.org/jira/browse/HADOOP-16051
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> All branch-2 builds are failing with docker problems right now
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15791/console
> {noformat}
> npm ERR! TypeError: Cannot read property 'latest' of undefined
> npm ERR! at next (/usr/share/npm/lib/cache.js:687:35)
> npm ERR! at /usr/share/npm/lib/cache.js:675:5
> npm ERR! at saved 
> (/usr/share/npm/node_modules/npm-registry-client/lib/get.js:142:7)
> npm ERR! at /usr/lib/nodejs/graceful-fs/polyfills.js:133:7
> npm ERR! at Object.oncomplete (fs.js:107:15)
> npm ERR! If you need help, you may report this log at:
> npm ERR! 
> npm ERR! or email it to:
> npm ERR! 
> npm ERR! System Linux 4.4.0-138-generic
> npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "ember-cli"
> npm ERR! cwd /root
> npm ERR! node -v v0.10.25
> npm ERR! npm -v 1.3.10
> npm ERR! type non_object_property_load
> {noformat}
> Reported by [~ste...@apache.org].






Re: Jenkins builds of branch-2 failing with docker, npm

2019-01-17 Thread Akira Ajisaka
I found this issue was closed by HADOOP-15617.
Sorry for spamming.

Jan 18, 2019 (Fri) 10:30 Akira Ajisaka :
>
> Filed https://issues.apache.org/jira/browse/HADOOP-16051 for tracking
> this issue.
> Thanks Steve for the report.
>
> -Akira
>
> Jan 16, 2019 (Wed) 21:22 Steve Loughran :
> >
> >
> > All branch-2 builds are failing with docker problems right now
> >
> > https://builds.apache.org/job/PreCommit-HADOOP-Build/15791/console
> >
> > no obvious clue to me as to what is wrong; that npm install is working for 
> > me locally




Re: Jenkins builds of branch-2 failing with docker, npm

2019-01-17 Thread Akira Ajisaka
Filed https://issues.apache.org/jira/browse/HADOOP-16051 for tracking
this issue.
Thanks Steve for the report.

-Akira

Jan 16, 2019 (Wed) 21:22 Steve Loughran :
>
>
> All branch-2 builds are failing with docker problems right now
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/15791/console
>
> no obvious clue to me as to what is wrong; that npm install is working for me 
> locally




[jira] [Created] (HADOOP-16051) branch-2 build is failing by npm error

2019-01-17 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16051:
--

 Summary: branch-2 build is failing by npm error
 Key: HADOOP-16051
 URL: https://issues.apache.org/jira/browse/HADOOP-16051
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


All branch-2 builds are failing with docker problems right now
https://builds.apache.org/job/PreCommit-HADOOP-Build/15791/console

{noformat}
npm ERR! TypeError: Cannot read property 'latest' of undefined
npm ERR! at next (/usr/share/npm/lib/cache.js:687:35)
npm ERR! at /usr/share/npm/lib/cache.js:675:5
npm ERR! at saved 
(/usr/share/npm/node_modules/npm-registry-client/lib/get.js:142:7)
npm ERR! at /usr/lib/nodejs/graceful-fs/polyfills.js:133:7
npm ERR! at Object.oncomplete (fs.js:107:15)
npm ERR! If you need help, you may report this log at:
npm ERR! 
npm ERR! or email it to:
npm ERR! 

npm ERR! System Linux 4.4.0-138-generic
npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "ember-cli"
npm ERR! cwd /root
npm ERR! node -v v0.10.25
npm ERR! npm -v 1.3.10
npm ERR! type non_object_property_load
{noformat}

Reported by [~ste...@apache.org].






Re: [VOTE] - HDDS-4 Branch merge

2019-01-17 Thread Arpit Agarwal
+1

It's great to see Ozone security being merged.


On 2019/01/11, 7:40 AM, "Anu Engineer"  wrote:

Since I have not heard any concerns, I will start a VOTE thread now.
This vote will run for 7 days and will end on Jan/18/2019 @ 8:00 AM PST.

I will start with my vote, +1 (Binding)

Thanks
Anu


-- Forwarded message -
From: Anu Engineer 
Date: Mon, Jan 7, 2019 at 5:10 PM
Subject: [Discuss] - HDDS-4 Branch merge
To: , 


Hi All,

I would like to propose a merge of HDDS-4 branch to the Hadoop trunk.
HDDS-4 branch implements the security work for HDDS and Ozone.

HDDS-4 branch contains the following features:
- Hadoop Kerberos and Tokens support
- A Certificate infrastructure used by Ozone and HDDS.
- Audit Logging and parsing support (Spread across trunk and HDDS-4)
- S3 Security Support - AWS Signature Support.
- Apache Ranger Support for Ozone

I will follow up with a formal vote later this week if I hear no
objections. AFAIK, the changes are isolated to HDDS/Ozone and should not
impact any other Hadoop project.

Thanks
Anu




Re: Mentorship opportunity for aspiring Hadoop developer

2019-01-17 Thread Aaron Fabbri
(list moved to bcc:)

Thank you for all the responses to my email. There is more interest than I
expected!

I've created a small survey to collect details from people interested in
being mentored.  Please fill it out if you are interested.
https://goo.gl/forms/oRfOUe1nYhrZNX3u2

I also want to set expectations: based on the large response, I won't be
able to work with everyone. I'll do my best to keep you updated via direct
email.

I am also looking into Google Summer of Code, which may interest you as
well.

Thank you,
Aaron


[jira] [Created] (HADOOP-16050) Support setting cipher suites for s3a file system

2019-01-17 Thread Justin Uang (JIRA)
Justin Uang created HADOOP-16050:


 Summary: Support setting cipher suites for s3a file system
 Key: HADOOP-16050
 URL: https://issues.apache.org/jira/browse/HADOOP-16050
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.9.1
Reporter: Justin Uang
 Attachments: Screen Shot 2019-01-17 at 2.57.06 PM.png

We have found that when running the S3AFileSystem, it picks GCM as the SSL
cipher suite. Unfortunately, GCM is well known to be slow on Java 8:
[https://stackoverflow.com/questions/25992131/slow-aes-gcm-encryption-and-decryption-with-java-8u20.]

In practice we have seen that it can take well over 50% of our CPU time in
Spark workflows. We should add an option to set the list of cipher suites we
would like to use. !Screen Shot 2019-01-17 at 2.57.06 PM.png!
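At the JSSE layer, narrowing the negotiated suites looks roughly like the sketch below. This is a plain-JDK illustration only, not an existing S3A/Hadoop configuration option (the option this issue proposes does not exist yet), and the suite names are examples that the TLS peer would also have to support:

```java
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class CipherSuiteSketch {
    // Example allow-list favoring CBC suites over the slow Java 8 GCM ones;
    // whatever is listed here must also be supported by the peer.
    static final String[] CBC_SUITES = {
        "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
        "TLS_RSA_WITH_AES_128_CBC_SHA256"
    };

    // Returns the default TLS parameters narrowed to the given suites.
    static SSLParameters withSuites(String[] suites) throws Exception {
        SSLParameters params = SSLContext.getDefault().getDefaultSSLParameters();
        params.setCipherSuites(suites); // takes effect on a socket/engine using these params
        return params;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Arrays.toString(withSuites(CBC_SUITES).getCipherSuites()));
    }
}
```

An SSLSocket or SSLEngine configured with these parameters would then negotiate only the listed suites, avoiding the GCM implementation.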






Re: [VOTE] - HDDS-4 Branch merge

2019-01-17 Thread Shashikant Banerjee
+1

Thanks,
Shashi

On 1/16/19, 10:25 AM, "Mukul Kumar Singh"  wrote:

+1

Thanks,
Mukul

On 1/13/19, 11:06 PM, "Xiaoyu Yao"  wrote:

+1 (binding), these will be useful features for enterprise adoption of 
Ozone/Hdds.

Thanks,
Xiaoyu

On 1/12/19, 3:43 AM, "Lokesh Jain"  wrote:

+1 (non-binding)

Thanks
Lokesh

> On 12-Jan-2019, at 3:00 PM, Sandeep Nemuri  
wrote:
> 
> +1 (non-binding)
> 
> On Sat, Jan 12, 2019 at 8:49 AM Bharat Viswanadham <
> bviswanad...@hortonworks.com 
> wrote:
> 
>> +1 (binding)
>> 
>> 
>> Thanks,
>> Bharat
>> 
>> 
>> On 1/11/19, 11:04 AM, "Hanisha Koneru"  
wrote:
>> 
>>+1 (binding)
>> 
>>Thanks,
>>Hanisha
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>On 1/11/19, 7:40 AM, "Anu Engineer"  
wrote:
>> 
>>> Since I have not heard any concerns, I will start a VOTE thread 
now.
>>> This vote will run for 7 days and will end on Jan/18/2019 @ 
8:00 AM
>> PST.
>>> 
>>> I will start with my vote, +1 (Binding)
>>> 
>>> Thanks
>>> Anu
>>> 
>>> 
>>> -- Forwarded message -
>>> From: Anu Engineer 
>>> Date: Mon, Jan 7, 2019 at 5:10 PM
>>> Subject: [Discuss] - HDDS-4 Branch merge
>>> To: , 
>>> 
>>> 
>>> Hi All,
>>> 
>>> I would like to propose a merge of HDDS-4 branch to the Hadoop 
trunk.
>>> HDDS-4 branch implements the security work for HDDS and Ozone.
>>> 
>>> HDDS-4 branch contains the following features:
>>>   - Hadoop Kerberos and Tokens support
>>>   - A Certificate infrastructure used by Ozone and HDDS.
>>>   - Audit Logging and parsing support (Spread across trunk and
>> HDDS-4)
>>>   - S3 Security Support - AWS Signature Support.
>>>   - Apache Ranger Support for Ozone
>>> 
>>> I will follow up with a formal vote later this week if I hear no
>>> objections. AFAIK, the changes are isolated to HDDS/Ozone and 
should
>> not
>>> impact any other Hadoop project.
>>> 
>>> Thanks
>>> Anu
>> 
>>
-
>>To unsubscribe, e-mail: 
common-dev-unsubscr...@hadoop.apache.org
>>For additional commands, e-mail: 
common-dev-h...@hadoop.apache.org
>> 
>> 
>> 
> 
> -- 
> *  Regards*
> *  Sandeep Nemuri*






Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-17 Thread Wangda Tan
Spent several more hours trying to figure out the issue, still no luck.

I just filed https://issues.sonatype.org/browse/OSSRH-45646; I would really
appreciate it if anybody could add some suggestions.

Thanks,
Wangda

On Tue, Jan 15, 2019 at 9:48 AM Wangda Tan  wrote:

> It seems the problem still exists for me:
>
> Now the error message only contains:
>
> failureMessage  Failed to validate the pgp signature of
> '/org/apache/hadoop/hadoop-client-check-invariants/3.1.2/hadoop-client-check-invariants-3.1.2.pom',
> check the logs.
> failureMessage  Failed to validate the pgp signature of
> '/org/apache/hadoop/hadoop-resourceestimator/3.1.2/hadoop-resourceestimator-3.1.2-javadoc.jar',
> check the logs.
>
> If anybody has access to the Nexus node, could you please help to check what
> the failure message is?
>
> Thanks,
> Wangda
>
>
> On Tue, Jan 15, 2019 at 9:56 AM Brian Fox  wrote:
>
>> Good to know. The pool has occasionally had sync issues, but we're
>> talking 3 times in the last 8-9 years.
>>
>> On Tue, Jan 15, 2019 at 10:39 AM Elek, Marton  wrote:
>>
>>> My key was pushed to the server with pgp about 1 year ago, and it worked
>>> well with the last Ratis release. So it should be synced between the key
>>> servers.
>>>
>>> But it seems that the INFRA solved the problem with shuffling the key
>>> server order (or it was an intermittent issue): see INFRA-17649
>>>
>>> Seems to be working now...
>>>
>>> Marton
>>>
>>>
>>> On 1/15/19 5:19 AM, Wangda Tan wrote:
>>> > Hi Brian,
>>> > Thanks for responding, could you share how to push keys to the Apache pgp
>>> pool?
>>> >
>>> > Best,
>>> > Wangda
>>> >
>>> > On Mon, Jan 14, 2019 at 10:44 AM Brian Fox  wrote:
>>> >
>>> >> Did you push your key up to the pgp pool? That's what Nexus is
>>> validating
>>> >> against. It might take time to propagate if you just pushed it.
>>> >>
>>> >> On Mon, Jan 14, 2019 at 9:59 AM Elek, Marton  wrote:
>>> >>
>>> >>> Seems to be an INFRA issue for me:
>>> >>>
>>> >>> 1. I downloaded a sample jar file [1] + the signature from the
>>> >>> repository and it was ok, locally I verified it.
>>> >>>
>>> >>> 2. I tested it with an other Apache project (Ratis) and my key. I got
>>> >>> the same problem even if it worked at last year during the 0.3.0
>>> >>> release. (I used exactly the same command)
>>> >>>
>>> >>> I opened an infra ticket to check the logs of the Nexus as it was
>>> >>> suggested in the error message:
>>> >>>
>>> >>> https://issues.apache.org/jira/browse/INFRA-17649
>>> >>>
>>> >>> Marton
>>> >>>
>>> >>>
>>> >>> [1]:
>>> >>>
>>> >>>
>>> https://repository.apache.org/service/local/repositories/orgapachehadoop-1183/content/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-javadoc.jar
>>> >>>
>>> >>>
>>> >>> On 1/13/19 6:27 AM, Wangda Tan wrote:
>>>  Uploaded sample file and signature.
>>> 
>>> 
>>> 
>>>  On Sat, Jan 12, 2019 at 9:18 PM Wangda Tan >>  > wrote:
>>> 
>>>  Actually, among the hundreds of failed messages, the "No public
>>> key"
>>>  issues still occurred several times:
>>> 
>>>  failureMessage  No public key: Key with id:
>>> (b3fa653d57300d45)
>>>  was not able to be located on http://gpg-keyserver.de/.
>>> Upload
>>>  your public key and try the operation again.
>>>  failureMessage  No public key: Key with id:
>>> (b3fa653d57300d45)
>>>  was not able to be located on
>>>  http://pool.sks-keyservers.net:11371. Upload your public
>>> key
>>> >>> and
>>>  try the operation again.
>>>  failureMessage  No public key: Key with id:
>>> (b3fa653d57300d45)
>>>  was not able to be located on http://pgp.mit.edu:11371.
>>> Upload
>>>  your public key and try the operation again.
>>> 
>>>  Once the close operation returned, I will upload sample files
>>> which
>>>  may help troubleshoot the issue.
>>> 
>>>  Thanks,
>>> 
>>>  On Sat, Jan 12, 2019 at 9:04 PM Wangda Tan >>  > wrote:
>>> 
>>>  Thanks David for the quick response!
>>> 
>>>  I just retried, now the "No public key" issue is gone.
>>> However,
>>>  the issue:
>>> 
>>>  failureMessage  Failed to validate the pgp signature of
>>> 
>>> >>>
>>> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar',
>>>  check the logs.
>>>  failureMessage  Failed to validate the pgp signature of
>>> 
>>> >>>
>>> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-test-sources.jar',
>>>  check the logs.
>>>  failureMessage  Failed to validate the pgp signature of
>>> 
>>> >>>
>>> 

[jira] [Created] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0

2019-01-17 Thread Kai Xie (JIRA)
Kai Xie created HADOOP-16049:


 Summary: DistCp result has data and checksum mismatch when blocks 
per chunk > 0
 Key: HADOOP-16049
 URL: https://issues.apache.org/jira/browse/HADOOP-16049
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 2.9.2
Reporter: Kai Xie


In 2.9.2 RetriableFileCopyCommand.copyBytes,

 
{code:java}
int bytesRead = readBytes(inStream, buf, sourceOffset);
while (bytesRead >= 0) {
  ...
  if (action == FileAction.APPEND) {
sourceOffset += bytesRead;
  }
  ... // write to dst
  bytesRead = readBytes(inStream, buf, sourceOffset);
}{code}
it does a positioned read, but the position (`sourceOffset` here) is never
updated when blocks per chunk is set to > 0 (which always disables the append
action). So for a chunk with offset != 0, it keeps copying the first few
bytes again and again, causing the result to have a data & checksum mismatch.

HADOOP-15292 resolved this issue by not using the positioned read, but it has
not been backported to branch-2 yet.
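The effect of the missing offset update is easy to reproduce in a self-contained sketch (a simulated positioned read over a byte array, not the actual DistCp code): advancing `sourceOffset` on every iteration, rather than only for APPEND, yields the correct chunk contents.

```java
public class ChunkCopySketch {
    // Simulated positioned read: fills buf from data starting at offset and
    // returns the number of bytes read, or -1 at EOF (mirrors readBytes above).
    static int readBytes(byte[] data, byte[] buf, long offset) {
        if (offset >= data.length) {
            return -1;
        }
        int n = Math.min(buf.length, data.length - (int) offset);
        System.arraycopy(data, (int) offset, buf, 0, n);
        return n;
    }

    // Copies the chunk of src starting at chunkOffset, advancing the offset
    // unconditionally -- the update the buggy loop skips for non-APPEND copies.
    static String copyChunk(byte[] src, long chunkOffset, int bufSize) {
        byte[] buf = new byte[bufSize];
        StringBuilder dst = new StringBuilder();
        long sourceOffset = chunkOffset;
        int bytesRead = readBytes(src, buf, sourceOffset);
        while (bytesRead >= 0) {
            dst.append(new String(buf, 0, bytesRead));
            sourceOffset += bytesRead; // without this, the same first bytes are re-read
            bytesRead = readBytes(src, buf, sourceOffset);
        }
        return dst.toString();
    }

    public static void main(String[] args) {
        // A chunk starting at offset 2 of "abcdefghij" is "cdefghij".
        System.out.println(copyChunk("abcdefghij".getBytes(), 2, 4));
    }
}
```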

 

 






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/

[Jan 16, 2019 5:25:37 AM] (vrushali) YARN-9150 Making TimelineSchemaCreator 
support different backends for
[Jan 16, 2019 5:28:10 AM] (vrushali) YARN-9150 Making TimelineSchemaCreator 
support different backends for
[Jan 16, 2019 5:36:36 AM] (aajisaka) YARN-8747. [UI2] YARN UI2 page loading 
failed due to js error under some
[Jan 16, 2019 6:14:22 PM] (inigoiri) HDFS-14192. Track missing DFS operations 
in Statistics and
[Jan 17, 2019 1:33:10 AM] (bharat) HDDS-898. Continue token should contain the 
previous dir in Ozone s3g
[Jan 17, 2019 1:43:30 AM] (bharat) HDDS-971. ContainerDataConstructor throws 
exception on QUASI_CLOSED and




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.ha.TestConsistentReadsObserver 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [168K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1019/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [332K]