[jira] [Created] (HADOOP-11486) org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testHttpUGI fails.

2015-01-16 Thread Spandan Dutta (JIRA)
Spandan Dutta created HADOOP-11486:
--

 Summary: 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testHttpUGI
 fails.
 Key: HADOOP-11486
 URL: https://issues.apache.org/jira/browse/HADOOP-11486
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Spandan Dutta


The Jenkins job that was set up gave the following stack trace:

java.net.BindException: Address already in use
        at java.net.PlainSocketImpl.socketBind(Native Method)
        at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:376)
        at java.net.ServerSocket.bind(ServerSocket.java:376)
        at java.net.ServerSocket.<init>(ServerSocket.java:237)
        at org.mortbay.jetty.bio.SocketConnector.newServerSocket(SocketConnector.java:80)
        at org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
        at org.mortbay.jetty.AbstractConnector.doStart(AbstractConnector.java:283)
        at org.mortbay.jetty.bio.SocketConnector.doStart(SocketConnector.java:147)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.Server.doStart(Server.java:235)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testHttpUGI(TestWebDelegationToken.java:934)
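
A common fix for this class of failure on shared Jenkins hosts is to bind the 
test server to port 0 so the OS assigns a free ephemeral port. A minimal 
sketch, assuming the Jetty 6 (org.mortbay) API that appears in the trace:

import org.mortbay.jetty.Server;
import org.mortbay.jetty.bio.SocketConnector;

public class EphemeralPortServer {
  public static void main(String[] args) throws Exception {
    Server server = new Server();
    SocketConnector connector = new SocketConnector();
    connector.setPort(0);                 // 0 = let the OS pick a free port
    server.addConnector(connector);
    server.start();
    int port = connector.getLocalPort();  // the port actually bound
    System.out.println("test server bound to port " + port);
    server.stop();
  }
}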



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: InterfaceStability and InterfaceAudience stability

2015-01-16 Thread Karthik Kambatla
Changing from Private to Contract might not be an issue. Changing from Public
to Contract might be incompatible behavior, irrespective of whether
InterfaceAudience is Evolving or Stable.

On Sat, Jan 17, 2015 at 12:01 AM, Allen Wittenauer  wrote:

>
> It may work fine from a code perspective, but from a semantic
> and/or human perspective, I think it’d be confusing and could lead to
> problems down the road.  Let’s say we add Contract after making this
> Stable.  If we change any Privates to Contracts as a result, is that a
> break?
>
>
> On Jan 15, 2015, at 11:27 AM, Chris Nauroth 
> wrote:
>
> > Would it really be backwards-incompatible if we added new levels later?
> > That seems counter-intuitive and contrary to this piece of documentation:
> >
> > http://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html#jls-13.5.7
> >
> > Quoting:
> >
> > Annotation types behave exactly like any other interface. Adding or
> > removing an element from an annotation type is analogous to adding or
> > removing a method. There are important considerations governing other
> > changes to annotation types, but these have no effect on the linkage of
> > binaries by the Java Virtual Machine. Rather, such changes affect the
> > behavior of reflective APIs that manipulate annotations. The documentation
> > of these APIs specifies their behavior when various changes are made to the
> > underlying annotation types.
> >
> > Adding or removing annotations has no effect on the correct linkage of the
> > binary representations of programs in the Java programming language.
> >
> > Certainly removing existing levels would be backwards-incompatible.
> >
> > Chris Nauroth
> > Hortonworks
> > http://hortonworks.com/
> >
> >
> > On Thu, Jan 15, 2015 at 6:14 AM, Allen Wittenauer 
> wrote:
> >
> >>
> >>IIRC, it was marked as evolving because it wasn’t clear at the
> >> time whether we would need to add more stability levels. (One of the key
> >> inspirations for the stability levels—Sun’s ARC process—had more.)
> >>
> >>So I think it’s important to remember that if this gets changed to
> >> stable, that effectively means new levels can’t really get added...
> >>
> >> On Jan 13, 2015, at 2:34 PM, Robert Kanter 
> wrote:
> >>
> >>> +1
> >>>
> >>> Though it is kinda funny that the InterfaceStability annotation was
> >>> marked as Evolving :)
> >>> @InterfaceStability.Evolving
> >>> public class InterfaceStability {...}
> >>>
> >>>
> >>> On Tue, Jan 13, 2015 at 2:21 PM, Ted Yu  wrote:
> >>>
>  +1
> 
>  On Tue, Jan 13, 2015 at 1:47 PM, Abraham Elmahrek 
>  wrote:
> 
> > Hey guys,
> >
> > I've noticed the InterfaceStability (
> > https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java
> > )
> > and InterfaceAudience (
> > https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceAudience.java
> > )
> > classes are marked as "Evolving". These really haven't changed much in
> > the last few years, so I was wondering if it is reasonable to mark them
> > as stable?
> >
> > -Abe
> >
> 
> >>
> >>
> >
>
>


-- 
Karthik Kambatla
Software Engineer, Cloudera Inc.

http://five.sentenc.es
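
For reference, a hypothetical sketch of the change under discussion. "Contract"
is not a real Hadoop annotation; it is assumed here purely to illustrate that
adding a nested annotation type does not break binary linkage of code compiled
against the older class (JLS 13.5.7):

import java.lang.annotation.Documented;

@InterfaceStability.Evolving
public class InterfaceStability {
  @Documented public @interface Stable {}
  @Documented public @interface Evolving {}
  @Documented public @interface Unstable {}

  // Hypothetical new level added after the class is marked Stable. Old
  // binaries that reference Stable/Evolving/Unstable still link fine; the
  // open question above is whether the semantic contract breaks.
  @Documented public @interface Contract {}
}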


Re: InterfaceStability and InterfaceAudience stability

2015-01-16 Thread Allen Wittenauer

It may work fine from a code perspective, but from a semantic and/or 
human perspective, I think it’d be confusing and could lead to problems down 
the road.  Let’s say we add Contract after making this Stable.  If we change 
any Privates to Contracts as a result, is that a break?

  
On Jan 15, 2015, at 11:27 AM, Chris Nauroth  wrote:

> Would it really be backwards-incompatible if we added new levels later?
> That seems counter-intuitive and contrary to this piece of documentation:
> 
> http://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html#jls-13.5.7
> 
> Quoting:
> 
> Annotation types behave exactly like any other interface. Adding or
> removing an element from an annotation type is analogous to adding or
> removing a method. There are important considerations governing other
> changes to annotation types, but these have no effect on the linkage of
> binaries by the Java Virtual Machine. Rather, such changes affect the
> behavior of reflective APIs that manipulate annotations. The documentation
> of these APIs specifies their behavior when various changes are made to the
> underlying annotation types.
> 
> Adding or removing annotations has no effect on the correct linkage of the
> binary representations of programs in the Java programming language.
> 
> Certainly removing existing levels would be backwards-incompatible.
> 
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
> 
> 
> On Thu, Jan 15, 2015 at 6:14 AM, Allen Wittenauer  wrote:
> 
>> 
>>IIRC, it was marked as evolving because it wasn’t clear at the
>> time whether we would need to add more stability levels. (One of the key
>> inspirations for the stability levels—Sun’s ARC process—had more.)
>> 
>>So I think it’s important to remember that if this gets changed to
>> stable, that effectively means new levels can’t really get added...
>> 
>> On Jan 13, 2015, at 2:34 PM, Robert Kanter  wrote:
>> 
>>> +1
>>> 
>>> Though it is kinda funny that the InterfaceStability annotation was
>>> marked as Evolving :)
>>> @InterfaceStability.Evolving
>>> public class InterfaceStability {...}
>>> 
>>> 
>>> On Tue, Jan 13, 2015 at 2:21 PM, Ted Yu  wrote:
>>> 
 +1
 
 On Tue, Jan 13, 2015 at 1:47 PM, Abraham Elmahrek 
 wrote:
 
> Hey guys,
> 
> I've noticed the InterfaceStability (
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java
> )
> and InterfaceAudience (
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceAudience.java
> )
> classes are marked as "Evolving". These really haven't changed much in
> the last few years, so I was wondering if it is reasonable to mark them as
> stable?
> 
> -Abe
> 
 
>> 
>> 
> 



Re: InterfaceStability and InterfaceAudience stability

2015-01-16 Thread Abraham Elmahrek
Agreed. Anyone interested in reviewing/committing
https://issues.apache.org/jira/browse/HADOOP-11476?

On Thu, Jan 15, 2015 at 11:27 AM, Chris Nauroth 
wrote:

> Would it really be backwards-incompatible if we added new levels later?
> That seems counter-intuitive and contrary to this piece of documentation:
>
> http://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html#jls-13.5.7
>
> Quoting:
>
> Annotation types behave exactly like any other interface. Adding or
> removing an element from an annotation type is analogous to adding or
> removing a method. There are important considerations governing other
> changes to annotation types, but these have no effect on the linkage of
> binaries by the Java Virtual Machine. Rather, such changes affect the
> behavior of reflective APIs that manipulate annotations. The documentation
> of these APIs specifies their behavior when various changes are made to the
> underlying annotation types.
>
> Adding or removing annotations has no effect on the correct linkage of the
> binary representations of programs in the Java programming language.
>
> Certainly removing existing levels would be backwards-incompatible.
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
> On Thu, Jan 15, 2015 at 6:14 AM, Allen Wittenauer 
> wrote:
>
> >
> > IIRC, it was marked as evolving because it wasn’t clear at the
> > time whether we would need to add more stability levels. (One of the key
> > inspirations for the stability levels—Sun’s ARC process—had more.)
> >
> > So I think it’s important to remember that if this gets changed to
> > stable, that effectively means new levels can’t really get added...
> >
> > On Jan 13, 2015, at 2:34 PM, Robert Kanter  wrote:
> >
> > > +1
> > >
> > > Though it is kinda funny that the InterfaceStability annotation was
> > > marked as Evolving :)
> > > @InterfaceStability.Evolving
> > > public class InterfaceStability {...}
> > >
> > >
> > > On Tue, Jan 13, 2015 at 2:21 PM, Ted Yu  wrote:
> > >
> > >> +1
> > >>
> > >> On Tue, Jan 13, 2015 at 1:47 PM, Abraham Elmahrek 
> > >> wrote:
> > >>
> > >>> Hey guys,
> > >>>
> > >>> I've noticed the InterfaceStability (
> > >>> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java
> > >>> )
> > >>> and InterfaceAudience (
> > >>> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceAudience.java
> > >>> )
> > >>> classes are marked as "Evolving". These really haven't changed much in
> > >>> the last few years, so I was wondering if it is reasonable to mark them
> > >>> as stable?
> > >>>
> > >>> -Abe
> > >>>
> > >>
> >
> >
>
>


[jira] [Resolved] (HADOOP-10037) s3n read truncated, but doesn't throw exception

2015-01-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-10037.
-
   Resolution: Cannot Reproduce
Fix Version/s: 2.6.0

Closing as Cannot Reproduce, as it appears to have gone away for you.

# Hadoop 2.6 uses a much later version of jets3t.
# Hadoop 2.6 also offers a (compatible) s3a filesystem which uses the AWS SDK 
instead.

If you do see this problem, try using s3a to see if it occurs there.
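
A minimal sketch of that check, assuming the standard Hadoop 2.6 s3a 
configuration keys (bucket and object names below are placeholders):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3ASmokeTest {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Some 2.6 deployments need the implementation class set explicitly.
    conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
    conf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY");
    conf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY");
    // Read the same object through s3a:// instead of s3n://.
    FileSystem fs = FileSystem.get(URI.create("s3a://your-bucket/"), conf);
    long len = fs.getFileStatus(new Path("s3a://your-bucket/your/object")).getLen();
    System.out.println("reported length: " + len);
  }
}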

> s3n read truncated, but doesn't throw exception 
> 
>
> Key: HADOOP-10037
> URL: https://issues.apache.org/jira/browse/HADOOP-10037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.0.0-alpha
> Environment: Ubuntu Linux 13.04 running on Amazon EC2 (cc2.8xlarge)
>Reporter: David Rosenstrauch
> Fix For: 2.6.0
>
> Attachments: S3ReadFailedOnTruncation.html, S3ReadSucceeded.html
>
>
> For months now we've been finding that we've been experiencing frequent data 
> truncation issues when reading from S3 using the s3n:// protocol.  I finally 
> was able to gather some debugging output on the issue in a job I ran last 
> night, and so can finally file a bug report.
> The job I ran last night was on a 16-node cluster (all of them AWS EC2 
> cc2.8xlarge machines, running Ubuntu 13.04 and Cloudera CDH4.3.0).  The job 
> was a Hadoop streaming job, which reads through a large number (i.e., 
> ~55,000) of files on S3, each of them approximately 300K bytes in size.
> All of the files contain 46 columns of data in each record.  But I added in 
> an extra check in my mapper code to count and verify the number of columns in 
> every record - throwing an error and crashing the map task if the column 
> count is wrong.
> If you look in the attached task logs, you'll see 2 attempts on the same 
> task.  The first one fails due to data truncated (i.e., my job intentionally 
> fails the map task due to the current record failing the column count check). 
>  The task then gets retried on a different machine and runs to a successful 
> completion.
> You can see further evidence of the truncation further down in the task logs, 
> where it displays the count of the records read:  the failed task says 32953 
> records read, while the successful task says 63133.
> Any idea what the problem might be here and/or how to work around it?  This 
> issue is a very common occurrence on our clusters.  E.g., in the job I ran 
> last night before I had gone to bed I had already encountered 8 such 
> failures, and the job was only 10% complete.  (~25,000 out of ~250,000 tasks.)
> I realize that it's common for I/O errors to occur - possibly even frequently 
> - in a large Hadoop job.  But I would think that if an I/O failure (like a 
> truncated read) did occur, that something in the underlying infrastructure 
> code (i.e., either in NativeS3FileSystem or in jets3t) should detect the 
> error and throw an IOException accordingly.  It shouldn't be up to the 
> calling code to detect such failures, IMO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-9577) Actual data loss using s3n (against US Standard region)

2015-01-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9577.

Resolution: Won't Fix

I'm going to close this as something we don't currently plan to fix in the 
Hadoop core codebase, given that Netflix S3mper and EMR itself both offer a 
solution, namely using Amazon DynamoDB as a consistent metadata store.

The other way to get guaranteed create consistency is "don't use US East", 
which has no consistency guarantees at all, whereas every other region offers 
create consistency, but not update or delete consistency.

> Actual data loss using s3n (against US Standard region)
> ---
>
> Key: HADOOP-9577
> URL: https://issues.apache.org/jira/browse/HADOOP-9577
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 1.0.3
>Reporter: Joshua Caplan
>Priority: Critical
>
>  The implementation of needsTaskCommit() assumes that the FileSystem used for 
> writing temporary outputs is consistent.  That happens not to be the case 
> when using the S3 native filesystem in the US Standard region.  It is 
> actually quite common in larger jobs for the exists() call to return false 
> even if the task attempt wrote output minutes earlier, which essentially 
> cancels the commit operation with no error.  That's real life data loss right 
> there, folks.
> The saddest part is that the Hadoop APIs do not seem to provide any 
> legitimate means for the various RecordWriters to communicate with the 
> OutputCommitter.  In my projects I have created a static map of semaphores 
> keyed by TaskAttemptID, which all my custom RecordWriters have to be aware 
> of.  That's pretty lame.
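
A rough sketch of the kind of workaround the reporter describes; every name 
here is an assumption for illustration, not a Hadoop API:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Semaphore;
import org.apache.hadoop.mapreduce.TaskAttemptID;

// Static gate shared by RecordWriters and the OutputCommitter so the commit
// decision does not depend on an eventually-consistent exists() call.
public final class CommitGate {
  private static final ConcurrentMap<TaskAttemptID, Semaphore> GATES =
      new ConcurrentHashMap<TaskAttemptID, Semaphore>();

  private static Semaphore gateFor(TaskAttemptID id) {
    Semaphore created = new Semaphore(0);
    Semaphore existing = GATES.putIfAbsent(id, created);
    return existing != null ? existing : created;
  }

  // A RecordWriter calls this once its output is fully written.
  public static void markWritten(TaskAttemptID id) {
    gateFor(id).release();
  }

  // needsTaskCommit() can consult this instead of FileSystem.exists().
  public static boolean wasWritten(TaskAttemptID id) {
    return gateFor(id).availablePermits() > 0;
  }
}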



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-4436) S3 object names with arbitrary slashes confuse NativeS3FileSystem

2015-01-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-4436.

   Resolution: Won't Fix
Fix Version/s: 2.7.0

Closing as WONTFIX... the s3n, s3a and openstack clients all assume that "/" 
in a path represents a directory delimiter, an assumption that goes pretty 
deep. Sorry.

> S3 object names with arbitrary slashes confuse NativeS3FileSystem
> -
>
> Key: HADOOP-4436
> URL: https://issues.apache.org/jira/browse/HADOOP-4436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.18.1
>Reporter: David Phillips
> Fix For: 2.7.0
>
>
> Consider a bucket with the following object names:
> * /
> * /foo
> * foo//bar
> NativeS3FileSystem treats an object named "/" as a directory.  Doing an "fs 
> -lsr" causes an infinite loop.
> I suggest we change NativeS3FileSystem to handle these by ignoring any such 
> "invalid" names.  Thoughts?
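
A sketch of the filtering that suggestion implies; illustrative only, not the 
NativeS3FileSystem code:

// Skip S3 keys that cannot map to a Hadoop path: a bare "/", a leading
// slash, or an empty segment as in "foo//bar". A single trailing slash is
// tolerated because s3n uses "dir/" marker objects for directories.
static boolean isValidKey(String key) {
  if (key.isEmpty() || key.startsWith("/")) {
    return false;
  }
  String body = key.endsWith("/") ? key.substring(0, key.length() - 1) : key;
  for (String segment : body.split("/", -1)) {
    if (segment.isEmpty()) {
      return false;  // catches "foo//bar" and "a//"
    }
  }
  return true;
}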



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11261) Set custom endpoint for S3A

2015-01-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11261.
-
   Resolution: Fixed
Fix Version/s: 2.7.0

> Set custom endpoint for S3A
> ---
>
> Key: HADOOP-11261
> URL: https://issues.apache.org/jira/browse/HADOOP-11261
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
>  Labels: amazon, s3
> Fix For: 2.7.0
>
> Attachments: HADOOP-11261-2.patch, HADOOP-11261-3.patch, 
> JIRA-11261.patch
>
>
> Use a config setting to allow customizing the AWS region used.
> It also enables using a custom URL pointing to an S3-compatible object store.
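
A minimal sketch of using the new setting (the private-store URL is an assumed 
example):

import org.apache.hadoop.conf.Configuration;

public class S3AEndpointExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // fs.s3a.endpoint accepts an AWS region endpoint...
    conf.set("fs.s3a.endpoint", "s3-eu-west-1.amazonaws.com");
    // ...or the URL of an S3-compatible object store (assumed example):
    // conf.set("fs.s3a.endpoint", "https://objectstore.example.com");
  }
}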



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-trunk #1377

2015-01-16 Thread Apache Jenkins Server
See 

Changes:

[kasha] YARN-2217. [YARN-1492] Shared cache client side changes. (Chris Trezzo 
via kasha)

[aajisaka] HADOOP-11483. HardLink.java should use the jdk7 createLink method

[aajisaka] YARN-3005. [JDK7] Use switch statement for String instead of if-else 
statement in RegistrySecurity.java (Contributed by Kengo Seki)

[aw] HDFS-7581. HDFS documentation needs updating post-shell rewrite (aw)

[aajisaka] HADOOP-11318. Update the document for hadoop fs -stat

[arp] HDFS-7591. hdfs classpath command should support same options as hadoop 
classpath. (Contributed by Varun Saxena)

[jianhe] YARN-2861. Fixed Timeline DT secret manager to not reuse RM's configs. 
Contributed by Zhijie Shen

[rkanter] HADOOP-8757. Metrics should disallow names with invalid characters 
(rchiang via rkanter)

[kihwal] HDFS-7615. Remove longReadLock. Contributed by Kihwal Lee.

[kihwal] HDFS-7457. DatanodeID generates excessive garbage. Contributed by 
Daryn Sharp.

[wheat9] HADOOP-11350. The size of header buffer of HttpServer is too small 
when HTTPS is enabled. Contributed by Benoy Antony.

[yliu] HDFS-7189. Add trace spans for DFSClient metadata operations. (Colin P. 
McCabe via yliu)

[junping_du] YARN-3064. 
TestRMRestart/TestContainerResourceUsage/TestNodeManagerResync failure with 
allocation timeout. (Contributed by Jian He)

--
[...truncated 4753 lines...]
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.8 sec - in 
org.apache.hadoop.util.TestIndexedSort
Running org.apache.hadoop.util.bloom.TestBloomFilters
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.475 sec - in 
org.apache.hadoop.util.bloom.TestBloomFilters
Running org.apache.hadoop.util.TestStopWatch
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec - in 
org.apache.hadoop.util.TestStopWatch
Running org.apache.hadoop.util.TestMachineList
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.164 sec - 
in org.apache.hadoop.util.TestMachineList
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.181 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Running org.apache.hadoop.util.TestGenericsUtil
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.344 sec - in 
org.apache.hadoop.util.TestGenericsUtil
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.604 sec - in 
org.apache.hadoop.util.TestPureJavaCrc32
Running org.apache.hadoop.util.TestStringUtils
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.319 sec - in 
org.apache.hadoop.util.TestStringUtils
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.26 sec - in 
org.apache.hadoop.util.TestProtoUtil
Running org.apache.hadoop.util.TestSignalLogger
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec - in 
org.apache.hadoop.util.TestSignalLogger
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.173 sec - in 
org.apache.hadoop.util.TestDiskChecker
Running org.apache.hadoop.util.TestShutdownThreadsHelper
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.179 sec - in 
org.apache.hadoop.util.TestShutdownThreadsHelper
Running org.apache.hadoop.util.TestCacheableIPList
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.283 sec - in 
org.apache.hadoop.util.TestCacheableIPList
Running org.apache.hadoop.util.TestLineReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.2 sec - in 
org.apache.hadoop.util.TestLineReader
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Running org.apache.hadoop.util.TestClasspath
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.339 sec - in 
org.apache.hadoop.util.TestClasspath
Running org.apache.hadoop.util.TestApplicationClassLoader
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.242 sec - in 
org.apache.hadoop.util.TestApplicationClassLoader
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.241 sec - in 
org.apache.hadoop.util.TestShell
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.169 sec - in 
org.apache.hadoop.util.TestShutdownHookManager
Running org.apache.hadoop.util.TestHttpExceptionUtils
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed