[jira] [Created] (HADOOP-11402) Negative user-to-group cache entries are never cleared for never-again-accessed users

2014-12-12 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11402:
-

 Summary: Negative user-to-group cache entries are never cleared 
for never-again-accessed users
 Key: HADOOP-11402
 URL: https://issues.apache.org/jira/browse/HADOOP-11402
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe


Negative user-to-group cache entries are never cleared for never-again-accessed 
users.  We should have a background thread that runs very infrequently and 
removes these expired entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Solaris Port

2014-12-12 Thread Colin McCabe
Just use snprintf to copy the error message from strerror_r into a
thread-local buffer of 64 bytes or so.  Then preserve the existing
terror() interface.

Can you open a jira for this?

best,
Colin
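
A minimal sketch of what Colin describes might look like the following. Assumptions: terror()'s exact signature in the Hadoop native code may differ, the 64-byte buffer size just follows the suggestion above, and the POSIX (int-returning) variant of strerror_r() is assumed.

```c
/* Sketch only: terror()'s real signature in libhadoop may differ, and
 * this assumes the POSIX (int-returning) variant of strerror_r(). */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <string.h>

#define TERROR_BUF_LEN 64

const char *terror(int code)
{
    /* One buffer per thread, so the returned pointer stays valid for
     * the calling thread without racing concurrent callers. */
    static _Thread_local char buf[TERROR_BUF_LEN];

    if (strerror_r(code, buf, sizeof(buf)) != 0) {
        /* Fall back to a generic message for unrecognized codes. */
        snprintf(buf, sizeof(buf), "Unknown error %d", code);
    }
    return buf;
}
```

Callers keep using terror(errnum) exactly as before; only the implementation changes from indexing sys_errlist to filling a per-thread buffer.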

On Thu, Dec 11, 2014 at 8:35 PM, malcolm  wrote:
> So, turns out that if I had naively changed all calls to terror or
> references to sys_errlist, to using strerror_r, then I would have broken
> code for Windows and HPUX (and possibly other OSes).
>
> If we are to assume that current code runs fine on all platforms (maybe even
> AIX and MacOS, for example), then any changes/additions made to the code and
> not ifdeffed appropriately can break on other OSes. On the other hand, too
> many ifdefs can pollute the source code and render it less readable (though
> that is possibly less important).
>
> In the general case, what are code contributors' responsibilities when
> adding code for OSes besides Linux ?
> What OSes does jenkins test on ?
> I guess maintainers of code on non-tested platforms are responsible for
> their own testing ?
>
> How do we avoid the ping-pong effect, i.e. I make a generic change to code
> which breaks on Windows, then the Windows maintainer reverts changes to
> break on Solaris for example ? Or does this not happen in actuality ?
>
>
> On 12/11/2014 11:25 PM, Asokan, M wrote:
>>
>> Hi Malcom,
>>The Windows versions of strerror() and strerror_s() functions are
>> probably meant for ANSI C library functions that set errno.  For core
>> Windows API calls (like UNIX system calls), one gets the error number by
>> calling GetLastError() function.  In the code snippet I sent earlier, the
>> "code" argument is the value returned by GetLastError().  Neither strerror()
>> nor strerror_s() will give the correct error message for this error code.
>>
>> You could probably look at libwinutils.c in Hadoop source.  It uses
>> FormatMessageW (which returns messages in Unicode.)  My requirement was to
>> return messages in current system locale.
>>
>> -- Asokan
>> 
>> From: malcolm [malcolm.kaval...@oracle.com]
>> Sent: Thursday, December 11, 2014 4:04 PM
>> To: common-dev@hadoop.apache.org
>> Subject: Re: Solaris Port
>>
>> Hi Asokan,
>>
>> I googled and found that Windows has strerror(), and strerror_s() (which
>> is the strerror_r() equivalent).
>> Is there a reason why you didn't use these calls ?
>>
>> On 12/11/2014 06:27 PM, Asokan, M wrote:
>>>
>>> Hi Malcom,
>>>  Recently, I had to work on a function to get system error message on
>>> various systems.  Here is the piece of code I came up with.  Hope it helps.
>>>
>>> static void get_system_error_message(char * buf, int buf_len, int code)
>>> {
>>> #if defined(_WIN32)
>>>     LPVOID lpMsgBuf;
>>>     DWORD status = FormatMessage(FORMAT_MESSAGE_ALLOCATE_BUFFER |
>>>                                  FORMAT_MESSAGE_FROM_SYSTEM |
>>>                                  FORMAT_MESSAGE_IGNORE_INSERTS,
>>>                                  NULL, code,
>>>                                  MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
>>>                                  /* Default language */
>>>                                  (LPTSTR) &lpMsgBuf, 0, NULL);
>>>     if (status > 0)
>>>     {
>>>         strncpy(buf, (char *)lpMsgBuf, buf_len-1);
>>>         buf[buf_len-1] = '\0';
>>>         /* Free the buffer returned by system */
>>>         LocalFree(lpMsgBuf);
>>>     }
>>>     else
>>>     {
>>>         _snprintf(buf, buf_len-1, "%s %d",
>>>                   "Can't get system error message for code", code);
>>>         buf[buf_len-1] = '\0';
>>>     }
>>> #else
>>> #if defined(_HPUX_SOURCE)
>>>     {
>>>         char * msg;
>>>         errno = 0;
>>>         msg = strerror(code);
>>>         if (errno == 0)
>>>         {
>>>             strncpy(buf, msg, buf_len-1);
>>>             buf[buf_len-1] = '\0';
>>>         }
>>>         else
>>>         {
>>>             snprintf(buf, buf_len, "%s %d",
>>>                      "Can't get system error message for code", code);
>>>         }
>>>     }
>>> #else
>>>     if (strerror_r(code, buf, buf_len) != 0)
>>>     {
>>>         snprintf(buf, buf_len, "%s %d",
>>>                  "Can't get system error message for code", code);
>>>     }
>>> #endif
>>> #endif
>>> }
>>>
>>> Note that HPUX does not have strerror_r() since strerror() itself is
>>> thread-safe.  Also Windows does not have snprintf().  The equivalent
>>> function _snprintf() has a subtle difference in its interface.
>>>
>>> -- Asokan
>>> 
>>> From: malcolm [malcolm.kaval...@oracle.com]
>>> Sent: Thursday, December 11, 2014 11:02 AM
>>> To: common-dev@hadoop.apache.org
>>> Subject: Re: Solaris Port
>>>
>>> Fine with me, I volunteer to do this, if accepted.
>>>
>>> On 12/11/2014 05:48 PM, Allen Wittenauer wrote:

 sys_errlist was removed for a reason.  Creating a fake sys_errlist
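
Asokan's note about _snprintf() deserves emphasis: C99 snprintf() always NUL-terminates and returns the untruncated length, which _snprintf() on Windows does not guarantee; hence the manual buf[buf_len-1] = '\0' in the Windows branch of his snippet. A tiny sketch of the C99 side of that contract (the helper name is made up for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Illustrative helper (name made up): format the fallback message used
 * in the snippet above.  C99 snprintf() always NUL-terminates buf, even
 * when it truncates, and returns the length the full string would have
 * had.  Windows _snprintf() instead returns a negative value on
 * truncation and may leave buf unterminated, which is why the Windows
 * branch above terminates buf by hand. */
int fallback_message(char *buf, int buf_len, int code)
{
    return snprintf(buf, buf_len,
                    "Can't get system error message for code %d", code);
}
```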

Build failed in Jenkins: Hadoop-common-trunk-Java8 #41

2014-12-12 Thread Apache Jenkins Server
See 

Changes:

[stevel]  YARN-2912 Jersey Tests failing with port in use. (varun saxena via 
stevel)

--
[...truncated 8786 lines...]
[WARNING] ^
[WARNING] 
:93:
 warning: no @param for xdr
[WARNING] public NFS3Response commit(XDR xdr, RpcInfo info);
[WARNING] ^
[WARNING] 
:93:
 warning: no @param for info
[WARNING] public NFS3Response commit(XDR xdr, RpcInfo info);
[WARNING] ^
[WARNING] 
:93:
 warning: no @return
[WARNING] public NFS3Response commit(XDR xdr, RpcInfo info);
[WARNING] ^
[WARNING] 
:126:
 warning: no @return
[WARNING] public String getProgram() {
[WARNING] ^
[WARNING] 
:131:
 warning: no @param for clientId
[WARNING] public void callCompleted(InetAddress clientId, int xid, RpcResponse 
response) {
[WARNING] ^
[WARNING] 
:131:
 warning: no @param for xid
[WARNING] public void callCompleted(InetAddress clientId, int xid, RpcResponse 
response) {
[WARNING] ^
[WARNING] 
:131:
 warning: no @param for response
[WARNING] public void callCompleted(InetAddress clientId, int xid, RpcResponse 
response) {
[WARNING] ^
[WARNING] 
:144:
 warning: no @param for clientId
[WARNING] public CacheEntry checkOrAddToCache(InetAddress clientId, int xid) {
[WARNING] ^
[WARNING] 
:144:
 warning: no @param for xid
[WARNING] public CacheEntry checkOrAddToCache(InetAddress clientId, int xid) {
[WARNING] ^
[WARNING] 
:144:
 warning: no @return
[WARNING] public CacheEntry checkOrAddToCache(InetAddress clientId, int xid) {
[WARNING] ^
[WARNING] 
:158:
 warning: no @return
[WARNING] public int size() {
[WARNING] ^
[WARNING] 
:91:
 warning: no @param for transport
[WARNING] public void register(int transport, int boundPort) {
[WARNING] ^
[WARNING] 
:91:
 warning: no @param for boundPort
[WARNING] public void register(int transport, int boundPort) {
[WARNING] ^
[WARNING] 
:108:
 warning: no @param for transport
[WARNING] public void unregister(int transport, int boundPort) {
[WARNING] ^
[WARNING] 
:108:
 warning: no @param for boundPort
[WARNING] public void unregister(int transport, int boundPort) {
[WARNING] ^
[WARNING] 
:125:
 warning: no @param for mapEntry
[WARNING] protected void register(PortmapMapping mapEntry, boolean set) {
[WARNING] ^
[WARNING] 
:125:
 warning: no @param for set
[WARNING] protected void register(PortmapMapping mapEntry, boolean set) {
[WARNING] ^
[WARNING] 


Re: Build failed in Jenkins: Hadoop-common-trunk-Java8 #40

2014-12-12 Thread Steve Loughran
On 12 December 2014 at 09:33, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> [INFO]
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Annotations . SUCCESS [
> 7.398 s]
> [INFO] Apache Hadoop MiniKDC . SUCCESS [
> 12.289 s]
> [INFO] Apache Hadoop Auth  SUCCESS [04:45
> min]
> [INFO] Apache Hadoop Auth Examples ... SUCCESS [
> 3.718 s]
> [INFO] Apache Hadoop Common .. SUCCESS [24:07
> min]
> [INFO] Apache Hadoop NFS . SUCCESS [
> 13.094 s]
> [INFO] Apache Hadoop KMS . SUCCESS [01:09
> min]
> [INFO] Apache Hadoop Common Project .. SUCCESS [
> 0.048 s]
> [INFO]
> 
> [INFO] BUILD SUCCESS
> [INFO]
> 
> [INFO] Total time: 30:40 min
> [INFO] Finished at: 2014-12-12T09:32:48+00:00
> [INFO] Final Memory: 95M/1097M
> [INFO]
> 
> Build step 'Execute shell' marked build as failure
> Archiving artifacts
> No prior successful build to compare, so performing full copy of artifacts
> Recording test results
>

This run was actually a success; it's just being reported as a failure.

I think it may be findbugs & checkstyle, so I'm turning this off.

key point: all Hadoop tests just passed on java 8.

Common trunk is jittering a bit with a test I'd like to pull, as it's just
some OS exec()s for debugging that are failing:
https://builds.apache.org/job/Hadoop-Common-trunk/1341/



[jira] [Created] (HADOOP-11401) Cannot find link to SCM on website

2014-12-12 Thread Sebb (JIRA)
Sebb created HADOOP-11401:
-

 Summary: Cannot find link to SCM on website
 Key: HADOOP-11401
 URL: https://issues.apache.org/jira/browse/HADOOP-11401
 Project: Hadoop Common
  Issue Type: Bug
  Components: site
Reporter: Sebb


There does not appear to be a link to the SCM on the website.
Nor does there appear to be a developer's page / getting involved page, where 
this might also appear.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


request for reviewer : patch HADOOP-10420

2014-12-12 Thread Gil Vernik
Hi All,

I would like to ask someone to review the patch 
https://issues.apache.org/jira/browse/HADOOP-10420.
This patch was submitted a long time ago and still isn't assigned to 
anyone.

Is there a committer who can review this patch and then merge it into 
trunk ? ( assuming all review issues are addressed )
This patch contains a small extension to 
https://issues.apache.org/jira/browse/HADOOP-8545 ( merged into 2.3.0 ).
This extension is mandatory for connecting Hadoop to a Swift object store 
that is configured with the v1.0 authentication model.

Thanks,
Gil Vernik.

[jira] [Created] (HADOOP-11400) GraphiteSink does not reconnect to Graphite after 'broken pipe'

2014-12-12 Thread Kamil Gorlo (JIRA)
Kamil Gorlo created HADOOP-11400:


 Summary: GraphiteSink does not reconnect to Graphite after 'broken 
pipe'
 Key: HADOOP-11400
 URL: https://issues.apache.org/jira/browse/HADOOP-11400
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.5.1
Reporter: Kamil Gorlo


I see that after a network error GraphiteSink does not reconnect to the 
Graphite server, and as a result metrics are not sent. 

Here is the stack trace I see (this is from the nodemanager):

2014-12-11 16:39:21,655 ERROR 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Got sink exception, retry 
in 4806ms
org.apache.hadoop.metrics2.MetricsException: Error flushing metrics
at 
org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:120)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:184)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
at 
org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:129)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at 
org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:118)
... 5 more
2014-12-11 16:39:26,463 ERROR 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Got sink exception and over 
retry limit, suppressing further error messages
org.apache.hadoop.metrics2.MetricsException: Error flushing metrics
at 
org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:120)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:184)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
at 
org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:129)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at 
org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:118)
... 5 more

GraphiteSinkFixed.java is simply GraphiteSink.java from Hadoop 2.6.0 (with 
the fix from https://issues.apache.org/jira/browse/HADOOP-11182 applied), 
because I cannot simply upgrade Hadoop (I am using CDH5).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #40

2014-12-12 Thread Apache Jenkins Server
See 

Changes:

[aw] HADOOP-11353. Add support for .hadooprc (aw)

[jianhe] YARN-2917. Fixed potential deadlock when system.exit is called in 
AsyncDispatcher. Contributed by Rohith Sharmaks

[wheat9] HDFS-7515. Fix new findbugs warnings in hadoop-hdfs. Contributed by 
Haohui Mai.

[gera] HADOOP-11211. mapreduce.job.classloader.system.classes semantics should 
be order-independent. (Yitong Zhou via gera)

[brandonli] HDFS-7449. Add metrics to NFS gateway. Contributed by Brandon Li

[wheat9] HADOOP-11389. Clean up byte to string encoding issues in 
hadoop-common. Contributed by Haohui Mai.

[wang] HDFS-7497. Inconsistent report of decommissioning DataNodes between 
dfsadmin and NameNode webui. Contributed by Yongjun Zhang.

[devaraj] MAPREDUCE-6046. Change the class name for logs in RMCommunicator.

[devaraj] YARN-2243. Order of arguments for Preconditions.checkNotNull() is 
wrong in

--
[...truncated 74935 lines...]
Setting project property: findbugs.version -> 3.0.0
Setting project property: maven-failsafe-plugin.version -> 2.17
Setting project property: tomcat.version -> 6.0.41
Setting project property: distMgmtStagingUrl -> 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: jackson2.version -> 2.2.3
Setting project property: test.build.data -> 

Setting project property: protobuf.version -> 2.5.0
Setting project property: distMgmtSnapshotsName -> Apache Development Snapshot 
Repository
Setting project property: maven.test.redirectTestOutputToFile -> true
Setting project property: protoc.path -> ${env.HADOOP_PROTOC_PATH}
Setting project property: distMgmtSnapshotsUrl -> 
https://repository.apache.org/content/repositories/snapshots
Setting project property: project.reporting.outputEncoding -> UTF-8
Setting project property: testsThreadCount -> 4
Setting project property: enforced.java.version -> [1.7,)
Setting project property: build.platform -> Linux-amd64-64
Setting project property: distMgmtStagingName -> Apache Release Distribution 
Repository
Setting project property: failIfNoTests -> false
Setting project property: test.exclude -> _
Setting project property: jersey.version -> 1.9
Setting project property: hadoop.common.build.dir -> 

Setting project property: java.security.egd -> file:///dev/urandom
Setting project property: javac.version -> 1.7
Setting project property: test.exclude.pattern -> _
Setting project property: test.build.dir -> 

Setting project property: zookeeper.version -> 3.4.6
Setting project property: maven-surefire-plugin.version -> 2.17
Setting project property: ant.file -> 

[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId -> org.apache.hadoop
Setting project property: project.artifactId -> hadoop-common-project
Setting project property: project.name -> Apache Hadoop Common Project
Setting project property: project.description -> Apache Hadoop Common Project
Setting project property: project.version -> 3.0.0-SNAPSHOT
Setting project property: project.packaging -> pom
Setting project property: project.build.directory -> 

Setting project property: project.build.outputDirectory -> 

Setting project property: project.build.testOutputDirectory -> 

Setting project property: project.build.sourceDirectory -> 

Setting project property: project.build.testSourceDirectory -> 

Setting project property: localRepository ->id: local
  url: file:///home/jenkins/.m2/repository/
   layout: default
snapshots: [enabled => true, update => always]
 releases: [enabled => true, update => always]
Setting project property: settings.localRepository -> 
/home/jenkins/.m2/repository
Setting project property: maven.project.dependencies.versions -> 
[INFO] Executing tasks
Build sequence for target(s) `main' is [main]
Complete build sequence is [main, ]

main:
[mkdir] Created dir: 

[mkdir] Skipping 


Build failed in Jenkins: Hadoop-Common-trunk #1341

2014-12-12 Thread Apache Jenkins Server
See 

Changes:

[aw] HADOOP-11353. Add support for .hadooprc (aw)

[jianhe] YARN-2917. Fixed potential deadlock when system.exit is called in 
AsyncDispatcher. Contributed by Rohith Sharmaks

[wheat9] HDFS-7515. Fix new findbugs warnings in hadoop-hdfs. Contributed by 
Haohui Mai.

[gera] HADOOP-11211. mapreduce.job.classloader.system.classes semantics should 
be order-independent. (Yitong Zhou via gera)

[brandonli] HDFS-7449. Add metrics to NFS gateway. Contributed by Brandon Li

[wheat9] HADOOP-11389. Clean up byte to string encoding issues in 
hadoop-common. Contributed by Haohui Mai.

[wang] HDFS-7497. Inconsistent report of decommissioning DataNodes between 
dfsadmin and NameNode webui. Contributed by Yongjun Zhang.

[devaraj] MAPREDUCE-6046. Change the class name for logs in RMCommunicator.

[devaraj] YARN-2243. Order of arguments for Preconditions.checkNotNull() is 
wrong in

--
[...truncated 4729 lines...]
Running org.apache.hadoop.metrics2.sink.TestFileSink
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.461 sec - in 
org.apache.hadoop.metrics2.sink.TestFileSink
Running org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.768 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Running org.apache.hadoop.metrics2.impl.TestMetricsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.282 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsConfig
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.238 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.432 sec - in 
org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Running org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.387 sec - in 
org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Running org.apache.hadoop.metrics2.impl.TestSinkQueue
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.491 sec - in 
org.apache.hadoop.metrics2.impl.TestSinkQueue
Running org.apache.hadoop.metrics2.impl.TestMetricsSourceAdapter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.444 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsSourceAdapter
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.386 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.471 sec - in 
org.apache.hadoop.metrics2.lib.TestMutableMetrics
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.386 sec - in 
org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.454 sec - in 
org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in 
org.apache.hadoop.metrics2.lib.TestUniqNames
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec - in 
org.apache.hadoop.metrics2.lib.TestInterns
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.536 sec - in 
org.apache.hadoop.metrics2.source.TestJvmMetrics
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec - in 
org.apache.hadoop.metrics2.filter.TestPatternFilter
Running org.apache.hadoop.conf.TestConfigurationSubclass
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.392 sec - in 
org.apache.hadoop.conf.TestConfigurationSubclass
Running org.apache.hadoop.conf.TestGetInstances
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.318 sec - in 
org.apache.hadoop.conf.TestGetInstances
Running org.apache.hadoop.conf.TestConfigurationDeprecation
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.515 sec - in 
org.apache.hadoop.conf.TestConfigurationDeprecation
Running org.apache.hadoop.conf.TestDeprecatedKeys
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.574 sec - in 
org.apache.hadoop.conf.TestDeprecatedKeys
Running org.apache.hadoop.conf.TestConfiguration
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.452 sec - in 
org.apache.hadoop.conf.TestConfiguration
Running org.apache.hadoop.conf.TestReconfiguration
Tes