[jira] [Created] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE

2013-01-14 Thread Sarah Weissman (JIRA)
Sarah Weissman created HADOOP-9211:
--

 Summary: HADOOP_CLIENT_OPTS default setting fixes max heap size at 
128m, disregards HADOOP_HEAPSIZE
 Key: HADOOP-9211
 URL: https://issues.apache.org/jira/browse/HADOOP-9211
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.2-alpha
Reporter: Sarah Weissman


hadoop-env.sh as included in the 2.0.2-alpha release tarball contains:
export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"

This overrides any heap settings in HADOOP_HEAPSIZE.
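
A sketch of one possible fix (untested; the guard is a suggestion, not a
committed change) is to apply the default only when no HADOOP_HEAPSIZE is
given, so the -Xmx flag derived from it is not clobbered:

# Apply the 128m client default only when the user has not requested a
# specific heap size via HADOOP_HEAPSIZE.
if [ -z "$HADOOP_HEAPSIZE" ]; then
  export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"
fi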

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9210) bad mirror in download list

2013-01-14 Thread Andy Isaacson (JIRA)
Andy Isaacson created HADOOP-9210:
-

 Summary: bad mirror in download list
 Key: HADOOP-9210
 URL: https://issues.apache.org/jira/browse/HADOOP-9210
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Andy Isaacson
Priority: Minor


The http://hadoop.apache.org/releases.html page links to 
http://www.apache.org/dyn/closer.cgi/hadoop/common/, which provides a list of 
mirrors.  The first one on the list (for me) is 
http://www.alliedquotes.com/mirrors/apache/hadoop/common/, which returns a 404.

I checked the rest of the mirrors in the list, and only alliedquotes returns a 404.
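
A quick way to reproduce the check (the exact command is illustrative):

$ curl -sI http://www.alliedquotes.com/mirrors/apache/hadoop/common/ | head -1
HTTP/1.1 404 Not Found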



[jira] [Created] (HADOOP-9209) Add shell command to dump file checksums

2013-01-14 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-9209:
---

 Summary: Add shell command to dump file checksums
 Key: HADOOP-9209
 URL: https://issues.apache.org/jira/browse/HADOOP-9209
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, tools
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon


Occasionally while working with tools like distcp, or while debugging certain 
issues, it's useful to be able to quickly see the checksum of a file. We 
currently have the APIs to efficiently calculate a checksum, but we don't 
expose them to users. This JIRA is to add an "fs -checksum" command which dumps 
the checksum information for the specified file(s).
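
As a rough illustration (not the patch itself), the command would be a thin
wrapper over the existing FileSystem#getFileChecksum API, along these lines:

// Illustrative sketch only; the real command would be wired into FsShell.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumDump {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    for (String arg : args) {
      Path p = new Path(arg);
      FileSystem fs = p.getFileSystem(conf);
      // May return null if the filesystem has no checksum support.
      FileChecksum sum = fs.getFileChecksum(p);
      System.out.println(arg + "\t" + (sum == null ? "NONE" : sum));
    }
  }
}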



[jira] [Created] (HADOOP-9208) Fix release audit warnings

2013-01-14 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-9208:
--

 Summary: Fix release audit warnings
 Key: HADOOP-9208
 URL: https://issues.apache.org/jira/browse/HADOOP-9208
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu


The following files should be excluded from the rat check:

./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.odg
./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.odg
./hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/FI-framework.odg
./hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsarchitecture.odg
./hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsdatanodes.odg
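
One way to do this (a sketch; the exclude patterns are abbreviated to globs)
is via the apache-rat-plugin configuration in the affected pom.xml files:

<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- binary OpenDocument images cannot carry license headers -->
      <exclude>src/main/docs/src/documentation/resources/images/*.odg</exclude>
      <exclude>src/site/resources/images/*.odg</exclude>
    </excludes>
  </configuration>
</plugin>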



Re: python streaming error

2013-01-14 Thread Andy Isaacson
Oh, another link I should have included!
http://blog.cloudera.com/blog/2013/01/a-guide-to-python-frameworks-for-hadoop/

-andy

On Mon, Jan 14, 2013 at 2:19 PM, Andy Isaacson  wrote:
> Hadoop Streaming does not magically teach Python open() how to read
> from "hdfs://" URLs. You'll need to use a library or fork a "hdfs dfs
> -cat" to read the file for you.
>
> A few links that may help:
>
> http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
> http://stackoverflow.com/questions/12485718/python-read-file-as-stream-from-hdfs
> https://bitbucket.org/turnaev/cyhdfs
>
> -andy
>
> On Sat, Jan 12, 2013 at 12:30 AM, springring  wrote:
>> Hi,
>>
>>  When I run the code below as a streaming job, it errors out with status
>> N/A and is killed. Stepping through, I find it fails at
>> " file_obj = open(file) ". When I run the same code outside of Hadoop,
>> everything is OK.
>>
>> #!/bin/env python
>>
>> import sys
>>
>> for line in sys.stdin:
>>     offset, filename = line.split("\t")
>>     file = "hdfs://user/hdfs/catalog3/" + filename
>>     print line
>>     print filename
>>     print file
>>     file_obj = open(file)
>> ..
>>


Re: python streaming error

2013-01-14 Thread Andy Isaacson
Hadoop Streaming does not magically teach Python open() how to read
from "hdfs://" URLs. You'll need to use a library or fork a "hdfs dfs
-cat" to read the file for you.

A few links that may help:

http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
http://stackoverflow.com/questions/12485718/python-read-file-as-stream-from-hdfs
https://bitbucket.org/turnaev/cyhdfs

-andy

On Sat, Jan 12, 2013 at 12:30 AM, springring  wrote:
> Hi,
>
>  When I run the code below as a streaming job, it errors out with status
> N/A and is killed. Stepping through, I find it fails at
> " file_obj = open(file) ". When I run the same code outside of Hadoop,
> everything is OK.
>
> #!/bin/env python
>
> import sys
>
> for line in sys.stdin:
>     offset, filename = line.split("\t")
>     file = "hdfs://user/hdfs/catalog3/" + filename
>     print line
>     print filename
>     print file
>     file_obj = open(file)
> ..
>


[jira] [Created] (HADOOP-9207) version info source checksum does not include all source files

2013-01-14 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-9207:
-

 Summary: version info source checksum does not include all source 
files
 Key: HADOOP-9207
 URL: https://issues.apache.org/jira/browse/HADOOP-9207
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth


The build process takes an MD5 checksum of the source files in Common and YARN. 
 The HDFS version info command prints the checksum from Common.  The YARN 
version info command prints the checksum from YARN.  This is incomplete in that 
the HDFS source code is never included in a checksum, and two different YARN 
builds with the same YARN code but different Common code would have the same 
checksum.
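
For context, the checksum in question is the "source checksum" printed by the
version commands (output abbreviated; the value is shown as a placeholder):

$ hadoop version
...
From source with checksum <md5 over the Common sources only>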



[jira] [Created] (HADOOP-9206) "Setting up a Single Node Cluster" instructions need improvement in 0.23.5/2.0.2-alpha branches

2013-01-14 Thread Glen Mazza (JIRA)
Glen Mazza created HADOOP-9206:
--

 Summary: "Setting up a Single Node Cluster" instructions need 
improvement in 0.23.5/2.0.2-alpha branches
 Key: HADOOP-9206
 URL: https://issues.apache.org/jira/browse/HADOOP-9206
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.23.5, 2.0.2-alpha
Reporter: Glen Mazza


Hi, in contrast to the easy-to-follow 1.0.4 instructions 
(http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html), the 0.23.5 and 
2.0.2-alpha instructions 
(http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html)
 need more clarification -- they seem to be written for people who already know 
and understand Hadoop.  In particular, these points need clarification:

1.) Text: "You should be able to obtain the MapReduce tarball from the release."

Question: What is the MapReduce tarball?  What is its name?  I don't see such 
an object within the hadoop-0.23.5.tar.gz download.

2.) Quote: "NOTE: You will need protoc installed of version 2.4.1 or greater."

Protoc doesn't have a website you can link to (it's just mentioned offhand when 
you Google it) -- is it really the case today that Hadoop has a dependency on 
such a minor project?  At any rate, a link to where one goes to get/install 
protoc would be good.

3.) Quote: "Assuming you have installed hadoop-common/hadoop-hdfs and exported 
$HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME, untar hadoop mapreduce tarball and set 
environment variable $HADOOP_MAPRED_HOME to the untarred directory."

I'm not sure what you mean by the forward slashes in hadoop-common/hadoop-hdfs 
and $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME -- do you mean *and* (install both) 
or *or* (install just one of the two)?  This needs clarification -- please 
replace the forward slash with what you're actually trying to say.  The 
audience here is complete newbies, and they've been brought to this page from 
http://hadoop.apache.org/docs/r0.23.5/ (same with r2.0.2-alpha/) (quote: 
"Getting Started - The Hadoop documentation includes the information you need 
to get started using Hadoop. Begin with the Single Node Setup which shows you 
how to set up a single-node Hadoop installation."); they've downloaded 
hadoop-0.23.5.tar.gz and want to know what to do next.  Why are there 
potentially two applications -- hadoop-common and hadoop-hdfs -- and not just 
one?  (The download doesn't appear to have two separate apps.)  If there is 
indeed just one app, can we remove the other from the above text to avoid 
confusion?

Again, I just downloaded hadoop-0.23.5.tar.gz -- do I need to download more?  
If so, let us know in the docs here.

Also, regarding the fragment "Assuming you have installed 
hadoop-common/hadoop-hdfs...": No, I haven't -- that's what *this* page is 
supposed to explain to me.  How do I install these two (or just one of them)?

Also, what do I set $HADOOP_COMMON_HOME and/or $HADOOP_HDFS_HOME to?

4.) Quote: "NOTE: The following instructions assume you have hdfs running."  
No, I don't--how do I do this?  Again, this page is supposed to teach me that.

5.) Quote: "To start the ResourceManager and NodeManager, you will have to 
update the configs. Assuming your $HADOOP_CONF_DIR is the configuration 
directory..."

Could you clarify here what the "configuration directory" is?  It doesn't exist 
in the 0.23.5 download; I just see bin, etc, include, lib, libexec, sbin, and 
share folders, but no "conf" one.

6.) Quote: "Assuming that the environment variables $HADOOP_COMMON_HOME, 
$HADOOP_HDFS_HOME, $HADOO_MAPRED_HOME, $YARN_HOME, $JAVA_HOME and 
$HADOOP_CONF_DIR have been set appropriately."

We'll need to know what to set YARN_HOME to here.

Thanks!
Glen



Re: Hadoop datajoin package

2013-01-14 Thread Harsh J
Already done and available in trunk and 2.x releases today:
http://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/
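
A rough usage sketch of that package (illustrative paths; untested):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat;

public class JoinSketch {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "join-sketch");
    // Inner join over two sorted, identically partitioned inputs.
    job.getConfiguration().set(CompositeInputFormat.JOIN_EXPR,
        CompositeInputFormat.compose("inner", KeyValueTextInputFormat.class,
            new Path(args[0]), new Path(args[1])));
    job.setInputFormatClass(CompositeInputFormat.class);
    // ... mapper, reducer and output setup elided ...
  }
}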


On Mon, Jan 14, 2013 at 7:44 PM, Hemanth Yamijala wrote:

> On the user list, there was a question about the Hadoop datajoin package.
> Specifically, its dependency on the old API.
>
> Is this package still in use? Should we file a JIRA to migrate it to the
> new API?
>
> Thanks
> hemanth
>



-- 
Harsh J


[jira] [Created] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-14 Thread Ivan A. Veselovsky (JIRA)
Ivan A. Veselovsky created HADOOP-9205:
--

 Summary: Java7: path to native libraries should be passed to tests 
via -Djava.library.path rather than env.LD_LIBRARY_PATH
 Key: HADOOP-9205
 URL: https://issues.apache.org/jira/browse/HADOOP-9205
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
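
A sketch of the kind of change the summary implies, assuming the tests run
via maven-surefire (the property holding the native lib directory is
illustrative):

<!-- Pass the native library location as a JVM flag instead of
     exporting LD_LIBRARY_PATH into the forked test environment. -->
<argLine>-Djava.library.path=${native.lib.dir}</argLine>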






[jira] [Created] (HADOOP-9204) fix apacheds distribution download link URL

2013-01-14 Thread Ivan A. Veselovsky (JIRA)
Ivan A. Veselovsky created HADOOP-9204:
--

 Summary: fix apacheds distribution download link URL
 Key: HADOOP-9204
 URL: https://issues.apache.org/jira/browse/HADOOP-9204
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky


The apacheds server is used in some security tests in the hadoop-common and 
hadoop-hdfs modules under the "startKdc" profile.
The build script downloads the server, unpacks it, configures it, and runs it.

The problem is that the URL used, 
http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz,
 no longer works (returns 404).

The suggested patch parameterizes the URL so that it can be set in a single 
place in the parent pom.xml, and sets it to a working value.
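
A sketch of the parametrization described (the property name is illustrative;
its value would be the working download URL):

<!-- parent pom.xml: one place to change the download location -->
<properties>
  <apacheds.download.url>...working mirror.../apacheds-1.5.7.tar.gz</apacheds.download.url>
</properties>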



[jira] [Resolved] (HADOOP-8274) In pseudo or cluster model under Cygwin, tasktracker can not create a new job because of symlink problem.

2013-01-14 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-8274.
-

Resolution: Won't Fix

Since the mainstream branch does not actively support Windows, I am closing 
this as Won't Fix.

I'm certain the same issue does not happen on the branch-1-win 1.x branch (or 
the branch-trunk-win branch), and I urge you to use one of those instead if you 
wish to continue using Windows for development or other work. Find the 
Windows-optimized sources at 
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-1-win/ or 
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-trunk-win/.

> In pseudo or cluster model under Cygwin, tasktracker can not create a new job 
> because of symlink problem.
> -
>
> Key: HADOOP-8274
> URL: https://issues.apache.org/jira/browse/HADOOP-8274
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 1.0.0, 1.0.1, 0.22.0
> Environment: windows7+cygwin 1.7.11-1+jdk1.6.0_31+hadoop 1.0.0
>Reporter: tim.wu
>
> Standalone mode is OK. But in pseudo-distributed or cluster mode, it always 
> throws errors, even when I just run the wordcount example.
> HDFS works fine, but the tasktracker cannot create threads (JVMs) for a new 
> job. The directory under /logs/userlogs/job-/attempt-/ is empty.
> The reason appears to be that on Windows, Java cannot recognize a symlink to 
> a folder as a folder.
> The detail description is as following,
> ==
> First, the error log of tasktracker is like:
> ==
> 12/03/28 14:35:13 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: 
> jvm_201203280212_0005_m_-1386636958
> 12/03/28 14:35:13 INFO mapred.JvmManager: JVM Runner 
> jvm_201203280212_0005_m_-1386636958 spawned.
> 12/03/28 14:35:17 INFO mapred.JvmManager: JVM Not killed 
> jvm_201203280212_0005_m_-1386636958 but just removed
> 12/03/28 14:35:17 INFO mapred.JvmManager: JVM : 
> jvm_201203280212_0005_m_-1386636958 exited with exit code -1. Number of tasks 
> it ran: 0
> 12/03/28 14:35:17 WARN mapred.TaskRunner: 
> attempt_201203280212_0005_m_02_0 : Child Error
> java.io.IOException: Task process exit with nonzero status of -1.
> at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
> 12/03/28 14:35:21 INFO mapred.TaskTracker: addFreeSlot : current free slots : 
> 2
> 12/03/28 14:35:24 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): 
> attempt_201203280212_0005_m_02_1 task's state:UNASSIGNED
> 12/03/28 14:35:24 INFO mapred.TaskTracker: Trying to launch : 
> attempt_201203280212_0005_m_02_1 which needs 1 slots
> 12/03/28 14:35:24 INFO mapred.TaskTracker: In TaskLauncher, current free 
> slots : 2 and trying to launch attempt_201203280212_0005_m_02_1 which 
> needs 1 slots
> 12/03/28 14:35:24 WARN mapred.TaskLog: Failed to retrieve stdout log for 
> task: attempt_201203280212_0005_m_02_0
> java.io.FileNotFoundException: 
> D:\cygwin\home\timwu\hadoop-1.0.0\logs\userlogs\job_201203280212_0005\attempt_201203280212_0005_m_02_0\log.index
>  (The system cannot find the path specified)
> at java.io.FileInputStream.open(Native Method)
> at java.io.FileInputStream.(FileInputStream.java:120)
> at 
> org.apache.hadoop.io.SecureIOUtils.openForRead(SecureIOUtils.java:102)
> at 
> org.apache.hadoop.mapred.TaskLog.getAllLogsFileDetails(TaskLog.java:188)
> at org.apache.hadoop.mapred.TaskLog$Reader.(TaskLog.java:423)
> at 
> org.apache.hadoop.mapred.TaskLogServlet.printTaskLog(TaskLogServlet.java:81)
> at 
> org.apache.hadoop.mapred.TaskLogServlet.doGet(TaskLogServlet.java:296)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
> at 
> org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:835)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mor

Build failed in Jenkins: Hadoop-Common-trunk #653

2013-01-14 Thread Apache Jenkins Server
See 

--
[...truncated 29816 lines...]
 [exec] 
 [exec] unpack-plugin:
 [exec] 
 [exec] install-plugin:
 [exec] 
 [exec] configure-plugin:
 [exec] 
 [exec] configure-output-plugin:
 [exec] Mounting output plugin: org.apache.forrest.plugin.output.pdf
 [exec] Processing 

 to 

 [exec] Loading stylesheet 
/home/jenkins/tools/forrest/latest/main/var/pluginMountSnippet.xsl
 [exec] Moving 1 file to 

 [exec] 
 [exec] configure-plugin-locationmap:
 [exec] Mounting plugin locationmap for org.apache.forrest.plugin.output.pdf
 [exec] Processing 

 to 

 [exec] Loading stylesheet 
/home/jenkins/tools/forrest/latest/main/var/pluginLmMountSnippet.xsl
 [exec] Moving 1 file to 

 [exec] 
 [exec] init:
 [exec] 
 [exec] -prepare-classpath:
 [exec] 
 [exec] check-contentdir:
 [exec] 
 [exec] examine-proj:
 [exec] 
 [exec] validation-props:
 [exec] Using these catalog descriptors: 
/home/jenkins/tools/forrest/latest/main/webapp/resources/schema/catalog.xcat:/home/jenkins/tools/forrest/latest/build/plugins/catalog.xcat:
 [exec] 
 [exec] validate-xdocs:
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] Warning: 

 not found.
 [exec] 7 file(s) have been successfully validated.
 [exec] ...validated xdocs
 [exec] 
 [exec] validate-skinconf:
 [exec] 1 file(s) have been successfully validated.
 [exec] ...validated skinconf
 [exec] 
 [exec] validate-sitemap:
 [exec] 
 [exec] validate-skins-stylesheets:
 [exec] 
 [exec] validate-skins:
 [exec] 
 [exec] validate-skinchoice:
 [exec] ...validated existence of skin 'pelt'
 [exec] 
 [exec] validate-stylesheets:
 [exec] 
 [exec] validate:
 [exec] 
 [exec] site:
 [exec] 
 [exec] Copying the various non-generated resources to site.
 [exec] Warnings will be issued if the optional project resources are not 
found.
 [exec] This is often the case, because they are optional and so may not be 
available.
 [exec] Copying project resources and images to site ...
 [exec] Copied 1 empty directory to 1 empty directory under 

 [exec] Copying main skin images to site ...
 [exec] Created dir: 

 [exec] Copying 20 files to 

 [exec] Copying 14 files to 

 [exec] Copying project skin images to site ...
 [exec] Copying main skin css and js files to site ...
 [exec] Copying 11 files to 


Build failed in Jenkins: Hadoop-Common-0.23-Build #494

2013-01-14 Thread Apache Jenkins Server
See 

--
[...truncated 10346 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.812 sec
Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.88 sec
Running org.apache.hadoop.fs.s3.TestINode
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.11 sec
Running org.apache.hadoop.fs.s3.TestS3Credentials
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.194 sec
Running org.apache.hadoop.fs.s3.TestS3FileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec
Running org.apache.hadoop.fs.TestDU
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.23 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.113 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.428 sec
Running org.apache.hadoop.metrics2.util.TestSampleStat
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.131 sec
Running org.apache.hadoop.metrics2.util.TestMetricsCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.075 sec
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.272 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.491 sec
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.449 sec
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.419 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.286 sec
Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.47 sec
Running org.apache.hadoop.metrics2.impl.TestSinkQueue
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.534 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.647 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.203 sec
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.886 sec
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.482 sec
Running org.apache.hadoop.io.TestSequenceFileSerialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.621 sec
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.824 sec
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.141 sec
Running org.apache.hadoop.io.TestText
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.883 sec
Running org.apache.hadoop.io.TestMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.226 sec
Running org.apache.hadoop.io.compress.TestCodecFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.376 sec
Running org.apache.hadoop.io.compress.TestBlockDecompressorStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.142 sec
Running org.apache.hadoop.io.compress.TestCodec
Tests run: 21, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 59.852 sec
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.396 sec
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.564 sec
Running org.apache.hadoop.io.TestWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.132 sec
Running org.apache.hadoop.io.TestSecureIOUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0,