[jira] [Created] (MAPREDUCE-3582) Move successfully passing MR1 tests to MR2 maven tree.

2011-12-20 Thread Ahmed Radwan (Created) (JIRA)
Move successfully passing MR1 tests to MR2 maven tree.
--

 Key: MAPREDUCE-3582
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3582
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, test
Reporter: Ahmed Radwan
Assignee: Ahmed Radwan


This ticket will track moving MR1 tests that pass successfully to the MR2 
maven tree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: mysterious NumberFormatException

2011-12-20 Thread Tsz Wo Sze
Hi Ted,


Using String is a good idea.  Or we may use Long, but then we have to be more 
careful to parse 64-bit integers (use a negative long when >= 2^63).  Either 
way is fine.
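The negative-long approach can be sketched like this. This is a standalone illustration, not the actual Hadoop patch; it uses BigInteger since Long.parseUnsignedLong only appeared in later JDKs:

```java
import java.math.BigInteger;

public class UnsignedPpid {
    // Parse an unsigned 64-bit decimal string into a signed long.
    // Values in [2^63, 2^64) wrap to negative longs, which is the
    // "negative long" convention suggested above.
    static long parseUnsigned(String s) {
        BigInteger v = new BigInteger(s);
        if (v.signum() < 0 || v.bitLength() > 64) {
            throw new NumberFormatException("not an unsigned 64-bit value: " + s);
        }
        return v.longValue();  // keeps the low 64 bits, two's complement
    }

    public static void main(String[] args) {
        // The ppid from the failing build; Long.parseLong(ppid) throws
        // NumberFormatException because it exceeds Long.MAX_VALUE.
        String ppid = "18446743988060683582";
        System.out.println(parseUnsigned(ppid));  // prints a negative long
    }
}
```

Comparisons on such wrapped longs no longer follow numeric order, so this only works where the value is treated as an opaque identifier.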


Could you file a JIRA?  Thanks for catching the bug.


Nicholas




 From: Ted Yu yuzhih...@gmail.com
To: mapreduce-dev@hadoop.apache.org; Tsz Wo Sze szets...@yahoo.com 
Cc: giridharan kesavan gkesa...@hortonworks.com 
Sent: Monday, December 19, 2011 11:20 PM
Subject: Re: mysterious NumberFormatException
 
Thanks for the analysis, Nicholas.

Is it reasonable to change allProcessInfo to Map<String, ProcessInfo> so
that we don't encounter this problem, by avoiding parsing large integers?

On Mon, Dec 19, 2011 at 9:59 PM, Tsz Wo Sze szets...@yahoo.com wrote:

 Hi,

 It looks like the ppid is a 64-bit positive integer, but a Java long is
 signed and so only works with 63-bit positive integers.  In your case,

   2^64 > 18446743988060683582 >= 2^63.

 Therefore, there is an NFE.  I think it is a bug in ProcfsBasedProcessTree.


 Regards,

 Nicholas Sze



 
  From: Ted Yu yuzhih...@gmail.com
 To: mapreduce-dev@hadoop.apache.org
 Cc: giridharan kesavan gkesa...@hortonworks.com
 Sent: Monday, December 19, 2011 8:24 PM
 Subject: mysterious NumberFormatException

 Hi,
 HBase PreCommit builds frequently gave us a mysterious NumberFormatException

 From

 https://builds.apache.org/job/PreCommit-HBASE-Build/553//testReport/org.apache.hadoop.hbase.mapreduce/TestHFileOutputFormat/testMRIncrementalLoad/
 :

 2011-12-20
 01:44:01,180 WARN  [main] mapred.JobClient(784): No job jar
 file set.  User classes may not be found. See JobConf(Class) or
 JobConf#setJar(String).
 java.lang.NumberFormatException: For input string: 18446743988060683582
     at
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
     at java.lang.Long.parseLong(Long.java:422)
     at java.lang.Long.parseLong(Long.java:468)
     at
 org.apache.hadoop.util.ProcfsBasedProcessTree.constructProcessInfo(ProcfsBasedProcessTree.java:413)
     at
 org.apache.hadoop.util.ProcfsBasedProcessTree.getProcessTree(ProcfsBasedProcessTree.java:148)
     at
 org.apache.hadoop.util.LinuxResourceCalculatorPlugin.getProcResourceValues(LinuxResourceCalculatorPlugin.java:401)
     at
 org.apache.hadoop.mapred.Task.initialize(Task.java:536)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:353)
     at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:396)
     at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
     at org.apache.hadoop.mapred.Child.main(Child.java:249)

 From the hadoop 0.20.205 source code, it looks like ppid was 
 18446743988060683582, causing the NFE:
         // Set (name) (ppid) (pgrpId) (session) (utime) (stime) (vsize)
 (rss)
          pinfo.updateProcessInfo(m.group(2), Integer.parseInt(m.group(3)),

 You can find information on the OS at the
 beginning of
 https://builds.apache.org/job/PreCommit-HBASE-Build/553/console:

 asf011.sp2.ygridcore.net
 Linux asf011.sp2.ygridcore.net 2.6.32-33-server #71-Ubuntu SMP Wed Jul
 20 17:42:25 UTC 2011 x86_64 GNU/Linux
 core file size          (blocks, -c) 0
 data seg size           (kbytes, -d) unlimited
 scheduling priority             (-e) 20
 file size               (blocks, -f) unlimited
 pending signals                 (-i) 16382
 max locked memory       (kbytes, -l) 64
 max memory size         (kbytes, -m) unlimited
 open files              (-n) 6
 pipe size            (512 bytes, -p) 8
 POSIX message queues     (bytes, -q) 819200
 real-time priority              (-r) 0
 stack size              (kbytes, -s) 8192
 cpu time               (seconds, -t) unlimited
 max user processes              (-u) 2048
 virtual memory          (kbytes, -v) unlimited
 file locks                      (-x) unlimited
 6
 Running in Jenkins mode

 Your insight is welcome.


[jira] [Created] (MAPREDUCE-3583) ProcfsBasedProcessTree#constructProcessInfo() may throw NumberFormatException

2011-12-20 Thread Zhihong Yu (Created) (JIRA)
ProcfsBasedProcessTree#constructProcessInfo() may throw NumberFormatException
-

 Key: MAPREDUCE-3583
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3583
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.205.0
 Environment: 64-bit Linux:
asf011.sp2.ygridcore.net
Linux asf011.sp2.ygridcore.net 2.6.32-33-server #71-Ubuntu SMP Wed Jul 20 
17:42:25 UTC 2011 x86_64 GNU/Linux
Reporter: Zhihong Yu


HBase PreCommit builds frequently gave us a NumberFormatException.

From 
https://builds.apache.org/job/PreCommit-HBASE-Build/553//testReport/org.apache.hadoop.hbase.mapreduce/TestHFileOutputFormat/testMRIncrementalLoad/:
{code}
2011-12-20 01:44:01,180 WARN  [main] mapred.JobClient(784): No job jar file 
set.  User classes may not be found. See JobConf(Class) or 
JobConf#setJar(String).
java.lang.NumberFormatException: For input string: 18446743988060683582
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Long.parseLong(Long.java:422)
at java.lang.Long.parseLong(Long.java:468)
at 
org.apache.hadoop.util.ProcfsBasedProcessTree.constructProcessInfo(ProcfsBasedProcessTree.java:413)
at 
org.apache.hadoop.util.ProcfsBasedProcessTree.getProcessTree(ProcfsBasedProcessTree.java:148)
at 
org.apache.hadoop.util.LinuxResourceCalculatorPlugin.getProcResourceValues(LinuxResourceCalculatorPlugin.java:401)
at org.apache.hadoop.mapred.Task.initialize(Task.java:536)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:353)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
{code}
From the hadoop 0.20.205 source code, it looks like ppid was 
18446743988060683582, causing the NFE:
{code}
// Set (name) (ppid) (pgrpId) (session) (utime) (stime) (vsize) (rss)
 pinfo.updateProcessInfo(m.group(2), Integer.parseInt(m.group(3)),
{code}
You can find information on the OS at the beginning of 
https://builds.apache.org/job/PreCommit-HBASE-Build/553/console:
{code}
asf011.sp2.ygridcore.net
Linux asf011.sp2.ygridcore.net 2.6.32-33-server #71-Ubuntu SMP Wed Jul 20 
17:42:25 UTC 2011 x86_64 GNU/Linux
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 20
file size   (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 2048
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
6
Running in Jenkins mode
{code}

From Nicholas Sze:
{noformat}
It looks like the ppid is a 64-bit positive integer, but a Java long is 
signed and so only works with 63-bit positive integers.  In your case,

  2^64 > 18446743988060683582 >= 2^63.

Therefore, there is an NFE.
{noformat}

I propose changing allProcessInfo to Map<String, ProcessInfo> so that we don't 
encounter this problem, by avoiding parsing large integers.
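A String-keyed process map could look roughly like the following. ProcessInfo here is a simplified stand-in, not the real ProcfsBasedProcessTree inner class:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Hadoop's ProcessInfo; the real
// ProcfsBasedProcessTree class differs in detail.
class ProcessInfo {
    final String pid;
    String ppid;  // kept as a String, never parsed into a number
    ProcessInfo(String pid) { this.pid = pid; }
}

public class ProcessTreeSketch {
    public static void main(String[] args) {
        // Keying the map by the pid String sidesteps the overflow entirely:
        // building the tree only needs equality on pids, not arithmetic.
        Map<String, ProcessInfo> allProcessInfo =
            new HashMap<String, ProcessInfo>();

        ProcessInfo p = new ProcessInfo("12345");
        // The problematic ppid from the build log: larger than Long.MAX_VALUE,
        // so Long.parseLong would throw, but a String stores it fine.
        p.ppid = "18446743988060683582";
        allProcessInfo.put(p.pid, p);

        System.out.println(allProcessInfo.containsKey("12345"));  // true
    }
}
```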





[jira] [Created] (MAPREDUCE-3584) streaming.jar -file packaging forgets timestamps

2011-12-20 Thread Dieter Plaetinck (Created) (JIRA)
streaming.jar -file packaging forgets timestamps


 Key: MAPREDUCE-3584
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3584
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Dieter Plaetinck


When invoking hadoop jar 
/usr/local/hadoop/contrib/streaming/hadoop-0.20.2-streaming.jar -file files, 
hadoop will package the given files, but it forgets their timestamps.
After the files are unpacked in 
tmp_dir/mapred/local/taskTracker/jobcache/job_$job/jars, all files have the 
timestamps of when they were unpacked.
The problem is that meaningful information is lost this way.
For example, in my case I ship some files along with my job and need to 
compare the age (mtime) of 2 files, rebuilding one of them if it's too old; 
because of this hadoop behavior, my logic breaks.
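The mtime comparison the reporter relies on can be sketched as follows. The file names are hypothetical, and setLastModified is used only to simulate shipped files with known timestamps:

```java
import java.io.File;
import java.io.IOException;

public class RebuildIfStale {
    // Returns true when 'derived' is missing or older than 'source',
    // i.e. it needs to be rebuilt. Exactly this kind of check breaks
    // when unpacking resets every mtime to the unpack time.
    static boolean needsRebuild(File source, File derived) {
        return !derived.exists()
            || derived.lastModified() < source.lastModified();
    }

    public static void main(String[] args) throws IOException {
        File source = File.createTempFile("source", ".txt");
        File derived = File.createTempFile("derived", ".txt");
        source.deleteOnExit();
        derived.deleteOnExit();
        // Simulate timestamps: the derived file predates the source.
        source.setLastModified(2000000L);
        derived.setLastModified(1000000L);
        System.out.println(needsRebuild(source, derived));  // true
    }
}
```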






JAXB / Guice errors

2011-12-20 Thread Ravi Prakash
Hi,

Is anyone seeing these errors when they try to access the RM Web UI?

HTTP ERROR 500

Problem accessing /. Reason:

Guice provision errors:

1) Error injecting constructor, java.lang.LinkageError: JAXB 2.1 API is
being loaded from the bootstrap classloader, but this RI (from
jar:file:somePath/hadoop-0.23.1-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar!/com/sun/xml/bind/v2/model/impl/ModelBuilder.class)
needs 2.2 API. Use the endorsed directory mechanism to place jaxb-api.jar
in the bootstrap classloader. (See
http://java.sun.com/j2se/1.6.0/docs/guide/standards/)
  at
org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.init(JAXBContextResolver.java:60)
  at
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:45)
  while locating
org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver

1 error

Anyone fix it yet?

Cheers
Ravi.


Re: JAXB / Guice errors

2011-12-20 Thread Vinod Kumar Vavilapalli
Can you please open a ticket? It must be related to MAPREDUCE-2863 . Thomas
can help with this.

Thanks,
+Vinod


On Tue, Dec 20, 2011 at 10:09 AM, Ravi Prakash ravihad...@gmail.com wrote:

 Hi,

 Is anyone seeing these errors when they try to access the RM Web UI?

 HTTP ERROR 500

 Problem accessing /. Reason:

Guice provision errors:

 1) Error injecting constructor, java.lang.LinkageError: JAXB 2.1 API is
 being loaded from the bootstrap classloader, but this RI (from

 jar:file:somePath/hadoop-0.23.1-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar!/com/sun/xml/bind/v2/model/impl/ModelBuilder.class)
 needs 2.2 API. Use the endorsed directory mechanism to place jaxb-api.jar
 in the bootstrap classloader. (See
 http://java.sun.com/j2se/1.6.0/docs/guide/standards/)
  at

 org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.init(JAXBContextResolver.java:60)
  at

 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:45)
  while locating
 org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver

 1 error

 Anyone fix it yet?

 Cheers
 Ravi.



[jira] [Created] (MAPREDUCE-3585) RM unable to detect NMs restart

2011-12-20 Thread Bh V S Kamesh (Created) (JIRA)
RM unable to detect NMs restart
---

 Key: MAPREDUCE-3585
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3585
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Reporter: Bh V S Kamesh


Suppose multiple NMs have been configured on a single host. In this case, 
there should be a mechanism to detect when the NMs come back.





[jira] [Resolved] (MAPREDUCE-3575) Streaming/tools Jar does not get included in the tarball.

2011-12-20 Thread Eli Collins (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved MAPREDUCE-3575.


Resolution: Duplicate

 Streaming/tools Jar does not get included in the tarball.
 -

 Key: MAPREDUCE-3575
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3575
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Reporter: Mahadev konar
Priority: Blocker
 Fix For: 0.23.1


 The streaming jar used to be available in the mapreduce tarballs before we 
 created the hadoop-tools package. The streaming and tools jars are not being 
 shipped with any tars. Our mapreduce tarballs should include the streaming 
 and tools jars.





setting up Eclipse environment for 0.20.205

2011-12-20 Thread Ted Yu
Hi,
I think the following wiki is for hadoop TRUNK:
http://wiki.apache.org/hadoop/EclipseEnvironment

Is there a wiki for 0.20.205?

I ran 'ant eclipse' and imported hadoop into Eclipse.
When I tried to run TestProcfsBasedProcessTree, I got:

Class not found org.apache.hadoop.util.TestProcfsBasedProcessTree
java.lang.ClassNotFoundException:
org.apache.hadoop.util.TestProcfsBasedProcessTree
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.loadClass(RemoteTestRunner.java:693)
at
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.loadClasses(RemoteTestRunner.java:429)
at
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:452)
at
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

Can someone let me know which step I missed?

Thanks


Re: JAXB / Guice errors

2011-12-20 Thread Ravi Prakash
Hi Vinod,

I solved my issue. I had a stale version of java pointed to by JAVA_HOME.
$ ./jdk1.6.0_01/bin/java -version
java version 1.6.0_01
Java(TM) SE Runtime Environment (build 1.6.0_01-b06)
Java HotSpot(TM) Server VM (build 1.6.0_01-b06, mixed mode)

Updating to $ ./jdk1.6.0_30/bin/java -version
java version 1.6.0_30
Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
Java HotSpot(TM) Server VM (build 20.5-b03, mixed mode)

fixed the problem

Thanks
Ravi


On Tue, Dec 20, 2011 at 12:13 PM, Vinod Kumar Vavilapalli 
vino...@hortonworks.com wrote:

 Can you please open a ticket? It must be related to MAPREDUCE-2863 . Thomas
 can help with this.

 Thanks,
 +Vinod


 On Tue, Dec 20, 2011 at 10:09 AM, Ravi Prakash ravihad...@gmail.com
 wrote:

  Hi,
 
  Is anyone seeing these errors when they try to access the RM Web UI?
 
  HTTP ERROR 500
 
  Problem accessing /. Reason:
 
 Guice provision errors:
 
  1) Error injecting constructor, java.lang.LinkageError: JAXB 2.1 API is
  being loaded from the bootstrap classloader, but this RI (from
 
 
 jar:file:somePath/hadoop-0.23.1-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar!/com/sun/xml/bind/v2/model/impl/ModelBuilder.class)
  needs 2.2 API. Use the endorsed directory mechanism to place jaxb-api.jar
  in the bootstrap classloader. (See
  http://java.sun.com/j2se/1.6.0/docs/guide/standards/)
   at
 
 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.init(JAXBContextResolver.java:60)
   at
 
 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:45)
   while locating
  org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver
 
  1 error
 
  Anyone fix it yet?
 
  Cheers
  Ravi.
 



[jira] [Created] (MAPREDUCE-3586) Lots of AMs hanging around in PIG testing

2011-12-20 Thread Vinod Kumar Vavilapalli (Created) (JIRA)
Lots of AMs hanging around in PIG testing
-

 Key: MAPREDUCE-3586
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3586
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am, mrv2
Affects Versions: 0.23.0
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Blocker
 Fix For: 0.23.1


[~daijy] found this. Here's what he says:
bq. I see hundreds of MRAppMaster processes on my machine, and lots of tests 
fail with "Too many open files".





[jira] [Created] (MAPREDUCE-3587) The deployment tarball should have different directories for yarn jars and mapreduce jars.

2011-12-20 Thread Mahadev konar (Created) (JIRA)
The deployment tarball should have different directories for yarn jars and 
mapreduce jars.
--

 Key: MAPREDUCE-3587
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3587
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Mahadev konar
Assignee: Mahadev konar


Currently all the jars in the MR tarball go to share/hadoop/mapreduce. The jars 
should be split into share/hadoop/yarn and share/hadoop/mapreduce for a clear 
separation between the yarn framework and MR.





[jira] [Resolved] (MAPREDUCE-2972) Running commands from the hadoop-mapreduce-test-*.jar fails with ClassNotFoundException: junit.framework.TestCase

2011-12-20 Thread John George (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George resolved MAPREDUCE-2972.


Resolution: Cannot Reproduce

 Running commands from the hadoop-mapreduce-test-*.jar fails with  
 ClassNotFoundException: junit.framework.TestCase
 --

 Key: MAPREDUCE-2972
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2972
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Reporter: Jeffrey Naisbitt
Assignee: Jeffrey Naisbitt
Priority: Minor

 Running any of the 'hadoop jar hadoop-mapreduce-test-*.jar' commands gives 
 the following exception:
 java.lang.NoClassDefFoundError: junit/framework/TestCase
   at java.lang.ClassLoader.defineClass1(Native Method)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
   at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
   at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:300)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
   at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
   at 
 org.apache.hadoop.test.MapredTestDriver.init(MapredTestDriver.java:59)
   at 
 org.apache.hadoop.test.MapredTestDriver.init(MapredTestDriver.java:53)
   at 
 org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:118)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:189)
 Caused by: java.lang.ClassNotFoundException: junit.framework.TestCase
   at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
   at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
   ... 21 more
 This happens even when just running 'hadoop jar $TEST_JAR' where it should 
 just print the available commands.
 Copying the junit-*.jar from $HADOOP_MAPRED_HOME/lib/ to 
 $HADOOP_COMMON_HOME/share/hadoop/common/lib/ seems to fix the problem.





[jira] [Created] (MAPREDUCE-3588) bin/yarn broken after MAPREDUCE-3366

2011-12-20 Thread Arun C Murthy (Created) (JIRA)
bin/yarn broken after MAPREDUCE-3366


 Key: MAPREDUCE-3588
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3588
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.23.1
Reporter: Arun C Murthy
Assignee: Arun C Murthy
Priority: Blocker
 Fix For: 0.23.1


bin/yarn is broken after MAPREDUCE-3366; it doesn't add the yarn jars to the 
classpath. As a result, no servers can be started.





Re: JAXB / Guice errors

2011-12-20 Thread Vinod Kumar Vavilapalli
We should definitely add this to the README/INSTALL/wiki. Can you please do 
that?

Thanks,
+Vinod

On Tue, Dec 20, 2011 at 11:57 AM, Ravi Prakash ravihad...@gmail.com wrote:

 Hi Vinod,

 I solved my issue. I had a stale version of java pointed to by JAVA_HOME.
 $ ./jdk1.6.0_01/bin/java -version
 java version 1.6.0_01
 Java(TM) SE Runtime Environment (build 1.6.0_01-b06)
 Java HotSpot(TM) Server VM (build 1.6.0_01-b06, mixed mode)

 Updating to $ ./jdk1.6.0_30/bin/java -version
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) Server VM (build 20.5-b03, mixed mode)

 fixed the problem

 Thanks
 Ravi


 On Tue, Dec 20, 2011 at 12:13 PM, Vinod Kumar Vavilapalli 
 vino...@hortonworks.com wrote:

  Can you please open a ticket? It must be related to MAPREDUCE-2863 .
 Thomas
  can help with this.
 
  Thanks,
  +Vinod
 
 
  On Tue, Dec 20, 2011 at 10:09 AM, Ravi Prakash ravihad...@gmail.com
  wrote:
 
   Hi,
  
   Is anyone seeing these errors when they try to access the RM Web UI?
  
   HTTP ERROR 500
  
   Problem accessing /. Reason:
  
  Guice provision errors:
  
   1) Error injecting constructor, java.lang.LinkageError: JAXB 2.1 API is
   being loaded from the bootstrap classloader, but this RI (from
  
  
 
 jar:file:somePath/hadoop-0.23.1-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar!/com/sun/xml/bind/v2/model/impl/ModelBuilder.class)
   needs 2.2 API. Use the endorsed directory mechanism to place
 jaxb-api.jar
   in the bootstrap classloader. (See
   http://java.sun.com/j2se/1.6.0/docs/guide/standards/)
at
  
  
 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver.init(JAXBContextResolver.java:60)
at
  
  
 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebApp.setup(RMWebApp.java:45)
while locating
  
 org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver
  
   1 error
  
   Anyone fix it yet?
  
   Cheers
   Ravi.
  
 



[jira] [Resolved] (MAPREDUCE-3515) hadoop 0.23: native compression libraries not being loaded

2011-12-20 Thread Wing Yew Poon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wing Yew Poon resolved MAPREDUCE-3515.
--

Resolution: Duplicate

 hadoop 0.23: native compression libraries not being loaded
 --

 Key: MAPREDUCE-3515
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3515
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Wing Yew Poon

 I installed the hadoop package from the Bigtop hadoop 0.23 branch. Among 
 other files, the package installs
 /usr/lib/hadoop/lib/native
 /usr/lib/hadoop/lib/native/libhadoop.a
 /usr/lib/hadoop/lib/native/libhadoop.so.1
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
 /usr/lib/hadoop/lib/native/libhdfs.a
 I ran a simple job using compression:
 hadoop jar /usr/lib/hadoop/hadoop-mapreduce-examples.jar wordcount -D 
 mapreduce.output.fileoutputformat.compress=true -D 
 mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
  examples/text wordcount-gz
 I see
 11/12/06 13:42:06 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 11/12/06 13:42:06 WARN snappy.LoadSnappy: Snappy native library not loaded
 and at the end, I see
 -rw-r--r--   1 root supergroup  0 2011-12-06 13:42 
 wordcount-gz/_SUCCESS
 -rw-r--r--   1 root supergroup  46228 2011-12-06 13:42 
 wordcount-gz/part-r-0.gz
 so the output is compressed, but from the log message, I assume that the 
 native library is not being loaded and that the java gzip is being used.





proper way to run TestSaslRPC

2011-12-20 Thread Ted Yu
Hi,
In 0.20.205, I used this command:
ant test-core

I saw:

Testcase: testDigestAuthMethodHostBasedToken took 0.026 sec
  Caused an ERROR
failure to login
java.io.IOException: failure to login
  at
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:452)
  at
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:414)
  at
org.apache.hadoop.ipc.TestSaslRPC.testDigestAuthMethod(TestSaslRPC.java:366)
  at
org.apache.hadoop.ipc.TestSaslRPC.testDigestAuthMethodHostBasedToken(TestSaslRPC.java:414)
Caused by: javax.security.auth.login.LoginException:
java.lang.IllegalArgumentException: Illegal principal name zhi...@x.com
  at org.apache.hadoop.security.User.init(User.java:46)
  at org.apache.hadoop.security.User.init(User.java:39)
  at
org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:123)
  at javax.security.auth.login.LoginContext.invoke(LoginContext.java:769)
  at
javax.security.auth.login.LoginContext.access$000(LoginContext.java:186)
  at javax.security.auth.login.LoginContext$5.run(LoginContext.java:706)
  at java.security.AccessController.doPrivileged(Native Method)
  at
javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:703)
  at javax.security.auth.login.LoginContext.login(LoginContext.java:576)
  at
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:433)
  at
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:414)
  at
org.apache.hadoop.ipc.TestSaslRPC.testDigestAuthMethod(TestSaslRPC.java:366)
  at
org.apache.hadoop.ipc.TestSaslRPC.testDigestAuthMethodHostBasedToken(TestSaslRPC.java:414)
Caused by: org.apache.hadoop.security.KerberosName$NoMatchingRule: No rules
applied to zhi...@x.com
  at
org.apache.hadoop.security.KerberosName.getShortName(KerberosName.java:394)
  at org.apache.hadoop.security.User.init(User.java:44)

  at javax.security.auth.login.LoginContext.invoke(LoginContext.java:872)
  at
javax.security.auth.login.LoginContext.access$000(LoginContext.java:186)
  at javax.security.auth.login.LoginContext$5.run(LoginContext.java:706)
  at java.security.AccessController.doPrivileged(Native Method)
  at
javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:703)
  at javax.security.auth.login.LoginContext.login(LoginContext.java:576)
  at
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:433)

What would be the proper way of running TestSaslRPC?

Thanks
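One likely cause of the KerberosName$NoMatchingRule failure above is that hadoop.security.auth_to_local has no rule matching the test principal's realm. A hedged sketch of a core-site.xml entry follows; the property name is real, but the specific RULE value is an illustrative assumption, not a verified fix for this test:

```xml
<!-- core-site.xml (test configuration sketch): strip the realm from any
     user@REALM principal so the short name resolves even when the realm
     is not the cluster's default. The RULE below is an example pattern,
     not the project's recommended value. -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@.*)s/@.*//
    DEFAULT
  </value>
</property>
```

With only the DEFAULT rule, principals from a non-default realm raise exactly the "No rules applied to ..." error shown in the stack trace.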


Re: MapReduce and MPI

2011-12-20 Thread Ralph Castain
Just a quick update on this notion. Several of us in the OMPI community got 
together and successfully integrated Java bindings into the OMPI code base, and 
we have enough support that we can probably get this approved within that 
organization. I've written a wrapper compiler and added support within mpirun 
to make it relatively easy to use, so what remains is documentation (hope to 
have an initial cut at that done on Wed) and extending coverage to all MPI 
functions (we have send/recv and a number of other basic things done, but still 
need collectives and MPI-2 dynamics). The latter will be a work-in-progress 
(there are a LOT of MPI functions), with the more common functions covered over 
the next few weeks.

We also need test codes, of course, and could use help with generating those 
plus actual testing.  Volunteers are welcome. There are several Fortran and C 
test suites out there that are rather extensive - having some subset of those 
in Java would be a major step forward. I can point you to the branch where this 
work is being done (it is public, with controlled write privileges) and provide 
example tests on request.

As for the 3.0 standard, that is indeed out-of-reach. The MPI Forum requires 9 
months lead time for approval of any new proposal, and the 3.0 approval meeting 
is in Jan. However, this is a continuous process, with revisions being released 
on a quarterly basis. So it isn't a "hit the date or die" issue - it is 
strictly a question of persevering long enough to gain acceptance, and the pace 
of the process will largely be driven by the level of user interest.

HTH
Ralph

On Dec 1, 2011, at 3:15 PM, Ralph Castain wrote:

 
 On Dec 1, 2011, at 2:47 PM, milind.bhandar...@emc.com wrote:
 
 Ralph,
 
 At the MPI Forum meeting at SC11, Jeff mentioned that C++ bindings are
 going to be dropped from the standard,
 
 Yes - reason being mostly that (a) very few applications use them, and (b) 
 they have proven to be more trouble than they are worth. We are constantly 
 finding bugs due to conflicts between MPI specifications and C++ compilers, 
 and (quite frankly) the lack of experienced C++ programmers in the MPI 
 developer community is a serious problem. So keeping those bindings alive is 
 difficult.
 
 and that no other language bindings
 were proposed. Do you think there is enough time for Java bindings to make
 it into the 3.0 standard ?
 
 I don't know about the 3.0 standard - it could happen, if I can do it fast 
 enough and the Forum accepts it for that release, or may have to follow in 
 3.1. The obstacle we have to overcome re the Forum is that Java got a bad 
 name in the early years of the binding attempts due to performance issues and 
 lack of attention to details. The performance problem largely stemmed from 
 the issue of binding processes to at least NUMA regions - the C 
 implementations were far faster - and the poor performance of Java in general 
 during that time. The latter has largely been resolved over the years, and 
 the former is solvable with some work.
 
 The detail issue reflected the problem of trying to create a single, 
 non-sectarian set of Java bindings that fit all MPI implementations. This 
 meant that you could really only cover 90% of MPI functionality - beyond 
 that, you have to integrate tightly with the implementation. The academics who 
 did the original work didn't want to do so, and thus left functions out, 
 resulting in the MPI community looking down on the result.
 
 All put together, the MPI community wound up not thinking much of the Java 
 world. As I said, things have changed, and I believe a high-quality 
 implementation of Java bindings can gain acceptance. Once we have it for one 
 MPI, we can (due to the OMPI license) offer it up to the other 
 implementations with a fair degree of confidence they will adopt it.
 
 As for the MPI Forum, what we need is a champion to propose adoption of the 
 bindings once implemented. If people want them (i.e., the user community is 
 larger than C++, which has a total of 3 identified applications), we can show 
 the implementation is of quality, and we have developers willing to support 
 it, then we can get them adopted.
 
 I've scoped the job and it looks doable with reasonable effort. One other 
 person on the list (Deepak Sharma) has offered to help, and Jeff has offered 
 to provide advice as he wrote the original OMPI bindings. Getting it thru the 
 OMPI devel approval represents a miniature MPI Forum process, but I think we 
 can do it given Jeff's and my roles there.
 
 HTH
 Ralph
 
 
 - Milind
 
 On 12/1/11 3:31 AM, Ralph Castain r...@open-mpi.org wrote:
 
 Hi folks
 
 I'm a lead developer on the Open MPI project, and recently joined the
 Hadoop community to help with Hamster. A couple of people have asked me
 about using MPI more generally inside MapReduce, and it does indeed seem
 a good candidate to use that method of communication.
 
 It seems to me, though, that a pre-requisite for 

Re: MapReduce and MPI

2011-12-20 Thread Arun Murthy
Sounds great! Thanks for the update Ralph!

Sent from my iPhone

On Dec 20, 2011, at 10:22 PM, Ralph Castain r...@open-mpi.org wrote:
