Yep.  Thanks, I didn't think to look there :).  I do have another issue when
running my own map/reduce job.  I've got the jar files in HADOOP_CLASSPATH as I
mentioned above, but when I run the job, below is what I get.  The
ClassNotFoundException on TableSplit happens multiple times.
================================================

hadoop19/bin/hadoop ColumnCountMapReduce inputTableName
09/06/09 10:26:08 WARN mapred.JobClient: No job jar file set.  User classes
may not be found. See JobConf(Class) or JobConf#setJar(String).
09/06/09 10:26:11 INFO mapred.JobClient: Running job: job_200906090918_0005
09/06/09 10:26:12 INFO mapred.JobClient:  map 0% reduce 0%
09/06/09 10:26:21 INFO mapred.JobClient: Task Id :
attempt_200906090918_0005_m_000000_0, Status : FAILED
java.io.IOException: Split class org.apache.hadoop.hbase.mapred.TableSplit
not found
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:314)
        at org.apache.hadoop.mapred.Child.main(Child.java:158)
Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.hbase.mapred.TableSplit
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:247)
        at
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:673)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:311)
        ... 1 more

09/06/09 10:26:21 INFO mapred.JobClient: Task Id :
attempt_200906090918_0005_m_000004_0, Status : FAILED
[same "Split class org.apache.hadoop.hbase.mapred.TableSplit not found"
stack trace as above]
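
Rereading the output, I wonder if the "No job jar file set" warning is the
real problem: HADOOP_CLASSPATH only affects the local client JVM, so without
a job jar the child task JVMs on the other nodes never see the HBase
classes, which would explain the TableSplit ClassNotFoundException.  If
that's right, the fix would be to bundle the job into a jar (with the hbase
jar copied into the job jar's lib/ directory so the HBase classes ship too)
and pass the driver class to the JobConf constructor so Hadoop sends the
jar out with the job.  A rough sketch, if I'm reading the 0.19 API right;
ColumnCountMapper and the "someFamily:" column spec are placeholders for
whatever the real job uses:

================================================
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class ColumnCountMapReduce {
  public static void main(String[] args) throws Exception {
    // Passing the driver class makes Hadoop find the jar containing it
    // and ship that jar to the task JVMs; this is the "See JobConf(Class)"
    // part of the warning above.
    JobConf conf = new JobConf(new HBaseConfiguration(),
        ColumnCountMapReduce.class);
    conf.setJobName("column count");
    // args[0] is the table name.  ColumnCountMapper is a placeholder for
    // a TableMap<Text, IntWritable> subclass, and "someFamily:" is a
    // placeholder column spec.
    TableMap.initJob(args[0], "someFamily:", ColumnCountMapper.class,
        Text.class, IntWritable.class, conf);
    JobClient.runJob(conf);
  }
}
================================================

I believe the other route is to put the hbase jar on HADOOP_CLASSPATH on
every tasktracker node (not just the client) and restart MapReduce, since
the child JVMs inherit the tasktracker's classpath.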


stack-3 wrote:
> 
> It looks pretty obvious what your issue is.
> 
> My guess is that your child process needs its heap size bumped up.
> 
> Look for the child parameters configuration in your hadoop-default.xml (or
> your mapred-default.xml if 0.20.x hadoop).
> 
> St.Ack
> 
> On Tue, Jun 9, 2009 at 10:05 AM, llpind <sonny_h...@hotmail.com> wrote:
> 
>>
>> Hmm...jobtracker UI has the following:
>>
>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to
>> contact
>> region server 192.168.0.195:60020 for region TestTable,,1244499094604,
>> row
>> '0000010477', but failed after 10 attempts.
>> Exceptions:
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>>
>>        at
>>
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:859)
>>        at
>>
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:951)
>>        at
>> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1397)
>>        at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1341)
>>        at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1321)
>>        at
>>
>> org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest.testRow(PerformanceEvaluation.java:515)
>>        at
>>
>> org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:386)
>>        at
>>
>> org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:583)
>>        at
>>
>> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:182)
>>        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>>        at org.apache.hadoop.mapred.Child.main(Child.java:158)
>>
>> =================================================
>>
>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to
>> contact
>> region server 192.168.0.195:60020 for region TestTable,,1244499094604,
>> row
>> '0002107629', but failed after 10 attempts.
>> Exceptions:
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>>
>>        at
>>
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:859)
>>        at
>>
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:951)
>>        at
>> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1397)
>>        at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1341)
>>        at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1321)
>>        at
>>
>> org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest.testRow(PerformanceEvaluation.java:515)
>>        at
>>
>> org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:386)
>>        at
>>
>> org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:583)
>>        at
>>
>> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:182)
>>        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>>        at org.apache.hadoop.mapred.Child.main(Child.java:158)
>>
>>
>>
>>
>> stack-3 wrote:
>> >
>> > Look at why the job failed in the jobtracker UI -- usually on port
>> 50030.
>> > Looks like your job launched fine.  Do you have the conf dir on the
>> > HADOOP_CLASSPATH so the MR job can find HBase?
>> > St.Ack
>> >
>> > On Tue, Jun 9, 2009 at 9:25 AM, llpind <sonny_h...@hotmail.com> wrote:
>> >
>> >>
>> >> Hey all,
>> >>
>> >> I'm getting started with map/reduce jobs.  Figured I'd try the
>> >> PerformanceEvaluation test program first.
>> >>
>> >> After adding HBase jar files to HADOOP_CLASSPATH in hadoop-env.sh, I
>> >> issue
>> >> the following command:
>> >>
>> >> hadoop19/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation
>> >> sequentialWrite 4
>> >>
>> >> and get the following output:
>> >>
>> >> 09/06/09 09:20:54 WARN mapred.JobClient: Use GenericOptionsParser for
>> >> parsing the arguments. Applications should implement Tool for the
>> same.
>> >> 09/06/09 09:20:55 INFO mapred.FileInputFormat: Total input paths to
>> >> process
>> >> : 1
>> >> 09/06/09 09:20:56 INFO mapred.JobClient: Running job:
>> >> job_200906090918_0001
>> >> 09/06/09 09:20:57 INFO mapred.JobClient:  map 0% reduce 0%
>> >> java.io.IOException: Job failed!
>> >>        at
>> org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232)
>> >>        at
>> >>
>> >>
>> org.apache.hadoop.hbase.PerformanceEvaluation.doMapReduce(PerformanceEvaluation.java:293)
>> >>        at
>> >>
>> >>
>> org.apache.hadoop.hbase.PerformanceEvaluation.runNIsMoreThanOne(PerformanceEvaluation.java:221)
>> >>        at
>> >>
>> >>
>> org.apache.hadoop.hbase.PerformanceEvaluation.runTest(PerformanceEvaluation.java:639)
>> >>        at
>> >>
>> >>
>> org.apache.hadoop.hbase.PerformanceEvaluation.doCommandLine(PerformanceEvaluation.java:748)
>> >>        at
>> >>
>> >>
>> org.apache.hadoop.hbase.PerformanceEvaluation.main(PerformanceEvaluation.java:768)
>> >>
>> >>
>> >> What am I doing wrong...?
>> >>
>> >>
>> >>
>> >
>> >
>>
>>
>>
> 
> 
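
On the OutOfMemoryError from the PerformanceEvaluation run: the child
parameter St.Ack is pointing at is mapred.child.java.opts, which defaults
to -Xmx200m.  It can be overridden in hadoop-site.xml, or set per job from
the driver.  A quick sketch of the per-job route, with the 512m figure just
a guess to be tuned:

================================================
import org.apache.hadoop.mapred.JobConf;

public class ChildHeapExample {
  public static void main(String[] args) {
    JobConf conf = new JobConf();
    // Every spawned map/reduce task JVM is launched with these options;
    // the 200 MB default is easy to exhaust when writing to HBase.
    conf.set("mapred.child.java.opts", "-Xmx512m");
    System.out.println(conf.get("mapred.child.java.opts"));
  }
}
================================================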

