There is a TableInputFormat class:
org.apache.hadoop.hbase.mapreduce.TableInputFormat

Also, if you want to use TableMapReduceUtil, you probably want to have your
mapper class extend TableMapper.

Check out the javadocs for more info: 
http://hadoop.apache.org/hbase/docs/current/api/index.html
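For example, here is a rough sketch of how the pieces fit together (the class
and table names are made up for illustration, and constructor details can
differ slightly between HBase versions):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class MyTableMapper extends TableMapper<Text, IntWritable> {

  // TableMapper fixes the input types to ImmutableBytesWritable (the row key)
  // and Result (the row's cells); you only choose the output types.
  @Override
  protected void map(ImmutableBytesWritable rowKey, Result row, Context context)
      throws IOException, InterruptedException {
    context.write(new Text(Bytes.toString(rowKey.get())),
        new IntWritable(row.size()));
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "hbase-scan-example");
    job.setJarByClass(MyTableMapper.class);

    Scan scan = new Scan();  // narrow the rows/columns here if you can

    // This wires up TableInputFormat for you, so there is no separate
    // setInputFormatClass call to make.
    TableMapReduceUtil.initTableMapperJob("my_table", scan,
        MyTableMapper.class, Text.class, IntWritable.class, job);

    job.setNumReduceTasks(0);  // map-only for this sketch
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}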



-----Original Message-----
From: Something Something [mailto:[email protected]] 
Sent: Thursday, October 15, 2009 1:37 AM
To: [email protected]; [email protected]
Subject: Re: Question about MapReduce

If the answer is...

TableMapReduceUtil.initTableMapperJob

I apologize for the spam.  If this isn't the right way, please let me know.  
Thanks.


--- On Wed, 10/14/09, Something Something <[email protected]> wrote:

From: Something Something <[email protected]>
Subject: Question about MapReduce
To: [email protected], [email protected]
Date: Wednesday, October 14, 2009, 10:18 PM

I would like to start a MapReduce job that does not read data from an input
file or from a database.  I would like to pass 3 arguments for the Mapper &
Reducer to work on.  Basically, these arguments are row keys into 3 different
tables in HBase.

In other words, I don't want to use FileInputFormat or DBInputFormat, because
everything I need is already in HBase.

How can I do this?  Please let me know.  Thanks.
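
(For the archives: one way this is often handled, sketched below with made-up
configuration property and table names, is to put the row keys into the job
Configuration before submitting and read them back in the mapper's setup().
The job can still scan one table via TableInputFormat while doing Gets against
the others; only one side key is shown here, but the other two would be set
and read the same way. HTable constructor details vary a little between HBase
versions.)

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;

public class ThreeKeyMapper extends TableMapper<Text, Text> {

  private HTable sideTable;
  private byte[] sideRowKey;

  @Override
  protected void setup(Context context) throws IOException {
    Configuration conf = context.getConfiguration();
    // The driver would have called conf.set("myjob.rowkey.side", key) and
    // conf.set("myjob.table.side", name) before submitting the job.
    sideRowKey = Bytes.toBytes(conf.get("myjob.rowkey.side"));
    sideTable = new HTable(conf, conf.get("myjob.table.side"));
  }

  @Override
  protected void map(ImmutableBytesWritable rowKey, Result row, Context context)
      throws IOException, InterruptedException {
    // Side lookup keyed by the argument that was passed in through the conf.
    Result side = sideTable.get(new Get(sideRowKey));
    context.write(new Text(Bytes.toString(rowKey.get())),
        new Text(Integer.toString(side.size())));
  }

  @Override
  protected void cleanup(Context context) throws IOException {
    sideTable.close();
  }
}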

