Perhaps. I figured it would be easier to go mapreduce -> hbase instead of mapreduce -> output file in hdfs -> load output file into hbase as a separate job. Then again, for performance, maybe hdfs -> hbase is better than mapreduce -> hbase and I should just plan to do that instead.

On Feb 11, 2010, at 2:27 PM, Stack wrote:

On Thu, Feb 11, 2010 at 12:38 PM, David Hawthorne <[email protected]> wrote:
I was under the impression that you could read from/write to an hbase table
from within a mapreduce job.  Import and Export look like methods for
reading HDFS files into hbase and dumping hbase into an HDFS file.


Yes.  Isn't that what you want?  Export shows how to use hbase as a
mapreduce source and Import as a mapreduce sink.
St.Ack
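[For the archives: the pattern Stack describes, hbase as both mapreduce source and sink via the new org.apache.hadoop.hbase.mapreduce API, boils down to TableMapReduceUtil.initTableMapperJob plus initTableReducerJob. A rough sketch follows; the table name "source", the sink table "dest", and the column "cf:qual" are invented for illustration, and this assumes a configured cluster on the classpath, so treat it as a starting point rather than tested code.]

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

public class HBaseToHBase {

  // Map over rows of the source table; emit a Put destined for the sink table.
  static class CopyMapper extends TableMapper<ImmutableBytesWritable, Put> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      byte[] cell = value.getValue(Bytes.toBytes("cf"), Bytes.toBytes("qual"));
      if (cell != null) {
        Put put = new Put(row.get());
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), cell);
        context.write(row, put);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "hbase-to-hbase");
    job.setJarByClass(HBaseToHBase.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // larger scanner caching helps MR throughput
    scan.setCacheBlocks(false);  // don't churn the block cache from a full scan

    // hbase as mapreduce source (this is what Export does)...
    TableMapReduceUtil.initTableMapperJob("source", scan, CopyMapper.class,
        ImmutableBytesWritable.class, Put.class, job);
    // ...and hbase as mapreduce sink (this is what Import does).
    // A null reducer means the mapper's Puts go straight to TableOutputFormat.
    TableMapReduceUtil.initTableReducerJob("dest", null, job);
    job.setNumReduceTasks(0);    // map-only copy job

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

This avoids the mapreduce -> hdfs file -> import detour entirely; whether the direct writes keep up depends on the cluster, which is the performance tradeoff David raises above.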



On Feb 11, 2010, at 12:25 PM, Guohua Hao wrote:

Hello there,

Did you take a look at the Import and Export classes under the package
org.apache.hadoop.hbase.mapreduce? They mostly use the new API, as far as I
can tell. Correct me if I am wrong.

Thanks,
Guohua

On Thu, Feb 11, 2010 at 2:13 PM, David Hawthorne <[email protected]>
wrote:

I'm looking for some examples for reading data out of hbase for use with
mapreduce and for inserting data into hbase from a mapreduce job. I've seen
the example shipped with hbase, and, well, it doesn't exactly make things
click for me. It also looks like it's using the old API, so maybe that's why.

Can someone please send some example code for reading/writing from/to hbase
with a mapreduce job?  The more examples the better.

Thanks!



