Hi Guillermo,

You can use TableOutputFormat as the output format for your job; then in 
your reducer you just need to write Put objects. 

On your driver:

Job job = new Job(conf);
…
job.setReducerClass(AverageReducer.class);
job.setOutputFormatClass(TableOutputFormat.class);
job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "table");
job.setOutputKeyClass(ImmutableBytesWritable.class); 
job.setOutputValueClass(Put.class);
...

On your reducer, just create the related Put (note Put needs the row key in 
its constructor) and write it:

byte[] rowKey = ...; // your row key bytes
Put put = new Put(rowKey);
ImmutableBytesWritable key = new ImmutableBytesWritable(rowKey);
...
context.write(key, put);
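
For completeness, here is a rough sketch of what such a reducer class could look like. The class name AverageReducer comes from the driver snippet above, but the input types, column family "cf" and qualifier "avg" are just assumptions for illustration, not from your job:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reducer: averages the values seen for each key and writes
// one Put per key into the "cf:avg" column of the configured output table.
public class AverageReducer
    extends Reducer<Text, DoubleWritable, ImmutableBytesWritable, Put> {

  @Override
  protected void reduce(Text key, Iterable<DoubleWritable> values,
      Context context) throws IOException, InterruptedException {
    double sum = 0;
    long count = 0;
    for (DoubleWritable v : values) {
      sum += v.get();
      count++;
    }
    // Use the reduce key as the HBase row key.
    byte[] row = Bytes.toBytes(key.toString());
    Put put = new Put(row);
    // Put.add(family, qualifier, value) on 0.9x; addColumn() in later versions.
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("avg"),
        Bytes.toBytes(sum / count));
    context.write(new ImmutableBytesWritable(row), put);
  }
}
```

TableOutputFormat picks up the target table from the TableOutputFormat.OUTPUT_TABLE property set on the driver, so the reducer itself never opens a connection.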


Cheers,
Wellington.


On 26 Jun 2014, at 16:24, Guillermo Ortiz <konstt2...@gmail.com> wrote:

> I have a question.
> I want to execute a MapReduce job, and the output of my reduce is going to
> be stored in HBase.
> 
> So, it's a MapReduce job whose output is going to be stored in HBase.
> I can do a map-only job and use HFileOutputFormat.configureIncrementalLoad(pJob,
> table); but I don't know how I could do it if I have a reduce as well,
> since configureIncrementalLoad sets up a reducer of its own.
