On Wed, Apr 10, 2013 at 6:54 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:

> Hi Nitin,
>
> You got my question correctly.
>
> However, I'm wondering how it works when it's done inside HBase.



We use the default MapReduce partitioner:
http://hadoop.apache.org/docs/r2.0.3-alpha/api/org/apache/hadoop/mapreduce/lib/partition/HashPartitioner.html
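In case you don't want to click through, the linked class boils down to a single method. This is a from-memory sketch rather than the exact shipped source:

import org.apache.hadoop.mapreduce.Partitioner;

// Sketch of org.apache.hadoop.mapreduce.lib.partition.HashPartitioner
// (see the link above); the real class is essentially just this method.
public class HashPartitionerSketch<K, V> extends Partitioner<K, V> {
  @Override
  public int getPartition(K key, V value, int numReduceTasks) {
    // Mask off the sign bit so the result is non-negative,
    // then take it modulo the number of reducers.
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}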



> Do we have default partitioners ...



No, nothing HBase-specific; it is just the stock MapReduce partitioner above.



> ...so we have the same guarantee that records
> mapping to one key go to the same reducer.



This will happen with the default partitioner: the key is hashed, and the hash is
always the same, so a given key always goes to the same reducer.
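A tiny demo of that property (the Text keys and reduce count here are made up for illustration, not anything from your job):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

public class SameKeySamePartition {
  public static void main(String[] args) {
    HashPartitioner<Text, IntWritable> p = new HashPartitioner<Text, IntWritable>();
    int numReducers = 4; // made-up reduce count

    // Two records with the same key hash to the same partition index,
    // i.e. they end up on the same reducer.
    int a = p.getPartition(new Text("row-0001"), new IntWritable(1), numReducers);
    int b = p.getPartition(new Text("row-0001"), new IntWritable(2), numReducers);
    System.out.println(a == b); // prints true
  }
}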



> Or do we have to implement
> this one on our own.
>

No.

St.Ack
