Hi Ted,
Yes, I meant the same region.

I wasn't using the getSplits() function. I'm trying to add it to my code,
but I'm not sure how to do it. Is there any example on the website?
I can't find anything. (By the way, I'm using TableInputFormat, not
InputFormat.)
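From what I understand so far, a custom getSplits() would just cut each region's row range into several sub-ranges, one per mapper. Here is a toy sketch of that idea in plain Java (the class and method names are made up, and row keys are modeled as longs instead of HBase byte arrays, so this is not the real TableInputFormat API):

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch: the core idea behind overriding getSplits() is to divide one
// region's [startKey, endKey) row range into n sub-ranges, so that n mappers
// can process the same region in parallel. Names are illustrative only.
public class SplitSketch {

    // Divide [start, end) into n contiguous sub-ranges of near-equal size.
    // Each long[]{s, e} stands in for one input split over rows [s, e).
    static List<long[]> subdivide(long start, long end, int n) {
        List<long[]> splits = new ArrayList<>();
        long span = end - start;
        for (int i = 0; i < n; i++) {
            long s = start + span * i / n;
            long e = (i == n - 1) ? end : start + span * (i + 1) / n;
            splits.add(new long[]{s, e});
        }
        return splits;
    }

    public static void main(String[] args) {
        // e.g. one region holding rows [0, 8) cut into 2 splits:
        for (long[] r : subdivide(0L, 8L, 2)) {
            System.out.println("[" + r[0] + ", " + r[1] + ")");
        }
    }
}
```

If that is roughly right, then a subclass of TableInputFormat would do the same thing with the byte-array start/end keys of each region and return the extra splits from getSplits(), but please correct me if I misunderstood.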

But just to confirm: with the getSplits() function, are mappers processing
rows in the same region executed in parallel? (Assuming there are free
processors/cores.)

Thanks,
Ivan.


----- Original Message -----
> From: "Ted Yu" <yuzhih...@gmail.com>
> To: user@hbase.apache.org
> Sent: Monday, April 11, 2016 15:10:29
> Subject: Re: Processing rows in parallel with MapReduce jobs.
> 
> bq. if they are located in the same split?
> 
> Probably you meant same region.
> 
> Can you show the getSplits() for the InputFormat of your MapReduce job ?
> 
> Thanks
> 
> On Mon, Apr 11, 2016 at 5:48 AM, Ivan Cores gonzalez <ivan.co...@inria.fr>
> wrote:
> 
> > Hi all,
> >
> > I have a small question regarding the MapReduce jobs behaviour with HBase.
> >
> > I have an HBase test table with only 8 rows. I split the table into 2
> > splits with the HBase shell's split command, so now there are 4 rows in
> > every split.
> >
> > I created a MapReduce job that only prints the row key in the log files.
> > When I run the MapReduce job, every row is processed by 1 mapper, but the
> > mappers in the same split are executed sequentially (inside the same
> > container). That means the first four rows are processed sequentially by
> > 4 mappers. The system has free cores, so is it possible to process rows
> > in parallel if they are located in the same split?
> >
> > The only way I found to have 8 mappers executed in parallel is to split
> > the table into 8 splits (1 split per row). But obviously this is not the
> > best solution for big tables ...
> >
> > Thanks,
> > Ivan.
> >
> 
