I am doing something similar: I have a million input files to process, so I
provide a single text file as input and use NLineInputFormat, specifying
the number of lines that are sent to each map task.

In your case you would set it so that one line is one input split, which is
the default.

In pre-0.20 releases, the parameter is mapred.line.input.format.linespermap.
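For reference, a minimal pre-0.20 ("mapred" API) setup might look like the
sketch below; the job class and input path are made up for illustration:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.NLineInputFormat;

// Hypothetical job class; substitute your own.
JobConf conf = new JobConf(MyJob.class);

// Split the input file by line count instead of by HDFS block.
conf.setInputFormat(NLineInputFormat.class);

// One line of the key file per input split (this is also the default).
conf.setInt("mapred.line.input.format.linespermap", 1);

// Hypothetical path to the single text file listing the inputs.
FileInputFormat.setInputPaths(conf, new Path("/user/me/keys.txt"));
```

Each map task then receives exactly one line of keys.txt as its input.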

On Wed, Oct 14, 2009 at 10:36 PM, Something Something <
luckyguy2...@yahoo.com> wrote:

> If the answer is...
>
> TableMapReduceUtil.initTableMapperJob
>
> I apologize for the spam.  If this isn't the right way, please let me know.
>  Thanks.
>
>
> --- On Wed, 10/14/09, Something Something <luckyguy2...@yahoo.com> wrote:
>
> From: Something Something <luckyguy2...@yahoo.com>
> Subject: Question about MapReduce
> To: general@hadoop.apache.org, hbase-u...@hadoop.apache.org
> Date: Wednesday, October 14, 2009, 10:18 PM
>
> I would like to start a Map-Reduce job that does not read data from an
> input file or from a database.  I would like to pass 3 arguments to the
> Mapper & Reducer to work on.  Basically, these arguments are keys to 3
> different tables in HBase.
>
> In other words, I don't want to use FileInputFormat or DbInputFormat
> because everything I need is already on HBase.
>
> How can I do this?  Please let me know.  Thanks.
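On the quoted question: TableMapReduceUtil.initTableMapperJob is one common
way to read from HBase instead of from files, and the three keys can be
shipped to the tasks through the job configuration. A rough sketch against
the HBase 0.20 "mapreduce" API; the table name, configuration property
names, and mapper class are all made up:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

// Pass the three HBase row keys to every task via the configuration.
// The "myjob.*" property names are hypothetical.
HBaseConfiguration conf = new HBaseConfiguration();
conf.set("myjob.key1", args[0]);
conf.set("myjob.key2", args[1]);
conf.set("myjob.key3", args[2]);

Job job = new Job(conf, "hbase-keyed-job");
job.setJarByClass(MyMapper.class);

// Read rows from an HBase table rather than from an input file.
TableMapReduceUtil.initTableMapperJob("table1", new Scan(),
    MyMapper.class, ImmutableBytesWritable.class, Result.class, job);
```

Inside the mapper or reducer the keys come back out with
context.getConfiguration().get("myjob.key1"), and you can narrow the Scan
to just the rows you need.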



-- 
Pro Hadoop, a book to guide you from beginner to hadoop mastery,
http://www.amazon.com/dp/1430219424?tag=jewlerymall
www.prohadoopbook.com a community for Hadoop Professionals
