The input for an M/R job consists of multiple files, each smaller than a
block, so the number of maps equals the number of files.
I would like to be able to control the number of maps so that one map task
handles multiple files (for example, gluing files together up to a block
size).
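One way to get this effect on newer Hadoop releases is to let the input
format pack many small files into each split up to a byte limit. The code
below is only a sketch: CombineTextInputFormat postdates the 0.13-era
releases in this thread, and the driver class name and the 128 MB cap are
illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative driver: packs many small input files into each split,
// capping each split at roughly one HDFS block.
public class SmallFilesDriver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "small-files");
    job.setJarByClass(SmallFilesDriver.class);

    // CombineTextInputFormat glues files together until a split reaches
    // the configured maximum size (128 MB here).
    job.setInputFormatClass(CombineTextInputFormat.class);
    CombineTextInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);

    // With no mapper/reducer set, the identity defaults run; set your
    // own classes here as usual.
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(Text.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Note that setting mapred.map.tasks is only a hint to the framework; packing
the splits at the InputFormat level, as above, is what actually reduces the
number of maps.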
Thanks - it certainly helps!
Ken
Arun C Murthy wrote:
>
> Hi Ken,
>
> On Sat, Oct 06, 2007 at 08:54:54PM -0700, Ken Pu wrote:
>>
>>Hi,
>>
>>As a beginner of Hadoop, I wonder how to send output key-value pairs of the
>>reducers back to the input of mappers for iterative processing.
>>
>
>
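One common pattern for this is to chain jobs from a driver, feeding each
round's output directory in as the next round's input. The sketch below is
illustrative rather than a quote of Arun's suggestion; it uses the newer
org.apache.hadoop.mapreduce API, and the class and path names are made up
for the example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative driver: each round's output directory becomes the next
// round's input, which feeds the reducers' key-value pairs back into
// the mappers.
public class IterativeDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path input = new Path(args[0]);
    int rounds = Integer.parseInt(args[1]);

    for (int i = 0; i < rounds; i++) {
      Job job = Job.getInstance(conf, "iteration-" + i);
      job.setJarByClass(IterativeDriver.class);
      // ... set mapper, reducer, and key/value classes here ...

      Path output = new Path("iter-" + i);  // fresh directory per round
      FileInputFormat.addInputPath(job, input);
      FileOutputFormat.setOutputPath(job, output);

      if (!job.waitForCompletion(true)) {
        System.exit(1);                     // stop if a round fails
      }
      input = output;  // reducer output becomes the next mapper input
    }
  }
}

Each round pays full job-launch and HDFS I/O costs, which is the usual
limitation of iterating this way.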
Hi,
Thanks for your responses. As all of the solutions worked for me, I have
decided to use "map.input.file" from the JobConf.
Thanks & Regards,
Shaile..
On 10/12/07, Ted Dunning <[EMAIL PROTECTED]> wrote:
>
>
>
> It is also pretty easy to override bits of TextInputFormat to give the
> file as the key.
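For reference, here is a sketch of reading "map.input.file" from the JobConf
inside a mapper with the old org.apache.hadoop.mapred API; the mapper class
name and key/value types are illustrative.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Illustrative mapper: emits each line keyed by the file it came from.
public class FileNameMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  private String inputFile;

  public void configure(JobConf job) {
    // The framework sets this property to the path of the file backing
    // the current split.
    inputFile = job.get("map.input.file");
  }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    output.collect(new Text(inputFile), value);
  }
}

Keep in mind that "map.input.file" is only meaningful for file-based input
formats.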
Hi,
This is the error that I get. Kindly look into it and suggest how I should
go about this.
Thanks in advance
[EMAIL PROTECTED] bin]$ ./hadoop jar
/home/jaya/hadoop-0.13.0/hadoop-0.13.0-examples.jar wordcount input output
java.net.SocketTimeoutException: timed out waiting for rpc response