Hi

You need to set mapred.max.split.size to a value larger than your block
size to get fewer map tasks than the default.
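
For example, something along these lines in the job driver (a minimal
sketch using the classic JobConf API; the 128 MB value is only an
illustration, twice the 64 MB dfs.block.size from your job.xml, and
whether mapred.min.split.size also needs raising depends on the
InputFormat in use):

import org.apache.hadoop.mapred.JobConf;

public class LargerSplitsDriver {
    public static void main(String[] args) {
        JobConf conf = new JobConf(LargerSplitsDriver.class);
        // Allow splits up to 128 MB (2 x the 64 MB block size), so each
        // map task reads a larger slice of the input and fewer maps run.
        conf.setLong("mapred.max.split.size", 134217728L);
        // With some input formats the minimum split size may also need
        // to be raised before the larger splits take effect.
        conf.setLong("mapred.min.split.size", 134217728L);
        // ... set input/output paths, mapper/reducer, and submit as usual.
    }
}

The same values can also be passed on the command line, e.g.
-D mapred.max.split.size=134217728, if the driver goes through
ToolRunner/GenericOptionsParser.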

On Tue, Oct 2, 2012 at 10:04 PM, Shing Hing Man <mat...@yahoo.com> wrote:

>
>
>
> I am running Hadoop 1.0.3 in pseudo-distributed mode.
> When I submit a map/reduce job to process a file of about 16 GB,
> job.xml contains the following:
>
>
> mapred.map.tasks = 242
> mapred.min.split.size = 0
> dfs.block.size = 67108864
>
>
> I would like to reduce mapred.map.tasks to see if it improves
> performance.
> I have tried doubling dfs.block.size, but mapred.map.tasks remains
> unchanged.
> Is there a way to reduce mapred.map.tasks?
>
>
> Thanks in advance for any assistance!
> Shing
>
>