Have you considered changing mapred.max.split.size?
As in:
http://stackoverflow.com/questions/9678180/change-file-split-size-in-hadoop
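For the archives, here is one way to pass it on the command line (a sketch: it assumes the wordcount example accepts generic `-D` options, and the 16 MB value is an arbitrary illustrative choice, not a recommendation):

```shell
# Ask for ~16 MB input splits instead of one split per HDFS block.
# mapred.max.split.size is the old property name; on MRv2 the equivalent
# is mapreduce.input.fileinputformat.split.maxsize.
hadoop jar hadoop-mapreduce-examples-2.0.0-cdh4.1.2.jar wordcount \
  -D mapred.max.split.size=16777216 \
  input output
```

Shrinking the maximum split size only helps if the input file is larger than the value you set; a file smaller than one split still produces a single mapper.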

Zheyi

On Thu, Mar 14, 2013 at 3:27 PM, YouPeng Yang <yypvsxf19870...@gmail.com> wrote:

> Hi
>
>
>   I have done some tests in pseudo-distributed mode (CDH4.1.2) with MRv2/YARN.
>   According to the docs:
>   *mapreduce.jobtracker.address:* The host and port that the MapReduce
> job tracker runs at. If "local", then jobs are run in-process as a single
> map and reduce task.
>   *mapreduce.job.maps (default value is 2):* The default number of map
> tasks per job. Ignored when mapreduce.jobtracker.address is "local".
>
>   I changed mapreduce.jobtracker.address to Hadoop:50031.
>
>   And then run the wordcount examples:
>   hadoop jar  hadoop-mapreduce-examples-2.0.0-cdh4.1.2.jar wordcount
> input output
>
>   the output logs are as follows:
>         ....
>    Job Counters
> Launched map tasks=1
>  Launched reduce tasks=1
> Data-local map tasks=1
>  Total time spent by all maps in occupied slots (ms)=60336
> Total time spent by all reduces in occupied slots (ms)=63264
>      Map-Reduce Framework
> Map input records=5
>  Map output records=7
> Map output bytes=56
> Map output materialized bytes=76
>         ....
>
>  It does not seem to work.
>
>  I thought maybe it is because my input file is small - just 5 records. Is that right?
>
> regards
>
>
> 2013/3/14 Sai Sai <saigr...@yahoo.in>
>
>>
>>
>>  In pseudo-distributed mode, where is the setting to increase the number of
>> mappers, or is this not possible?
>> Thanks
>> Sai
>>
>
>
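The single launched map task in the counters above is consistent with how input splits are computed. As a rough sketch (not Hadoop's exact logic, and the 56-byte file size below is an assumed figure for a 5-record test file): FileInputFormat creates about ceil(file_size / split_size) splits and launches one map task per split, so a tiny file yields one mapper regardless of mapreduce.job.maps:

```shell
# Rough estimate of launched map tasks: ceil(file_size / split_size).
file_size=56          # bytes; assumed size of the tiny 5-record input
split_size=67108864   # 64 MB, a common default HDFS block/split size
echo $(( (file_size + split_size - 1) / split_size ))   # prints 1
```

So to see more than one mapper in pseudo-distributed mode, either grow the input past one split or lower the maximum split size below the file size.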
