Strange: last night I tried 10,000 input files (maps), and the waiting time
after the maps finish increases (probably linearly).

2009/3/2 Rasit OZDAS <rasitoz...@gmail.com>

> I have 6 reducers, Nick, but still no luck.
>
> 2009/3/2 Nick Cen <cenyo...@gmail.com>
>
>> How many reducers do you have? You should make this value larger than 1 so
>> that the mappers and reducers can run concurrently. You can set it with
>> JobConf.setNumReduceTasks().
>>
>>
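For reference, a minimal sketch of the suggestion above, using the old
org.apache.hadoop.mapred JobConf API that was current at the time. The class
name, the identity mapper/reducer, and the paths are placeholders, not taken
from the thread:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    public class SixReducerJob {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SixReducerJob.class);
        conf.setJobName("six-reducer-job");

        // Identity classes keep the sketch self-contained; a real job would
        // plug in its own Mapper and Reducer here.
        conf.setMapperClass(IdentityMapper.class);
        conf.setCombinerClass(IdentityReducer.class);  // reducer reused as combiner
        conf.setReducerClass(IdentityReducer.class);

        // With the default TextInputFormat the map output is <LongWritable, Text>.
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        // More than one reduce task, so several reducers can run in parallel
        // across the cluster.
        conf.setNumReduceTasks(6);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);  // blocks until the job completes
      }
    }
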
>> 2009/3/2 Rasit OZDAS <rasitoz...@gmail.com>
>>
>> > Hi!
>> >
>> > Whatever code I run on Hadoop, the reduce phase does not start until a
>> > few seconds after the map phase finishes.
>> > Worse, when I run 10 jobs in parallel (using threads and submitting one
>> > after another), all the maps finish sequentially, and the reduces start
>> > only 8-10 seconds later.
>> > I also use the reducer as a combiner; my cluster has 6 machines, and the
>> > namenode and jobtracker also run as slaves.
>> > There were 44 maps and 6 reduces in the last run; I have never tried a
>> > bigger job.
>> > What could the problem be? I've read somewhere that this is not the
>> > normal behaviour.
>> > The replication factor is 3.
>> > Thank you in advance for any pointers.
>> >
>> > Rasit
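The 10-job run described in the quoted message can also be driven without one
thread per blocking runJob() call. A hedged sketch using JobClient.submitJob()
from the same old API; the job names and input/output paths here are made up
for illustration:

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class SubmitTenJobs {
      public static void main(String[] args) throws Exception {
        List<RunningJob> running = new ArrayList<RunningJob>();

        for (int i = 0; i < 10; i++) {
          JobConf conf = new JobConf(SubmitTenJobs.class);
          conf.setJobName("job-" + i);
          conf.setNumReduceTasks(6);
          // Placeholder paths; the default identity mapper and reducer are
          // used so the sketch stays self-contained.
          FileInputFormat.setInputPaths(conf, new Path("/input/" + i));
          FileOutputFormat.setOutputPath(conf, new Path("/output/" + i));

          // submitJob() returns immediately, so all ten jobs are handed to
          // the JobTracker up front.
          running.add(new JobClient(conf).submitJob(conf));
        }

        for (RunningJob job : running) {
          job.waitForCompletion();  // wait for each submitted job to finish
        }
      }
    }

Whether the submitted jobs actually overlap still depends on the JobTracker's
scheduler and on the free map/reduce slots in the cluster.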
>> >
>>
>>
>>
>> --
>> http://daily.appspot.com/food/
>>
>
>
>
> --
> M. Raşit ÖZDAŞ
>



-- 
M. Raşit ÖZDAŞ
