More importantly: have you told Hadoop to use all your cores?

What is mapred.tasktracker.map.tasks.maximum set to? This defaults to 2. If
you've got 16 cores/node, you should set this to at least 15--16 so that all
your cores are being used. You may need to set this higher, like 20, to
ensure that cores aren't being starved. Measure with Ganglia or top to make
sure your CPU utilization is where you want it. (Note: this is a
tasktracker setting, not a job setting. You'll need to set it on every
node, then restart the MapReduce cluster for the change to take effect.)
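In case it helps, here's a minimal sketch of what that might look like in each
node's conf/mapred-site.xml (assuming a 0.20-style config layout; the value of
20 is just an example for a 16-core box, tune it to your workload):

    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>20</value>
      <description>Maximum number of map tasks this tasktracker will run
        simultaneously (example value for a 16-core node).</description>
    </property>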

Of course, you need to have enough RAM to make sure that all these tasks can
run concurrently without swapping. Swapping will destroy your performance.
Then again, if you bought 16-way machines, presumably you didn't cheap out
in that department :)
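As a rough sanity check (numbers here are only an illustration): each task runs
in its own child JVM whose heap is set by mapred.child.java.opts, so 20 map
slots at -Xmx1024m means on the order of 20 GB of RAM for task JVMs alone,
before the tasktracker/datanode daemons and the OS page cache.

    <property>
      <name>mapred.child.java.opts</name>
      <!-- example only: 20 slots x 1 GB heap ~= 20 GB of RAM for task JVMs -->
      <value>-Xmx1024m</value>
    </property>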

100 tasks is not an absurd number. For large data sets (e.g., TB scale), I
have seen several tens of thousands of tasks.

In general, yes, running many tasks over small files is not a good fit for
Hadoop, but 100 is not "many small files." You might see some speed-up by
coalescing multiple files into a single task, but when you hear about problems
with processing many small files, folks are usually referring to something
like 10,000 files of only a few MB each, where the actual processing per
record is extremely cheap. In cases like that, task startup time severely
dominates the actual computation time. If your individual records take around
a minute each to process, as you said earlier, you're nowhere near that
particular performance bottleneck.

- Aaron


On Thu, Nov 26, 2009 at 12:23 PM, CubicDesign <cubicdes...@gmail.com> wrote:

>
>
>  Are the record processing steps bound by a local machine resource - CPU,
>> disk I/O or other?
>>
>>
> Some disk I/O. Not so much compared with the CPU. Basically it is CPU
> bound. This is why each machine has 16 cores.
>
>  What I often do when I have lots of small files to handle is use the
>> NLineInputFormat,
>>
> Each file contains a complete/independent set of records. I cannot mix the
> data resulting from processing two different files.
>
>
> ---------
> Ok. I think I need to re-explain my problem :)
> While running jobs on these small files, the computation time was almost 5
> times longer than expected. It looks like the job was affected by the number
> of map tasks that I have (100). I don't know what the best parameters are in
> my case (10 MB files).
>
> I have zero reduce tasks.
>
>
>
