Re: How to speed up of Map/Reduce job?

2011-02-03 Thread madhu phatak
Most Hadoop use cases involve processing large data sets. But in real-time applications, the data provided by the user is relatively small, in which case it is not advisable to use Hadoop. On Tue, Feb 1, 2011 at 10:01 PM, Black, Michael (IS) wrote: > Try this rather small C++ program...it will more than

RE: How to speed up of Map/Reduce job?

2011-02-01 Thread Black, Michael (IS)
Try this rather small C++ program...it will more than likely be a LOT faster than anything you could do in Hadoop. Hadoop is not the hammer for every nail. Too many people think that any "cluster" solution will automagically scale their problem...tain't true. I'd appreciate hearing your resul
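The original program is truncated out of the archive preview, so here is a hedged sketch of the kind of single-process word count Michael is presumably referring to: one JVM-free binary that streams stdin and tallies words in memory, with no job-submission or scheduling overhead. The function name `word_count` and the overall structure are illustrative, not taken from the missing attachment.

```cpp
#include <cassert>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Count whitespace-separated words from any input stream.
// For a 1.5 MB file this runs in milliseconds on one core,
// with none of Hadoop's per-job startup cost.
std::map<std::string, long> word_count(std::istream& in) {
    std::map<std::string, long> counts;
    std::string word;
    while (in >> word) {
        ++counts[word];
    }
    return counts;
}

// Example wiring: read stdin, print "word<TAB>count" lines, e.g.
//   int main() {
//       for (const auto& kv : word_count(std::cin))
//           std::cout << kv.first << '\t' << kv.second << '\n';
//   }
```

This mirrors what a single WordCount map/reduce pass computes, minus the distributed machinery; for inputs that fit comfortably in one machine's memory, that machinery is pure overhead.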

Re: How to speed up of Map/Reduce job?

2011-02-01 Thread Steve Loughran
On 01/02/11 08:19, Igor Bubkin wrote: Hello everybody. I have a problem. I installed Hadoop on a 2-node cluster and ran the Wordcount example. It takes about 20 sec to process a 1.5 MB text file. We want to use Map/Reduce in real time (interactively, driven by user requests). User can't wait for his requ

RE: How to speed up of Map/Reduce job?

2011-02-01 Thread praveen.peddi
Hi Igor, I am not sure Hadoop is designed for real-time requests. I have a feeling that you are trying to use Hadoop in a way that it is not designed for. From my experience, a Hadoop cluster will be much slower than "local" Hadoop mode when processing a smaller dataset, because there is always ext
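The "local" mode Praveen mentions can be selected in configuration. A minimal sketch, assuming classic Hadoop 0.20/1.x property names (values are illustrative): setting the job tracker to `local` in `mapred-site.xml` runs the whole job in a single JVM, skipping the cluster scheduling and task-launch overhead that dominates small jobs.

```xml
<!-- mapred-site.xml fragment (hypothetical example): run MapReduce jobs
     in local, single-JVM mode instead of submitting them to a cluster. -->
<property>
  <name>mapred.job.tracker</name>
  <value>local</value>
</property>
```

For a 1.5 MB input, this typically removes most of the fixed per-job cost, though it obviously gives up distribution entirely.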

Re: How to speed up of Map/Reduce job?

2011-02-01 Thread li ping
Hadoop is not designed for real-time applications. But you can change parameters to reduce job execution time. I searched for an article on Google; hope you can find some useful information in it. http://www.slideshare.net/ImpetusInfo/ppt-on-advanced-hadoop-tuning-n-optimisation On Tue, Feb
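To make the suggestion concrete, here is a hedged sketch of the kind of knobs tuning guides from this era cover, using Hadoop 1.x property names; the values shown are examples for a small-job workload, not recommendations from the linked slides.

```xml
<!-- Illustrative mapred-site.xml tuning fragment (Hadoop 1.x names). -->
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>-1</value>
  <!-- Reuse task JVMs indefinitely instead of forking a fresh JVM per
       task; JVM startup is a large share of runtime on tiny inputs. -->
</property>
<property>
  <name>io.sort.mb</name>
  <value>200</value>
  <!-- Map-side sort buffer; a larger buffer reduces spills to disk. -->
</property>
```

Tuning like this shrinks the fixed overhead but does not change the fundamental point made upthread: for small, interactive requests, a single-process solution will still win.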