You could use the Hadoop scheduler and task management to initiate the
jobs and write the results back to the Hadoop filesystem, but I would
guess there are better ways of doing this than using Hadoop just for
scheduling (perhaps a simple web service on each machine through which
you can remotely trigger the processing?). I am by no means a Hadoop
expert, though.
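
If you do go the Hadoop route, Hadoop Streaming is probably the path
of least resistance: any executable that reads lines from stdin and
writes lines to stdout can act as the mapper, so the existing ANSI C
code would barely need to change. Below is a rough sketch; the
run_optimizer() call is just a hypothetical stand-in for however the
optimizer is actually entered.

/* streaming_mapper.c -- sketch of wrapping the optimizer as a Hadoop
 * Streaming mapper.  run_optimizer() is hypothetical; replace it with
 * the real entry point of your optimizer. */
#include <stdio.h>
#include <stdlib.h>

extern double run_optimizer(double start); /* returns objective value */

int main(void)
{
    char line[256];

    /* Streaming hands each mapper its share of the input file
     * (one starting point per line) on stdin. */
    while (fgets(line, sizeof line, stdin) != NULL) {
        double start = strtod(line, NULL);
        double score = run_optimizer(start);

        /* A single constant key routes every result to one reducer,
         * which keeps the minimum; sorting the raw output and taking
         * the first line works just as well. */
        printf("best\t%.10f\t%.10f\n", score, start);
    }
    return 0;
}

You would put one starting point per line in a file on HDFS, point the
streaming jar's -input at it, and use a trivial reducer to pick the
winning run. But again, a plain ssh loop over your desktops may be
less machinery for the same result.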

Cheers,

Tim

On Thu, Mar 19, 2009 at 7:48 PM, John Bergstrom <hillstr...@gmail.com> wrote:
> Hi,
>
> Can anyone tell me if Hadoop is appropriate for the following application?
>
> I need to perform optimization using a single, small input data set.
> To get a good result I must make many independent runs of the
> optimizer, where each run is initiated with a different starting
> point. At completion, I just choose the best solution from all of the
> runs. So my problem is not that I'm working with big data; I just want
> to speed up my run time by linking several Ubuntu desktops that are
> available to me. The optimizer is written in ANSI C.
>
> Thanks,
>
> John
>
