My feeling is that JavaSpaces could be a good choice. Here is my plan (a rough code sketch follows the list):

   - Have one machine running JavaSpaces (using the GigaSpaces free community
   version), put the data in there, along with a small object that keeps the
   starting point;
   - Each worker machine reads the Space (all workers can read at the same
   time, no lock) and also updates the starting-point object (take it from the
   Space, update it, write it back) - this is a locking operation, but a fast
   one;
   - Results go back into the Space.
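
Here is a minimal sketch of the worker side against the standard JavaSpaces
interface (net.jini.space.JavaSpace), which GigaSpaces supports. The
StartingPoint and Result entry classes and the claimNextRun/reportResult
helpers are names I made up for illustration; in a real setup you would get
the JavaSpace reference from a Jini lookup service or the GigaSpaces client
API rather than the placeholder shown here.

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // Shared counter entry; JavaSpaces entries need public fields
    // and a public no-arg constructor.
    class StartingPoint implements Entry {
        public Integer next;                      // index of the next run to hand out
        public StartingPoint() {}
        public StartingPoint(Integer next) { this.next = next; }
    }

    // One optimizer result; the master later reads all of these and keeps the best.
    class Result implements Entry {
        public Integer runIndex;
        public Double objective;
        public Result() {}
        public Result(Integer runIndex, Double objective) {
            this.runIndex = runIndex;
            this.objective = objective;
        }
    }

    class Worker {
        // Take the counter (this is the locking step), bump it, write it back.
        static int claimNextRun(JavaSpace space) throws Exception {
            StartingPoint template = new StartingPoint();   // null field = wildcard match
            StartingPoint sp =
                (StartingPoint) space.take(template, null, Long.MAX_VALUE);
            int myRun = sp.next;                            // the run this worker owns
            sp.next = myRun + 1;
            space.write(sp, null, Lease.FOREVER);           // release the "lock"
            return myRun;
        }

        // Write one run's objective value back into the Space.
        static void reportResult(JavaSpace space, int runIndex, double objective)
                throws Exception {
            space.write(new Result(runIndex, objective), null, Lease.FOREVER);
        }
    }

Since the optimizer itself is a C program, each worker could just claim a run
index with claimNextRun, launch the optimizer as an external process with the
corresponding starting point, and write the objective value back with
reportResult; the Java side stays a thin wrapper.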

I'd be interested to hear what you end up doing.

Mark

On Thu, Mar 19, 2009 at 1:48 PM, John Bergstrom <hillstr...@gmail.com> wrote:

> Hi,
>
> Can anyone tell me if Hadoop is appropriate for the following application.
>
> I need to perform optimization using a single, small input data set.
> To get a good result I must make many independent runs of the
> optimizer, where each run is initiated with a different starting
> point. At completion, I just choose the best solution from all of the
> runs. So my problem is not that I'm working with big data, I just want
> to speed up my run time by linking several Ubuntu desktops that are
> available to me. The optimizer is written in ANSI C.
>
> Thanks,
>
> John
>
