2009/6/22 bryan <[email protected]>:
> Hi Everyone
>
> I thought I'd post this one here and get a general feel for what people tend
> to do given this scenario..
>
> I have some money to spend on servers and have carte blanche to do whatever
> I like as I see fit with the budget I have.
>
> This particular group uses statistical genetics software to essentially
> crunch through numbers and output a result at the end. Having looked at a
> few of the programs they take up 100% processor usage once they kick off. On
> the multicore machines they just take over another one of the idle
> processors and that similarly runs at 100% until the calculations are done.
>
> Typically users will log into a free machine and set off an analysis process
> - the process has been known to last a couple of weeks on the bigger
> chromosomes and datasets in returning all the data they require.
>
> So onto my question - would I be better in linking the machines as a cluster
> which is all new to me, or just using separate machines as we currently do?
>

That really depends on whether you want to educate your users.

For a small set of users, the "find a free machine and run your job"
approach works well. For a large set of users and machines, a cluster
works better.

But to use a cluster efficiently, you need to educate your users to
run their jobs through some kind of queueing software. You tell that
software that each of your PE servers can run 3 or 4 jobs; users then
submit their analyses through it, and it finds the machine with the
lowest load and cranks the job out there. The advantage is that if
you have more jobs than machine cores, it simply queues the extras
until a slot frees up.
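
To make the "queue the extras" idea concrete, here's a rough Python
sketch of the same behaviour on a single box - the command, dataset
names and slot count are all made up, it's just to show that only as
many jobs run at once as there are slots and the rest wait their turn:

    # queue_demo.py - more jobs than slots means the extras wait in line
    import subprocess
    from multiprocessing import Pool

    SLOTS_PER_MACHINE = 4   # e.g. one slot per core on a quad-core box

    def run_analysis(dataset):
        # Placeholder command line - substitute the real stats binary here.
        return subprocess.call(["echo", "analysing", dataset])

    if __name__ == "__main__":
        datasets = ["chr%d.dat" % n for n in range(1, 23)]   # 22 jobs, 4 slots
        pool = Pool(processes=SLOTS_PER_MACHINE)
        exit_codes = pool.map(run_analysis, datasets)  # extras queue automatically
        pool.close()
        pool.join()

A real scheduler does the same thing, just across every machine in the
cluster rather than one.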

If you want something RHEL-based with a lower barrier to entry, have a
look at ROCKS: http://www.rocksclusters.org/ - it has sets of packages
("rolls") for most things (possibly including your stats packages) and
for several types of queueing software. All you need to do is install
it on the head node of the cluster and then network boot the rest of
the machines - definitely easier than rolling your own...
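
To give a feel for the user-facing side once a scheduler is in place
(ROCKS ships a Sun Grid Engine roll, for example), submitting work is
just handing a small job script to qsub and letting the scheduler pick
the node. The job names, datasets and the run_analysis binary below
are made up - treat it as a sketch, not a recipe:

    # submit_jobs.py - hand each analysis to the scheduler, not a chosen machine
    # Assumes an SGE-style qsub is on the PATH; run_analysis is a placeholder.
    import subprocess
    import tempfile

    datasets = ["chr1.dat", "chr2.dat", "chr21.dat"]

    for dataset in datasets:
        name = "stats_" + dataset.split(".")[0]
        job = "#!/bin/sh\n#$ -cwd\n#$ -N %s\n./run_analysis %s\n" % (name, dataset)
        f = tempfile.NamedTemporaryFile(mode="w", suffix=".sh", delete=False)
        f.write(job)
        f.close()
        subprocess.call(["qsub", f.name])   # scheduler queues it, picks a free node

The users never have to think about which machine is free - the
scheduler does that for them, which is really the whole point.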

-- 
Sam
