Just as a point (or two) of clarification: the HP (actually, Intel) processors 
are called "Itanium", not "Itanic". HP is the primary vendor of hardware using 
Itanium, but the chips are made by Intel.

Kevin

-----Original Message-----
From: [email protected] [mailto:[email protected]] On 
Behalf Of John Summerfield
Sent: Monday, June 22, 2009 1:27 AM
To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list
Subject: Re: [rhelv5-list] OT cluster or separate machines?

bryan wrote:
> Hi Everyone
> 
> I thought I'd post this one here and get a general feel for what people 
> tend to do given this scenario..
> 
> I have some money to spend on servers and have carte blanche to do 
> whatever I like as I see fit with the budget I have.
> 
> This particular group uses statistical genetics software to essentially 
> crunch through numbers and output a result at the end. Having looked at 
> a few of the programs they take up 100% processor usage once they kick 
> off. On the multicore machines they just take over another one of the 
> idle processors and that similarly runs at 100% until the calculations 
> are done.
> 
> Typically users will log into a free machine and set off an analysis 
> process - the process has been known to last a couple of weeks on the 
> bigger chromosomes and datasets in returning all the data they require.
> 
> So onto my question - would I be better in linking the machines as a 
> cluster which is all new to me, or just using separate machines as we 
> currently do?
> 
> The budget will buy six dell quad core opterons PE2970 machines from my 
> first quote back from dell.
> 
> I'm more than happy to read and learn so if anyone has some good links 
> I'd be grateful, similarly if you run something like this I'd be glad to 
> hear of your experience.

I don't have any particular expertise, but out of idle curiosity I do a 
bit of research sometimes.


First thought is a multiprocessor multicore system. You could also 
research IBM's Power and Sun's Sparc (and HP's Itanic) systems.

Those systems will allow you to continue to work pretty much as you do 
now: all the number crunching for a task will be done on a single CPU 
(at a time), but the load will be balanced better and implementation 
would be pretty straightforward.

I think a Beowulf cluster or the like might suit better for the longer 
term, since it allows the load to be moved around. However, for best 
results, your algorithms need to be designed to benefit from parallel 
computation, and I think you might need different compilers.

Look at top500.org for ideas on what can be done. There has to be 
someone at those sites (or their suppliers, often IBM) who can educate 
you on how it's done.

You should also evaluate Intel's icc compiler. I've never used it, but 
reputedly it produces better code than gcc.




-- 

Cheers
John

-- spambait
[email protected]  [email protected]
-- Advice
http://webfoot.com/advice/email.top.php
http://www.catb.org/~esr/faqs/smart-questions.html
http://support.microsoft.com/kb/555375

You cannot reply off-list:-)

_______________________________________________
rhelv5-list mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/rhelv5-list

