On 05/17/2013 10:01 AM, Joe Landman wrote:
> That said, putting 192 cores in 48 compute nodes, along with 1/4 PB of
> storage in a 4U rack mount container is pretty darned awesome.  And the
> CPUs will get faster and more efficient over time, so the HPC comment
> likely has an expiration date on it.
>
> Add to this, they run Ubuntu, Debian, and Fedora.  Easy tool stack, most
> stuff just works.  FP heavy and memory intensive code ... not so well.
> Integer heavy code, pretty well.

Playing devil's advocate here, but I'm honestly interested to know:
Isn't there a tacit expectation that if you are moving tons of data 
really fast, the workload will be "memory intensive"?  Are you not moving 
that data from disk into memory first, doing a bit (or a lot) of work on 
it, and then writing it back out to disk or discarding it entirely?

Maybe I'm getting the definition of "memory intensive" wrong, though... 
(Does memory-bound == memory-intensive?  I think not, but I could be wrong.)

Best,

ellis
_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf