On Tue, Aug 25, 2009 at 2:57 AM, Steve Loughran <ste...@apache.org> wrote:

>> On petascale-level computers, the application codes' CPU instructions are
>> about 10% floating point (that is, in scientific applications, there are
>> fewer floating point instructions than in most floating point benchmarks).
>>  Of the remaining instructions, about 1/3 are memory-related and 2/3 are
>> integer.  Of the integer instructions, 40% are computing memory locations.
>>
>
> cool. I wonder what percentage is vtable/function lookup in OO code versus
> data retrieval? After all, every time you read or write an instance field in
> a class instance, there is the this+offset maths before the actual
> retrieval, though there is a fair amount of CPU support for such offset
> operations.


Since the codes running on these machines tend to be matrix-oriented
Fortran, I would expect that almost all of this is array index computation.
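
To make that concrete, here is a toy sketch in C (the names and layout are
mine, not from any particular code): every a(i,j)-style reference costs an
integer multiply and a couple of adds to form the address before the one
floating point operation it feeds, and field access in OO code compiles to
the same base+offset pattern Steve mentions.

/* Toy illustration only: column-major (Fortran-style) matrix access. */
#include <stddef.h>

static inline double get(const double *a, size_t lda, size_t i, size_t j) {
    /* Address arithmetic done before the actual load:
       j*lda + i (integer multiply-add), scaled and added to the base. */
    return a[j * lda + i];
}

double trace(const double *a, size_t lda, size_t n) {
    double t = 0.0;
    for (size_t i = 0; i < n; i++)
        t += get(a, lda, i, i);  /* one FP add, several integer address ops */
    return t;
}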


>
>
>> So, on the biggest DOE computers, about 50% of the CPU time is spent on
>> memory-related computations.  I found this pretty mind-boggling when I
>> first learned it.  It seems to me that the "central" part of the computer is
>> becoming the bus, not the CPU.
>>
>
> welcome to the new bottlenecks.
>
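
For what it's worth, the arithmetic in the original note roughly bears that
out: with ~10% floating point, about 30% of all instructions are memory
operations (1/3 of the remaining 90%) and about 24% are address computation
(40% of the ~60% that are integer), which lands right around half.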

Comm. ACM had a recent article in which the author measured (informally) the
throughput available for disk, SSD, and main memory with sequential and
random access patterns.  Sequential disk access was slightly faster than
random access to main memory.

-- 
Ted Dunning, CTO
DeepDyve
