> if you look closely enough, this kind of breaks down.  numa
> machines are pretty popular these days (opteron, intel qpi-based
> processors).  it's possible with a modest loss of performance to
> share memory across processors and not worry about it.

Way back in the dim times when hypercubes roamed
the earth, I played around a bit with parallel machines.
When I was writing my master's thesis, I tried to find
a way to dispel the idea that shared memory vs. interconnection
network was as bipolar a distinction as the terms multiprocessor
and multicomputer would suggest.  One of the few things
in that work that I think still makes sense is characterizing
the degree of coupling as a continuum based on the ratio
of bytes transferred between CPUs to bytes accessed in
local memory.  So C.mmp would have a very high degree
of coupling and SETI@home would have a very low degree
of coupling.  The upshot is that if I have a fast enough
network, my degree of coupling is high enough that I
don't really care whether or how much memory is local
and how much is on the other side of the building.
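
To make that ratio concrete, here's a minimal sketch in Python; the
workload byte counts below are invented purely for illustration.

def degree_of_coupling(remote_bytes, local_bytes):
    """Ratio of bytes moved between CPUs to bytes accessed in local memory."""
    return remote_bytes / local_bytes

# Tightly coupled, C.mmp-style workload: nearly every reference
# crosses the shared switch, so the ratio is high.
print(degree_of_coupling(remote_bytes=8e9, local_bytes=1e9))    # 8.0

# Loosely coupled, SETI@home-style workload: fetch a small work
# unit, then crunch on it locally for hours, so the ratio is tiny.
print(degree_of_coupling(remote_bytes=1e6, local_bytes=1e12))   # 1e-06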

Of course, until recently, the rate at which the CPU must
fetch to keep its pipeline full has grown much
faster than network speeds.  So the idea of remote
memory hasn't been all that useful.  However, I wouldn't
be surprised to see that change over the next 10 to 20
years.  So maybe my local CPU will gain access to most
of its memory by importing /dev/memctl from a memory
server (1/2 :))

BLS

