On Wed, Oct 14, 2009 at 2:21 PM, Tim Newsham <news...@lava.net> wrote:
> I'm not familiar with the berkeley work.

Sorry I can't readily find the paper (the URL is somewhere on IMAP @Sun :-()
But it got presented at the Berkeley ParLab overview given to us by
Dave Patterson.
They were talking thin hypervisors and that sort of stuff. More details here:
   http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-23.pdf
(page 10) but still no original paper in sight...

> I'm still digesting it.  My first thoughts were that if my pc is a
> distributed heterogeneous computer

It may very well be that, but why would you want to manage that complexity?
Your GPU is already heavily "multicore", yet you don't see it (and you really
don't want to see it!)

> The mention that "... the overhead of cache coherence restricts the ability
> to scale up to even 80 cores" is also eye opening. If we're at approx 8
> cores today, that's only 5 yrs away (if we double cores every
> 1.5 yrs).
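
For what it's worth, the 5-year figure checks out as a back-of-the-envelope
calculation (assuming the quoted starting point of ~8 cores and a 1.5-year
doubling period):

```python
import math

# Back-of-the-envelope check of the quoted estimate: starting from
# ~8 cores and doubling every 1.5 years, how long until ~80 cores?
cores_now = 8
cores_target = 80
doubling_period_years = 1.5

doublings = math.log2(cores_target / cores_now)  # log2(10) ~ 3.32 doublings
years = doublings * doubling_period_years        # ~ 5.0 years
print(round(years, 1))
```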

A couple of years ago we had a Plan 9 summit @Google campus and Ken was
there. I still remember the question he asked me: what exactly would you make
all those cores do on your desktop?

Frankly, I still don't have a good answer.

Thanks,
Roman.
