On Mon, Dec 19, 2011 at 01:02:28PM -0800, Steve Dekorte wrote:

> Suppose you want to write an app to help people organize events. 
> Neither developing nor running the app is compute bound, and a 
> machine 1000x faster in itself likely wouldn't help much with either.

Suppose I need to simulate 10^12 neurons with a full compartmental
model in realtime. Or render photorealistic 4K video for an 
interactive virtual world. Or optimize a vehicle for atmospheric 
reentry. Simulate climate. Fold proteins. Map a parameter 
space for barren areas. Optimize circuit layout on silicon. 
Analyze a NEMS device in a hybrid model, with some 10 atoms given 
full QM treatment.
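
A quick back-of-envelope for the first example (every constant
below is an assumption for illustration, not a measurement):

  # cost of simulating 10^12 compartmental neurons in real time
  neurons        = 1e12   # neuron count, as above
  compartments   = 1e3    # assumed compartments per neuron
  flops_per_step = 1e2    # assumed FLOPs per compartment per step
  dt             = 1e-4   # assumed 0.1 ms integration timestep (s)

  flops = neurons * compartments * flops_per_step * (1 / dt)
  print("%.0e FLOP/s" % flops)   # 1e+21: a zettaFLOPS machine

That is roughly five orders of magnitude beyond the fastest
machines of 2011, so "compute bound" is an understatement here.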
 
> However, using a garbage collected OO language would. So inasmuch 
> as faster machines lower the cost of higher abstractions, they
> are helpful for programming. But we are already at the point where
> most of our time programming is sitting in front of an idle machine
> trying to tell it what to do. 

Massively parallel systems would eliminate the coding, letting
you specify the boundary conditions instead. Or evaluate system
behaviour in search of better solutions.
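
To make that concrete, here is a minimal sketch of the style of
programming I mean: you state the boundary conditions and an
objective, and brute-force search does the rest. All names and
constraints below are made up:

  import random

  def ok(design):
      # stand-in boundary conditions (say, mass and thermal budgets)
      return sum(design) < 1.0

  def cost(design):
      # stand-in objective to minimize
      return sum(x * x for x in design)

  best = None
  for _ in range(100000):    # every trial is independent, so this
      d = [random.random() for _ in range(3)]    # parallelizes freely
      if ok(d) and (best is None or cost(d) < cost(best)):
          best = d

  print(best, cost(best))

With enough compute the loop bound goes up and the programmer's
job shrinks to writing ok() and cost().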
 
> I can't make a hard case for it, but I'd suggest that most of the
> utility we've gained from computers has been from communication 
> and organization for more efficient resource allocation, that
> the development of tools for these areas is the largest bottleneck
> to maximizing the utility of computers and that this is generally 
> not a compute bound problem.

I think the reason we've made so little progress is precisely that
we're computationally bound. Many solutions suddenly become viable
once everybody has access to nine orders of magnitude more storage
and performance.
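
For scale (2011 desktop baselines assumed, purely illustrative):

  desktop_flops   = 1e10   # ~10 GFLOPS CPU
  desktop_storage = 1e12   # ~1 TB disk
  factor          = 1e9    # nine orders of magnitude

  print("%.0e FLOP/s" % (desktop_flops * factor))    # 1e+19
  print("%.0e bytes"  % (desktop_storage * factor))  # 1e+21

That is ten exaFLOPS and a zettabyte on every desk; the neuron
simulation above comes within a couple of orders of magnitude.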
