Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-19 Thread Josh Gargus

On Apr 12, 2012, at 5:12 PM, BGB wrote:

 On 4/11/2012 11:14 PM, Josh Gargus wrote:
 On Apr 8, 2012, at 7:31 PM, BGB wrote:
 
 now, why, exactly, would anyone consider doing rendering on the server?...
 
 One reason might be to amortize the cost of global illumination 
 calculations.  Since much of the computation is view-independent, a Really 
 Big Server could compute this once per frame and use the results to render a 
 frame from the viewpoint of each connected client.  Then, encode it with 
 H.264 and send it downstream.  The total number of watts used could be much 
 smaller, and the software architecture could be much simpler.
 
 I suspect that this is what OnLive is aiming for... supporting existing 
 PC/console games is an interim step as they try to boot-strap a platform 
 with enough users to encourage game developers to make this leap.
 
 but, the bandwidth and latency requirements would be terrible...

What do you mean by "terrible"?  1 MB/s is quite good-quality video.  Depending on 
the type of game, up to 100 ms of latency is OK.
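
To put rough numbers on that (these are illustrative assumptions on my part, 
not measurements -- a minimal sketch of the budget, in Python):

    # Rough budget for streaming rendered frames to one client.
    # All figures below are illustrative assumptions, not measurements.

    video_rate_MBps = 1.0  # ~1 MB/s of H.264, i.e. 8 Mbit/s
    monthly_GB = video_rate_MBps * 3600 * 24 * 30 / 1000
    print(f"per-client stream: {video_rate_MBps * 8:.0f} Mbit/s, "
          f"~{monthly_GB:.0f} GB/month if streamed continuously")

    # One plausible way a 100 ms motion-to-photon budget could be split:
    budget_ms = {"input uplink": 15, "simulate+render": 20,
                 "H.264 encode": 10, "downlink": 15,
                 "decode+display": 20, "slack": 20}
    assert sum(budget_ms.values()) == 100
    for stage, ms in budget_ms.items():
        print(f"{stage:>16}: {ms} ms")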


 
 never mind that currently, AFAIK, no HW exists which can do full-scene 
 global-illumination in real-time (at least using radiosity or similar),

You somewhat contradict yourself below, when you argue that clients can already 
do small-scale real-time global illumination (it's no fair to argue that it's 
computationally intractable on the server when it can already be done on the 
client).

Also, Nvidia could churn out such hardware in one product cycle, if it saw a 
market for it.  Contrast this to the uncertainty of how long we'll have to wait 
for the hypothetical battery breakthrough that you mention below.


 much less handle this *and* do all of the 3D rendering for a potentially 
 arbitrarily large number of connected clients.

Just to be clear, I've been making an implicit assumption about these 
hypothetical ultra-realistic game worlds: that the number of FLOPs spent on 
physics/GI would be 1-2 orders of magnitude greater than the FLOPs to render 
the scene from a particular viewpoint.  If this is true, then it's not so 
expensive to render each additional client.  If it's false, then everything I'm 
saying is nonsense.
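
To make the assumption concrete, here's a toy cost model (the 100x ratio below 
is just the hypothesized upper end of those 1-2 orders of magnitude, and the 
units are arbitrary):

    # Toy FLOP budget: shared physics/GI vs. per-viewpoint rendering.
    # The 100x ratio is a hypothesized figure, not a measurement.

    F_view = 1.0               # cost to render one client's viewpoint
    F_shared = 100.0 * F_view  # view-independent physics/GI, paid once per frame

    for n_clients in (1, 10, 100, 1000):
        total = F_shared + n_clients * F_view
        per_client = total / n_clients
        print(f"{n_clients:5d} clients: {per_client:7.2f} units/client "
              f"(vs. {F_shared + F_view:.0f} if each client computed everything)")

As the client count grows, the per-client cost approaches the (small) 
view-dependent cost alone; that's the whole bet.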


 another problem is that there isn't much in the rendering process which can 
 be aggregated between clients that isn't already done (between frames, or 
 ahead-of-time) in current games.

I'm explicitly not talking about current games.


 
 in effect, the rendering costs at the datacenter are likely to scale linearly 
 with the number of connected clients, rather than at some shallower curve.

Asymptotically, yes it would be linear, except for the big chunk of 
global-illumination / physics simulation that could be amortized.  And the 
higher you push the fidelity of the rendering, the bigger this amortizable 
chunk becomes.
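
In sketch form, the per-frame server loop I have in mind would look something 
like the following Python, with every expensive step reduced to a dummy 
stand-in (all names here are hypothetical placeholders; the point is only 
which work is shared and which is per-client):

    # Hypothetical per-frame server loop. The view-independent work runs
    # once; only the cheap view-dependent pass scales with client count.
    from dataclasses import dataclass, field

    @dataclass
    class Client:
        camera: tuple                       # stand-in for a view transform
        sent: list = field(default_factory=list)
        def send(self, frame): self.sent.append(frame)

    def simulate_physics(scene):            # shared, once per frame
        scene["time"] += 1

    def solve_global_illumination(scene):   # shared: the amortizable chunk
        return {"irradiance": scene["time"] * 0.1}  # dummy light field

    def render_view(scene, light_field, camera):    # cheap, per client
        return ("frame", scene["time"], camera, light_field["irradiance"])

    def encode_h264(frame):                 # stand-in for a real encoder
        return str(frame).encode("utf8")

    def server_frame(scene, clients):
        simulate_physics(scene)
        light_field = solve_global_illumination(scene)
        for c in clients:
            c.send(encode_h264(render_view(scene, light_field, c.camera)))

    scene = {"time": 0}
    clients = [Client(camera=(i, 0, 0)) for i in range(3)]
    server_frame(scene, clients)
    print(len(clients[0].sent), "frame(s) sent per client")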


 
 much better, I think, is just following the current route:
 getting client PCs to have much better HW, so that they can do their own 
 localized lighting calculations (direct illumination can already be done in 
 real-time, and global illumination can be done small-scale in real-time).

I understand, that's what you think :-)


 
 the cost at the datacenters is also likely to be much lower, since they need 
 much less powerful servers, and have to spend much less money on electricity 
 and bandwidth.

Money spent on electricity and bandwidth is irrelevant, as long as there is a 
business model that generates revenue that grows (at least) linearly with 
resource usage.  I'm speculating that such a business model might be possible.


 
 likewise, the total power used tends to be fairly insignificant for an end 
 user (except when operating on batteries), since PC power-use requirements 
 are small vs., say, air-conditioners or refrigerators, whereas people running 
 data-centers have to deal with the full brunt of the power-bill.

See above.


 
 the power-use issue (for mobile devices) could, just as easily, be solved by 
 some sort of much higher-capacity battery technology (say, a laptop or 
 cell-phone battery which, somehow, had a capacity well into the kWh range...).

It would have to be a huge breakthrough.  Desktop GPUs are still (at least) an 
order of magnitude too slow for this type of simulation, and they draw 200 W.  
This is roughly 2 orders of magnitude greater than an iPad.  And then there's 
the question of heat dissipation.
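
The energy arithmetic behind that, with typical rounded battery capacities 
(these figures are my assumptions, not exact specs):

    # How long a ~200 W desktop-GPU workload would last on common batteries.
    # Capacities are typical rounded figures, not exact specs.

    gpu_watts = 200.0
    batteries_Wh = {"phone": 10.0, "tablet": 40.0, "laptop": 60.0}

    for name, wh in batteries_Wh.items():
        minutes = wh / gpu_watts * 60
        print(f"{name}: ~{wh:.0f} Wh battery drained in ~{minutes:.0f} min "
              f"at {gpu_watts:.0f} W")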

It's still a good point.  I never meant to imply that a server-rendering, 
video-streaming architecture is the be-all and end-all, but your point brings 
this into clearer focus.


 
 at this point, people won't really care much if, say, plugging in their 
 cell-phone to recharge draws several amps, given that power is 
 relatively cheap in the greater scheme of things (and, assuming a migration 
 away from fossil fuels, could likely still get considerably cheaper over 
 time).
 
 meanwhile, no obvious current/near-term technology is likely to make internet 
 bandwidth considerably cheaper.

Re: [fonc] Smalltalk-75

2012-04-19 Thread Alan Kay
Well, part of it is that the 15-year-old was exceptional -- his name is Steve 
Putz, and, as with several others of our child programmers -- such as Bruce 
Horn, who was the originator of the Mac Finder -- he became a very good 
professional.

And part of it is that Smalltalk (basically Smalltalk-72) was quite approachable 
for children. We also had quite a few exceptional 12-year-old girls who did 
remarkable applications.

Even so, Steve Putz's circuit diagram drawing program was terrific! Especially 
the UI he designed and built for it.

Cheers,

Alan





 From: John Pratt jpra...@gmail.com
To: fonc@vpri.org 
Sent: Thursday, April 19, 2012 4:05 PM
Subject: [fonc] Smalltalk-75
 


How is it that a 15-year-old could program a schematic diagram drawing 
application in the 1970s?  Is there any more information about this?

I think I read that Smalltalk changed afterwards.  Isn't this kind of a big 
deal, everyone?


Re: [fonc] Smalltalk-75

2012-04-19 Thread Fernando Cacciola
On Thu, Apr 19, 2012 at 9:43 PM, Alan Kay alan.n...@yahoo.com wrote:
 Well, part of it is that the 15-year-old was exceptional -- his name is
 Steve Putz, and, as with several others of our child programmers -- such
 as Bruce Horn, who was the originator of the Mac Finder -- he became a
 very good professional.

 And part of it is that Smalltalk (basically Smalltalk-72) was quite approachable
 for children. We also had quite a few exceptional 12-year-old girls who did
 remarkable applications.

I was curious, so I googled a bit (impressive how easy it is, these
days, to find something within a couple of minutes).

The girls you are most likely talking about would be Marion Goldeen
and Susan Hamet, who created a painting system and an OOP illustration
system, respectively.
I've found some additional details and illustrations here:
http://www.manovich.net/26-kay-03.pdf

What is truly remarkable, IMO, is Smalltalk (even -72). These
children might have been exceptional, but IIUC it's not like they were,
say, fourth-generation mathematicians and programmers who learned
how to assemble a computer at age 3 :)


Best

-- 
Fernando Cacciola
SciSoft Consulting, Founder
http://www.scisoft-consulting.com