On Apr 12, 2012, at 5:12 PM, BGB wrote:
On 4/11/2012 11:14 PM, Josh Gargus wrote:
On Apr 8, 2012, at 7:31 PM, BGB wrote:
now, why, exactly, would anyone consider doing rendering on the server?...
One reason might be to amortize the cost of global illumination
calculations. Since much of the computation is view-independent, a Really
Big Server could compute this once per frame and use the results to render a
frame from the viewpoint of each connected client. Then, encode it with
H.264 and send it downstream. The total number of watts used could be much
smaller, and the software architecture could be much simpler.
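As a rough sketch of the kind of per-frame server loop being described (every function and attribute name below is a hypothetical placeholder, not any real engine's API):

# Hypothetical per-frame loop for a shared-GI render server.
def server_frame(world, clients):
    # View-independent work is done once per frame, regardless of client count.
    world.step_physics()
    gi_solution = compute_view_independent_gi(world)  # e.g. radiosity or light probes

    # Per-client work reuses the shared GI solution.
    for client in clients:
        frame = render_view(world, gi_solution, client.camera)
        packet = encode_h264(frame, client.encoder_state)
        client.send(packet)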
I suspect that this is what OnLive is aiming for... supporting existing
PC/console games is an interim step as they try to bootstrap a platform
with enough users to encourage game developers to make this leap.
but, the bandwidth and latency requirements would be terrible...
What do you mean by terrible? 1 MB/s gives quite good quality video. Depending on
the type of game, up to 100 ms of latency is OK.
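To put rough numbers on those figures (the client count, network round trip, and codec latency below are assumptions for illustration, not measurements):

# Back-of-envelope arithmetic for the figures quoted above.
stream_rate_mbit = 8        # 1 MB/s per client is roughly 8 Mbit/s of video
clients = 10_000            # assumed concurrent clients on one datacenter
print(f"aggregate downstream: ~{stream_rate_mbit * clients / 1000:.0f} Gbit/s")

latency_budget_ms = 100
network_rtt_ms = 40         # assumed round trip to a regional datacenter
codec_ms = 20               # assumed encode + decode latency
print(f"left for simulation and rendering: {latency_budget_ms - network_rtt_ms - codec_ms} ms")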
never mind that currently, AFAIK, no HW exists which can do full-scene
global illumination in real-time (at least using radiosity or similar),
You somewhat contradict yourself below, when you argue that clients can already
do small-scale real-time global illumination (no fair to argue that it's not
computationally tractable on the server, but it can already be done on the
client).
Also, Nvidia could churn out such hardware in one product cycle, if it saw a
market for it. Contrast this to the uncertainty of how long we'll have to wait
for the hypothetical battery breakthrough that you mention below.
much less handle this *and* do all of the 3D rendering for a potentially
arbitrarily large number of connected clients.
Just to be clear, I've been making an implicit assumption about these
hypothetical ultra-realistic game worlds: that the number of FLOPs spent on
physics/GI would be 1-2 orders of magnitude greater than the FLOPs to render
the scene from a particular viewpoint. If this is true, then it's not so
expensive to render each additional client. If it's false, then everything I'm
saying is nonsense.
another problem is that there isn't much in the rendering process that can
be aggregated across clients which isn't already done (between frames, or
ahead of time) in current games.
I'm explicitly not talking about current games.
in effect, the rendering costs at the datacenter are likely to scale linearly
with the number of connected clients, rather than at some shallower curve.
Asymptotically, yes, it would be linear, except for the big chunk of
global-illumination / physics simulation that could be amortized. And the
higher you push the fidelity of the rendering, the bigger the chunk that can be
amortized.
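A toy cost model of that argument, with invented FLOP counts chosen only to show the shape of the curve: if the shared GI/physics work dwarfs the per-viewpoint work, the marginal cost of each extra client stays small.

# Toy cost model; the FLOP figures are made up purely for illustration.
shared_flops = 1e12      # view-independent GI + physics, computed once per frame
per_view_flops = 1e10    # rendering one client's viewpoint (assumed ~100x smaller)

for n in (1, 10, 100, 1000):
    total = shared_flops + n * per_view_flops
    print(f"{n:5d} clients: {total / n:.2e} FLOPs per client per frame")
# Total cost is asymptotically linear in the client count, but per-client cost
# falls from ~1e12 toward the ~1e10 floor as the shared chunk is amortized.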
much better, I think, is just following the current route:
getting client PCs to have much better HW, so that they can do their own
localized lighting calculations (direct illumination can already be done in
real-time, and global illumination can be done small-scale in real-time).
I understand, that's what you think :-)
the cost at the datacenters is also likely to be much lower, since they need
much less powerful servers, and have to spend much less money on electricity
and bandwidth.
Money spent on electricity and bandwidth is irrelevant, as long as there is a
business model that generates revenue that grows (at least) linearly with
resource usage. I'm speculating that such a business model might be possible.
likewise, the total power used tends to be fairly insignificant for an end
user (except when operating on batteries), since PC power requirements
are small versus, say, air conditioners or refrigerators, whereas people running
datacenters have to deal with the full brunt of the power bill.
See above.
the power-use issue (for mobile devices) could, just as easily, be solved by
some sort of much higher-capacity battery technology (say, a laptop or
cell-phone battery which, somehow, had a capacity well into the kWh range...).
It would have to be a huge breakthrough. Desktop GPUs are still (at least) an
order of magnitude too slow for this type of simulation, and they draw 200 W.
This is roughly two orders of magnitude more power than an iPad draws. And then
there's the question of heat dissipation.
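To put hedged numbers on that gap (the tablet draw, session length, and battery size below are assumptions):

# Rough power/energy arithmetic; all figures are assumptions, not measurements.
gpu_draw_w = 200          # desktop GPU under load, as quoted above
tablet_draw_w = 2         # roughly iPad-class power draw
session_hours = 2         # assumed length of a play session
laptop_battery_wh = 60    # typical laptop battery pack

print(f"power ratio, GPU vs. tablet: ~{gpu_draw_w / tablet_draw_w:.0f}x")
session_wh = gpu_draw_w * session_hours
print(f"one session: {session_wh} Wh, ~{session_wh / laptop_battery_wh:.0f}x a {laptop_battery_wh} Wh laptop battery")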
It's still a good point. I never meant to imply that a server-rendering,
video-streaming architecture is the be-all and end-all, but your point brings
this into clearer focus.
at this point, people won't really care much if, say, plugging in their
cell-phone to recharge draws several amps, given that power is
relatively cheap in the greater scheme of things (and, assuming migration
away from fossil fuels, could likely still get considerably cheaper over
time).
meanwhile, no obvious current/near-term technology is likely to make internet
bandwidth considerably