On 4/11/2012 11:14 PM, Josh Gargus wrote:
> On Apr 8, 2012, at 7:31 PM, BGB wrote:
>
>> now, why, exactly, would anyone consider doing rendering on the server?...
>
> One reason might be to amortize the cost of global illumination calculations.
> Since much of the computation is view-independent, a Really Big Server could
> compute this once per frame and use the results to render a frame from the
> viewpoint of each connected client.  Then, encode it with H.264 and send it
> downstream.  The total number of watts used could be much smaller, and the
> software architecture could be much simpler.
>
> I suspect that this is what OnLive is aiming for... supporting existing
> PC/console games is an interim step as they try to boot-strap a platform with
> enough users to encourage game developers to make this leap.

but, the bandwidth and latency requirements would be terrible...

never mind that, currently, AFAIK, no HW exists which can do full-scene global illumination in real-time (at least using radiosity or similar), much less handle this *and* do all of the 3D rendering for a potentially arbitrarily large number of connected clients.

another problem is that there isn't much in the rendering process which can be aggregated between clients that isn't already being done (between frames, or ahead-of-time) in current games.

in effect, the rendering costs at the datacenter are likely to scale linearly with the number of connected clients, rather than at some shallower curve.
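
to make that concrete, here is a rough cost sketch (in Python, with completely made-up per-frame numbers) of why the shared, view-independent work doesn't help much once client counts grow:

# illustrative only: both costs below are assumptions, not measurements
shared_gi_ms  = 50.0   # view-independent GI, computed once per frame
per_client_ms = 10.0   # per-viewpoint rendering + H.264 encode, per client

def server_cost_ms(num_clients):
    # total GPU time the datacenter burns per frame
    return shared_gi_ms + per_client_ms * num_clients

for n in (1, 10, 100, 1000):
    print(n, "clients ->", server_cost_ms(n), "ms/frame")

# the shared term is quickly swamped; total cost grows essentially
# linearly with the number of connected clients.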


much better, I think, is just following the current route:
getting client PCs to have much better HW, so that they can do their own localized lighting calculations (direct illumination can already be done in real-time, and global illumination can be done small-scale in real-time).

the cost at the datacenters is also likely to be much lower, since they need much less powerful servers, and have to spend much less money on electricity and bandwidth.

likewise, the total watts used tend to be fairly insignificant for an end user (except when operating on batteries), since PC power-use requirements are small compared with, say, air-conditioners or refrigerators, whereas people running data-centers have to deal with the full brunt of the power bill.

the power-use issue (for mobile devices) could, just as easily, be solved by some sort of much higher-capacity battery technology (say, a laptop or cell-phone battery which, somehow, had a capacity well into the kWh range...).

at this point, people won't really care much if plugging in their cell-phone to recharge draws, say, several amps, given power is relatively cheap in the greater scheme of things (and, assuming migration away from fossil fuels, could likely still get considerably cheaper over time).

meanwhile, no obvious current/near-term technology is likely to make internet bandwidth considerably cheaper, or latency significantly lower, ...

even with fairly direct fiber-optic connections, long-distance ping times are still likely to be an issue, and it is much harder to interpolate (LERP) video than game state. so, short of putting the servers in a geographically nearby location (like, in the same city as the user), or somehow bypassing the speed of light, it isn't all that likely that ping times will, in general, get much below about 50-100ms (with a world average likely closer to about 400ms).
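
as a sanity check on those numbers, here is a back-of-the-envelope Python calculation of the physics-only round-trip time over fiber (signal speed in glass is roughly 2/3 of c; the distances are illustrative, and real routes add routing, queuing, and encode/decode delays on top):

C_KM_PER_S     = 299792.458        # speed of light in vacuum
FIBER_KM_PER_S = C_KM_PER_S * 2/3  # approximate propagation speed in fiber

def round_trip_ms(distance_km):
    return 2.0 * distance_km / FIBER_KM_PER_S * 1000.0

for label, km in (("same city", 50), ("cross-continent", 4000), ("antipodal", 20000)):
    print(label, round(round_trip_ms(km), 1), "ms")

# same city ~0.5ms, cross-continent ~40ms, antipodal ~200ms, before any
# of the real-world overheads are added.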

this would lead to a generally unsatisfying gaming experience, as there would be an obvious delay between attempting an action and the results of that action becoming visible (whereas, with local rendering, the effects of ping can at least be partly glossed over, e.g. via client-side prediction). (video quality and framerate are currently also issues, but could improve over time as overall bandwidth improves).

to deliver a high-quality experience with point-to-point video, likely a ping time of around 10-20ms would be needed, which could then compete with the frame-rates of locally rendered video. at a 15ms ping, results would be "immediately" visible at a 30Hz frame-rate (one frame lasts about 33ms, so the round trip fits within a single frame, and it wouldn't be obviously different from being locally rendered).
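
the arithmetic behind that figure, for what it's worth (the ping values are just examples):

frame_period_ms = 1000.0 / 30.0   # ~33.3ms per frame at 30Hz
for ping_ms in (15, 50, 100, 400):
    print(ping_ms, "ms ping ->", round(ping_ms / frame_period_ms, 2), "frames of added delay")

# at 15ms the round trip fits inside a single 30Hz frame; at 400ms the
# player is seeing results roughly a dozen frames late.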


granted, this "could" change if people either manage to develop faster-than-light communication sooner than they manage better GPUs and/or higher-capacity battery technology, or become generally tolerant of the latencies involved.


granted, "hybrid" strategies could just as easily work:
a lot of "general visibility" is handled on the servers, and pushed down as video streams, with the actual rendering being done on the client (essentially streamed video-mapped textures).

by analogy, this would be sort of like if people could use YouTube videos as textures in a 3D scene.
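
a minimal sketch of what the client side of that might look like (assuming OpenCV for decoding the incoming stream; upload_texture() and the stream URL are hypothetical stand-ins for whatever GPU upload path and server endpoint would actually be used):

import cv2  # pip install opencv-python

def upload_texture(frame):
    # placeholder: a real renderer would copy 'frame' into a GPU texture
    # (glTexSubImage2D or similar) that is mapped onto scene geometry
    print("got %dx%d frame" % (frame.shape[1], frame.shape[0]))

cap = cv2.VideoCapture("http://example.invalid/visibility_stream")  # hypothetical URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    upload_texture(frame)
    # the client still rasterizes its own geometry each frame; only the
    # server-computed imagery arrives as streamed video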


or such...

