[fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-03 Thread BGB
(changed subject, as this was much more about physics simulation than 
about concurrency).


yes, this is a big long "personal history dump" type thing, please 
ignore if you don't care.



On 4/3/2012 10:47 AM, Miles Fidelman wrote:

David Barbour wrote:


Control flow is a source of much implicit state and accidental 
complexity.


A step processing approach at 20Hz isn't all bad, though, since at 
least you can understand the behavior of each frame in terms of the 
current graph of objects. The only problem with it is that this 
technique doesn't scale. There are easily up to 15 orders of 
magnitude in update frequency between slow-updating and fast-updating 
data structures. Object graphs are similarly heterogeneous in many 
other dimensions - trust and security policy, for example.


Hah.  You've obviously never been involved in building a CGF simulator 
(Computer Generated Forces) - absolute spaghetti code when you have to 
have 4 main loops, touch 2000 objects (say 2000 tanks) every 
simulation frame.  Comparatively trivial if each tank is modeled as a 
process or actor and you run asynchronously.




I have not encountered this term before, but does it have anything to do 
with an RBDE (Rigid Body Dynamics Engine), often called simply a 
"physics engine"?

this would be something like Havok or ODE or Bullet or similar.

I have written such an engine before, but my effort was single-threaded 
(using a fixed-frequency virtual timer, with time-step subdivision to 
deal with fast-moving objects).
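
roughly, the substep logic looks something like this (a minimal sketch; 
the names and the 0.25-radius tolerance are made up for illustration, not 
taken from my actual engine):

/* sketch: fixed-timestep integration with substep subdivision for
 * fast-moving objects; all names and constants are illustrative. */
#include <math.h>
#include <stdio.h>

typedef struct { float pos[3], vel[3], radius; } body_t;

static void body_step(body_t *b, float dt)
{
    float speed = sqrtf(b->vel[0]*b->vel[0] + b->vel[1]*b->vel[1] +
                        b->vel[2]*b->vel[2]);
    float max_move = 0.25f * b->radius;     /* max travel per substep */
    int n = (speed * dt > max_move) ? (int)ceilf(speed * dt / max_move) : 1;
    float sub = dt / (float)n;
    for (int i = 0; i < n; i++) {
        /* a real engine would run collision detection inside this loop */
        b->pos[0] += b->vel[0] * sub;
        b->pos[1] += b->vel[1] * sub;
        b->pos[2] += b->vel[2] * sub;
    }
}

int main(void)
{
    body_t bullet = { {0, 0, 0}, {900, 0, 0}, 0.05f };
    body_step(&bullet, 1.0f / 20.0f);       /* one 20Hz frame, many substeps */
    printf("x = %.2f\n", bullet.pos[0]);
    return 0;
}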


probably would turn a bit messy though if it had to be made internally 
multithreaded (it is bad enough just trying to deal with irregular 
timesteps, blarg...).


however, I originally considered potentially running it in a separate 
thread from the main 3D engine, but never really bothered, as there 
turned out not to be much point.



granted, one could likely still parallelize it while keeping everything 
frame-locked: have the threads essentially subdivide the scene-graph and 
each work on a certain part of the scene, doing the usual thing of all of 
them predicting/handling contacts within a single time step, then all 
updating positions in-sync and preparing for the next frame.


in the above scenario, the main cost would likely be figuring out how 
best to divide up the work efficiently among the threads (the usual 
strategy I use is work-queues, but I have doubts regarding their 
scalability).
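
to make the work-queue idea a bit more concrete, here is a rough sketch 
(pthreads; all names here are invented for illustration): worker threads 
pull scene regions off a shared atomic counter, and everyone joins before 
positions are applied, so the frame stays in lockstep:

/* sketch: threads pull scene regions off a shared queue; frame-locked,
 * since everyone joins before positions are applied. names are invented. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NUM_REGIONS  64
#define NUM_THREADS  4

static atomic_int next_region = 0;

static void simulate_region(int region)
{
    /* placeholder for predicting/handling contacts within one region */
    (void)region;
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        int r = atomic_fetch_add(&next_region, 1);
        if (r >= NUM_REGIONS) break;        /* queue drained */
        simulate_region(r);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);         /* barrier: frame stays in sync */
    /* ...now apply position updates for all regions, single-threaded... */
    printf("frame done\n");
    return 0;
}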


side note:
in my own experience, simply naively handling/updating all objects 
in-sequence doesn't tend to work out very well when mixed with things 
like contact forces (example: check if an object can make its move, if 
so, update its position, move on to the next object, ...). although, this 
does work reasonably well for "Quake-style" physics (where objects merely 
update positions linearly, and have no actual contact forces).


a better approach seems to be (a rough code sketch follows the list):
for all moving objects, predict where the object wants to be in the next 
frame;

determine which objects will collide with each other;
calculate contact forces and apply these to objects;
update movement predictions;
apply movement updates.
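
in code, a minimal sketch of that pipeline might look like the following 
(types and function structure are placeholders for illustration, not my 
engine's actual code):

/* sketch of the predict -> detect -> resolve -> apply pipeline.
 * all types and steps here are placeholders for illustration. */
typedef struct { float pos[3], vel[3], predicted[3]; } rbody_t;

void physics_frame(rbody_t *bodies, int n, float dt)
{
    /* 1. predict where each body wants to be next frame */
    for (int i = 0; i < n; i++)
        for (int k = 0; k < 3; k++)
            bodies[i].predicted[k] = bodies[i].pos[k] + bodies[i].vel[k] * dt;

    /* 2. find pairs whose predicted positions collide (stub)       */
    /* 3. compute contact forces and adjust velocities (stub)       */

    /* 4. re-predict with the corrected velocities */
    for (int i = 0; i < n; i++)
        for (int k = 0; k < 3; k++)
            bodies[i].predicted[k] = bodies[i].pos[k] + bodies[i].vel[k] * dt;

    /* 5. commit the movement */
    for (int i = 0; i < n; i++)
        for (int k = 0; k < 3; k++)
            bodies[i].pos[k] = bodies[i].predicted[k];
}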

however, interpenetration is still not avoided (sufficient forces will 
still essentially push objects into each other). theoretically, one can 
disallow interpenetration (by doing as Quake-style physics does and 
simply disallowing any post-contact updates which would result in 
subsequent interpenetration), but in my prior attempts to enable such a 
feature, the objects would often become "stuck" and seemingly entirely 
unable to move, and were in fact far more prone to violently explode (a 
pile of objects will seemingly become stuck together and immovable, maybe 
for several seconds, until ultimately all of them violently explode 
outward at high velocities).


allowing objects to interpenetrate was thus seen as the "lesser evil", 
since, even though objects were violating the basic assumption that 
"rigid bodies aren't allowed to exist in the same place at the same 
time", typically (assuming the collision-detection and force-calculation 
functions are working correctly, itself easier said than done), this 
will generally correct itself reasonably quickly (the contact forces 
will push the objects back apart, until they reach a sort of 
equilibrium), and with far less incidence of random "explosions".


sadly, the whole physics engine ended up a little "rubbery" as a result 
of all of this, but this seemed reasonable, as I have also observed 
similar behavior to some extent in Havok, and I figured out that I could 
deal with matters well enough by using a simpler (Quake-style) physics 
engine for most non-dynamic objects. IOW: for things using AABBs 
(Axis-Aligned Bounding Boxes) and similar, and other related "solid 
objects which can't undergo rotation", a very naive "check and update" 
strategy works fairly well, since these objects can only ever undergo 
translational movement.
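
a minimal sketch of that naive check-and-update step (hypothetical names; 
the collision query is assumed to exist elsewhere):

/* sketch of naive "check, then move" update for translate-only AABB
 * objects (Quake-style); names are placeholders. */
typedef struct { float mins[3], maxs[3], vel[3]; } aabb_ent_t;

/* assumed to exist elsewhere: returns nonzero if a box placed at the
 * proposed position overlaps world geometry or another solid object */
int world_box_blocked(const float mins[3], const float maxs[3]);

void aabb_ent_update(aabb_ent_t *e, float dt)
{
    float nmins[3], nmaxs[3];
    for (int k = 0; k < 3; k++) {
        nmins[k] = e->mins[k] + e->vel[k] * dt;
        nmaxs[k] = e->maxs[k] + e->vel[k] * dt;
    }
    if (!world_box_blocked(nmins, nmaxs)) {
        for (int k = 0; k < 3; k++) {
            e->mins[k] = nmins[k];      /* move accepted */
            e->maxs[k] = nmaxs[k];
        }
    } else {
        for (int k = 0; k < 3; k++)
            e->vel[k] = 0.0f;           /* simplest response: stop */
    }
}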


admittedly, I also never was able 

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-03 Thread Miles Fidelman

BGB wrote:


On 4/3/2012 10:47 AM, Miles Fidelman wrote:


Hah.  You've obviously never been involved in building a CGF 
simulator (Computer Generated Forces) - absolute spaghetti code when 
you have to have 4 main loops, touch 2000 objects (say 2000 tanks) 
every simulation frame.  Comparatively trivial if each tank is 
modeled as a process or actor and you run asynchronously.


I have not encountered this term before, but does it have anything to 
do with an RBDE (Rigid Body Dynamics Engine), or often called simply a 
"physics engine". this would be something like Havok or ODE or Bullet 
or similar.


There is some overlap, but only some - for example, when modeling 
objects in flight (e.g., a plane flying at constant velocity, or an 
artillery shell in flight) - but for the most part, the objects being 
modeled are active, and making decisions (e.g., a plane or tank, with a 
simulated pilot, and often with the option of putting a person-in-the-loop).


So it's really not possible to model these things purely from the outside 
(forces acting on objects); it's more a matter of modeling from the 
inside (running decision-making code for each object).


Miles

--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-03 Thread BGB

On 4/3/2012 9:29 PM, Miles Fidelman wrote:

BGB wrote:


On 4/3/2012 10:47 AM, Miles Fidelman wrote:


Hah.  You've obviously never been involved in building a CGF 
simulator (Computer Generated Forces) - absolute spaghetti code when 
you have to have 4 main loops, touch 2000 objects (say 2000 tanks) 
every simulation frame.  Comparatively trivial if each tank is 
modeled as a process or actor and you run asynchronously.


I have not encountered this term before, but does it have anything to 
do with an RBDE (Rigid Body Dynamics Engine), or often called simply 
a "physics engine". this would be something like Havok or ODE or 
Bullet or similar.


There is some overlap, but only some - for example, when modeling 
objects in flight (e.g., a plane flying at constant velocity, or an 
artillery shell in flight) - but for the most part, the objects being 
modeled are active, and making decisions (e.g., a plane or tank, with 
a simulated pilot, and often with the option of putting a 
person-in-the-loop).


So it's really impossible to model these things from the outside 
(forces acting on objects), but more from the inside (run 
decision-making code for each object).




fair enough...

but, yes, very often in cases where one is using a physics engine, this 
may be combined with the use of internal logic and forces as well, 
albeit admittedly there is a split:
technically, these forces are applied directly by whatever code is using 
the physics engine, rather than by the physics engine itself.


for example: just because it is a physics engine doesn't mean that it 
necessarily has to be "realistic", or that objects can't supply their 
own forces.


I guess, however, that this would be closer to the main "server end" in 
my case, namely the part that manages the entity system and NPC AIs and 
similar (and, also, the game logic is more FPS style).


still not heard the term CGF before though.


in this case, the basic timestep update is basically to loop over all 
the entities in the scene and call their "think" methods (things like AI 
and animation are generally handled via think methods and similar), and 
maybe do things like updating physics (if relevant), ...


this process is single threaded with a single loop though.

I guess it is arguably "event-driven" though:
handling timing is done via events ("think" being a special case);
most interactions between entities involve events as well;
...

many entities and AIs are themselves essentially finite-state-machines.
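
a rough sketch of such a think loop (the entity fields and the nextthink 
convention here are illustrative, not my engine's actual structures):

/* sketch of a single-threaded entity "think" loop; fields and the
 * nextthink convention are illustrative. */
typedef struct entity_s {
    float nextthink;                        /* absolute time of next think */
    void (*think)(struct entity_s *self);   /* AI / animation / logic hook */
    int   active;
} entity_t;

void server_frame(entity_t *ents, int n, float now)
{
    for (int i = 0; i < n; i++) {
        entity_t *e = &ents[i];
        if (!e->active || !e->think)
            continue;
        if (now >= e->nextthink)
            e->think(e);    /* the entity reschedules its own nextthink */
        /* ...per-entity physics update would go here if relevant... */
    }
}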


or such...




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread Miles Fidelman

BGB wrote:


still not heard the term CGF before though.
If you do military simulations, CGF (Computer Generated Forces) and SAF 
(Semi-Automated Forces) are the equivalent terms of art to "game 
engine."  Sort of.





--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread BGB

On 4/4/2012 6:35 AM, Miles Fidelman wrote:

BGB wrote:


still not heard the term CGF before though.
If you do military simulations, CGF (Computer Generated Forces) and 
SAF (Semi-Automated Forces) are the equivalent terms of art to "game 
engine."  Sort of.




"military simulations" as in RTS (Real Time Strategy) or similar, or 
something different?... (or, maybe even like realistic simulations used 
by actual military, rather than for purposes of gaming?...).


Wikipedia hasn't been very helpful here regarding a lot of this (it 
doesn't seem to know about most of these terms).


well, it does know about "game engines" and RTS though.


I guess maybe this confusion is sort of like the confusion over the use 
of the term "brush" for "a piece of static world geometry, typically 
defined as a convex polyhedron represented by a collection of bounding 
planes (but which may also potentially include bezier patches and mesh 
geometry)".


but, as-is, there is no clearly better term for this, so people have to 
live with it (it's not as if Quake / Source / Unreal / ... don't all use 
the same term anyway, grr...).



or such...



Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread Miles Fidelman

BGB wrote:

On 4/4/2012 6:35 AM, Miles Fidelman wrote:

BGB wrote:


still not heard the term CGF before though.
If you do military simulations, CGF (Computer Generated Forces) and 
SAF (Semi-Automated Forces) are the equivalent terms of art to "game 
engine."  Sort of.




"military simulations" as in RTS (Real Time Strategy) or similar, or 
something different?... (or, maybe even like realistic simulations 
used by actual military, rather than for purposes of gaming?...).


Well, there are really two types of simulations in use in the military 
(at least that I'm familiar with):


- very detailed engineering models of various sorts (ranging from device 
simulations to simulations of, say, a sea-skimming missile vs. a Gatling 
gun point-defense weapon).  (think MATLAB and SIMULINK type models)


- game-like simulations (which I'm more familiar with): but these are 
serious games, with lots of people and vehicles running around 
practicing techniques, or experimenting with new weapons and tactics, 
and so forth; or pilots training in team techniques by flying missions 
in a networked simulator (and saving jet fuel); or decision makers 
practicing in simulated command posts -- simulators take the form of 
both person-in-the-loop (e.g., flight sim. with a real pilot) and 
CGF/SAF (an enemy brigade is simulated, with information inserted into 
the simulation network so enemy forces show up on radar screens, 
heads-up displays, and so forth)


For more on the latter, start at:

http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation
http://www.sisostds.org/



Wikipedia hasn't been being very helpful here regarding a lot of this 
(it doesn't seem to know about most of these terms).


well, it does know about "game engines" and RTS though.


Maybe check out 
http://www.mak.com/products/simulate/computer-generated-forces.html for 
an example of a CGF.


Cheers,

Miles

--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread BGB

On 4/4/2012 9:29 AM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 6:35 AM, Miles Fidelman wrote:

BGB wrote:


still not heard the term CGF before though.
If you do military simulations, CGF (Computer Generated Forces) and 
SAF (Semi-Automated Forces) are the equivalent terms of art to "game 
engine."  Sort of.




"military simulations" as in RTS (Real Time Strategy) or similar, or 
something different?... (or, maybe even like realistic simulations 
used by actual military, rather than for purposes of gaming?...).


Well, there are really two types of simulations in use in the military 
(at least that I'm familiar with):


- very detailed engineering models of various sorts (ranging from 
device simulations to simulations of say, a sea-skimming missile vs. a 
gattling gun point-defense weapon).  (think MATLAB and SIMULINK type 
models)




don't know all that much about MATLAB or SIMULINK, but do know about 
things like FEM (Finite Element Method) and CFD (Computational Fluid 
Dynamics) and similar.


(I left out a bunch of stuff here, mostly about FEM, CFD, and particle 
systems in games technology, and about wondering how some of this stuff 
compares with its analogues as used in an engineering context).



- game-like simulations (which I'm more familiar with): but these are 
serious games, with lots of people and vehicles running around 
practicing techniques, or experimenting with new weapons and tactics, 
and so forth; or pilots training in team techniques by flying missions 
in a networked simulator (and saving jet fuel); or decision makers 
practicing in simulated command posts -- simulators take the form of 
both person-in-the-loop (e.g., flight sim. with a real pilot) and 
CGF/SAF (an enemy brigade is simulated, with information inserted into 
the simulation network so enemy forces show up on radar screens, 
heads-up displays, and so forth)


For more on the latter, start at:

http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation
http://www.sisostds.org/



so, sort of like: this stuff is to gaming what IBM mainframes are to PCs?...

I had mostly heard about military people doing all of this stuff using 
decommissioned vehicles and paintball and similar, but either way.


I guess game-like simulations are probably cheaper.





Wikipedia hasn't been being very helpful here regarding a lot of this 
(it doesn't seem to know about most of these terms).


well, it does know about "game engines" and RTS though.


Maybe check out 
http://www.mak.com/products/simulate/computer-generated-forces.html 
for an example of a CGF.




looked briefly, yes, ok.




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread Miles Fidelman

BGB wrote:

On 4/4/2012 9:29 AM, Miles Fidelman wrote:

- game-like simulations (which I'm more familiar with): but these are 
serious games, with lots of people and vehicles running around 
practicing techniques, or experimenting with new weapons and tactics, 
and so forth; or pilots training in team techniques by flying 
missions in a networked simulator (and saving jet fuel); or decision 
makers practicing in simulated command posts -- simulators take the 
form of both person-in-the-loop (e.g., flight sim. with a real pilot) 
and CGF/SAF (an enemy brigade is simulated, with information inserted 
into the simulation network so enemy forces show up on radar screens, 
heads-up displays, and so forth)


For more on the latter, start at:

http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation
http://www.sisostds.org/



so, sort of like: this stuff is to gaming what IBM mainframes are to 
PCs?...


Not so sure.  Probably similar levels of complexity between a military 
sim. and, say, World of Warcraft.  Fidelity to real-world behavior is 
more important, and network latency matters for the extreme real-time 
stuff (e.g., networked dogfights at Mach 2), but other than that, IP 
networks, gaming class PCs at the endpoints, serious graphics 
processors.  Also more of a need for interoperability - as there are 
lots of different simulations, plugged together into lots of different 
exercises and training scenarios - vs. a MMORPG controlled by a single 
company.


I had mostly heard about military people doing all of this stuff using 
decommissioned vehicles and paintball and similar, but either way.


I guess game-like simulations are probably cheaper.
In terms of jet fuel, travel costs, and other logistics, absolutely.  
But... when you figure in the huge dollars spent paying large systems 
integrators to write software, I'm not sure how much cheaper it all 
becomes.  (The big systems integrators are not known for brilliance of 
their coders, or efficiencies in their process -- not a lot of 20-hour 
days, by 20-somethings betting on their stock options.  A lot of good 
people, but older, slower, more likely to put family first; plus a lot 
of organizational overhead built into the prices.)




--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread BGB

On 4/4/2012 1:06 PM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 9:29 AM, Miles Fidelman wrote:

- game-like simulations (which I'm more familiar with): but these 
are serious games, with lots of people and vehicles running around 
practicing techniques, or experimenting with new weapons and 
tactics, and so forth; or pilots training in team techniques by 
flying missions in a networked simulator (and saving jet fuel); or 
decision makers practicing in simulated command posts -- simulators 
take the form of both person-in-the-loop (e.g., flight sim. with a 
real pilot) and CGF/SAF (an enemy brigade is simulated, with 
information inserted into the simulation network so enemy forces 
show up on radar screens, heads-up displays, and so forth)


For more on the latter, start at:

http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation
http://www.sisostds.org/



so, sort of like: this stuff is to gaming what IBM mainframes are to 
PCs?...


Not so sure.  Probably similar levels of complexity between a military 
sim. and, say, World of Warcraft.  Fidelity to real-world behavior is 
more important, and network latency matters for the extreme real-time 
stuff (e.g., networked dogfights at Mach 2), but other than that, IP 
networks, gaming class PCs at the endpoints, serious graphics 
processors.  Also more of a need for interoperability - as there are 
lots of different simulations, plugged together into lots of different 
exercises and training scenarios - vs. a MMORPG controlled by a single 
company.




ok, so basically a heterogeneous MMO.


reading some stuff (an overview for the DIS protocol, ...), it seems 
that the "level of abstraction" is in some ways a bit higher (than game 
protocols I am familiar with); for example, it will indicate the "entity 
type" in the protocol, rather than, say, the name of its 3D model.


however, it appears fairly low-level in some ways as well, using 
magic-numbers in place of, say, "entity type names", as well as 
apparently being generally byte-oriented.



in my case, my network protocol is currently based more on the use of 
specially-compressed lists / S-Expressions (with the compression and 
"message protocol" existing as separate and independent layers).


the lower-layer is concerned primarily with efficiently and compactly 
serializing the messages, but doesn't concern itself much with the 
contents of said messages. it is list-based, but theoretically, also 
supporting XML or JSON wouldn't likely be terribly difficult.


the upper-layer is mostly concerned with the message contents, and 
doesn't really care how or where they are transmitted.


I originally considered XML for the message protocol, but ended up 
opting for lists as they were both less effort and more efficient in my 
case. lists are easier to compose and process, generally require less 
memory, and natively support numeric types, ...



most entity fields are identified by mnemonics in the protocol (such as 
"org" for origin, "ang" for angles or "rot" for rotation). entity types 
are given both as type-names and also as names for 3D models/sprites/...


I personally generally dislike the use of magic numbers, and in most 
cases they are avoided. some magic numbers exist though, mostly in the 
case of things like "effects flags" and similar (for stuff like whether 
an entity glows, spins, ...).


however, this doesn't mean that any strings are endlessly re-sent, as 
the protocol will compress these (typically into single Huffman-coded 
values). note that recently encoded values may be reused.
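
one simple way to get that sort of reuse, sketched below (this is purely 
illustrative and not the actual wire format; the real protocol also 
Huffman-codes values, which is not shown): keep a table of recently sent 
strings and emit a small index when a string repeats:

/* sketch: recently-sent-string cache so field names and type names are
 * sent as small indices after first use. purely illustrative. */
#include <string.h>
#include <stdio.h>

#define CACHE_SIZE 256

static char cache[CACHE_SIZE][64];
static int  cache_used = 0;

/* returns an index >= 0 if the string was sent before (emit index only),
 * or -1 if it is new (emit the literal, and remember it for next time) */
int string_ref(const char *s)
{
    for (int i = 0; i < cache_used; i++)
        if (strcmp(cache[i], s) == 0)
            return i;
    if (cache_used < CACHE_SIZE) {
        strncpy(cache[cache_used], s, 63);
        cache[cache_used][63] = '\0';
        cache_used++;
    }
    return -1;
}

int main(void)
{
    printf("%d %d %d\n", string_ref("org"), string_ref("ang"),
           string_ref("org"));      /* prints "-1 -1 0": "org" repeats */
    return 0;
}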


beyond entities, things like geometry and light-sources can also be 
synchronized.


nothing obvious comes to mind for why it wouldn't scale; one would 
probably just split the world across multiple servers (by area) and have 
the clients hop between servers as needed (with some server-to-server 
communication).


probably, free-form client-to-client messages would also make sense, and 
maybe also the ability to broadcast messages more like in a chat-style 
system. this way, specialized clients could devise their own specialized 
messages.


(currently, I am not doing anything of the sort, mostly focusing more on 
small-scale network gaming).


...


(if by any chance anyone wants code or specs for any of this stuff, they 
can email me off-list...).



I had mostly heard about military people doing all of this stuff 
using decommissioned vehicles and paintball and similar, but either way.


I guess game-like simulations are probably cheaper.
In terms of jet fuel, travel costs, and other logistics, absolutely.  
But... when you figure in the huge dollars spent paying large systems 
integrators to write software, I'm not sure how much cheaper it all 
becomes.  (The big systems integrators are not known for brilliance of 
their coders, or efficiencies in their process -- not a lot of 20-hour 
days, by 20-somethings betting on their stock options.  A lot of good 
people, but older, slower, more likely to p

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread Miles Fidelman

BGB wrote:


Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to real-world 
behavior is more important, and network latency matters for the 
extreme real-time stuff (e.g., networked dogfights at Mach 2), but 
other than that, IP networks, gaming class PCs at the endpoints, 
serious graphics processors.  Also more of a need for 
interoperability - as there are lots of different simulations, 
plugged together into lots of different exercises and training 
scenarios - vs. a MMORPG controlled by a single company.





ok, so basically a heterogeneous MMO.

and distributed




reading some stuff (an overview for the DIS protocol, ...), it seems 
that the "level of abstraction" is in some ways a bit higher (than 
game protocols I am familiar with), for example, it will indicate the 
"entity type" in the protocol, rather than, say, the name of, its 3D 
model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image generation 
and position models maintained by dead reckoning) - what goes across the 
network are changes to its velocity vector, and weapon fire events.  
The intent is to minimize the amount of data that has to be sent across 
the net, and to maintain speed of image generation by doing rendering 
locally.




nothing obvious comes to mind for why it wouldn't scale, would 
probably just split the world across multiple servers (by area) and 
have the clients hop between servers as needed (with some 
server-to-server communication).




There's been a LOT of work over the years in the field of distributed 
simulation.  It's ALL about scaling, and most of the issues have to do 
with time-critical, cpu-intensive calculations.




--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-08 Thread BGB

On 4/4/2012 5:26 PM, Miles Fidelman wrote:

BGB wrote:


Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to real-world 
behavior is more important, and network latency matters for the 
extreme real-time stuff (e.g., networked dogfights at Mach 2), but 
other than that, IP networks, gaming class PCs at the endpoints, 
serious graphics processors.  Also more of a need for 
interoperability - as there are lots of different simulations, 
plugged together into lots of different exercises and training 
scenarios - vs. a MMORPG controlled by a single company.





ok, so basically a heterogeneous MMO.

and distributed



well, yes, but I am not entirely sure how many non-distributed 
(single-server) MMOs there are in the first place.


presumably, the world has to be split between multiple servers to deal 
with all of the users.


some older MMOs had "shards", where users on one server wouldn't be able 
to see what users on a different server were doing, but this is AFAIK 
generally not really considered acceptable in current MMOs (hence why 
the world would be divided up into "areas" or "regions" instead, 
presumably with some sort of load-balancing and similar).


unless, of course, this is operating under a different notion of what a 
distributed system is than one which allows a load-balanced 
client/server architecture.






reading some stuff (an overview for the DIS protocol, ...), it seems 
that the "level of abstraction" is in some ways a bit higher (than 
game protocols I am familiar with), for example, it will indicate the 
"entity type" in the protocol, rather than, say, the name of, its 3D 
model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image generation 
and position models maintained by dead reckoning) - what goes across 
the network are changes to it's velocity vector, and weapon fire 
events.  The intent is to minimize the amount of data that has to be 
sent across the net, and to maintain speed of image generation by 
doing rendering locally.




now, why, exactly, would anyone consider doing rendering on the server?...

presumably, the server would serve mostly as a sort of message relay 
(bouncing messages from one client to any nearby clients), and 
potentially also handle physics (typically split between the client and 
server in FPS games, where the main physics is done on the server, 
partly to help prevent cheating and similar, and with the server also 
running any monster/NPC AI).


although less expensive for the server, client-side physics has the 
drawback of making it harder to prevent hacks (such as moving really 
fast and/or teleporting), typically instead requiring the use of 
detection and banning strategies.


ironically, all this leads to more MMOs using client-side physics, and 
more FPS games using server-side physics, with an MMO generally having a 
much bigger problem regarding cheating than an FPS.


typically (in an FPS or similar), rendering is purely client-side, and 
usually most network events are extrapolated (based on origin and 
velocity and similar), to compensate for timing between the client and 
server (and the results of network ping-time and similar).


it is desirable for players and enemies to be in about the right spot, 
even with maybe 250-750 ms or more between the client and server (though 
many 3D engines will kick players if the ping time is more than 2000 or 
3000 ms).
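
a minimal sketch of the extrapolation itself (the 0.8-second clamp is an 
assumption for illustration, not a figure from my engine):

/* sketch of linear extrapolation of a networked entity from its last
 * known origin/velocity; clamp so a stale entity stops drifting. */
typedef struct {
    float org[3], vel[3];
    float last_update_time;         /* when org/vel were last received */
} net_ent_t;

#define MAX_EXTRAP_TIME 0.8f        /* assumed clamp, in seconds */

void net_ent_extrapolate(const net_ent_t *e, float now, float out[3])
{
    float dt = now - e->last_update_time;
    if (dt < 0.0f)            dt = 0.0f;
    if (dt > MAX_EXTRAP_TIME) dt = MAX_EXTRAP_TIME;
    for (int k = 0; k < 3; k++)
        out[k] = e->org[k] + e->vel[k] * dt;
}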



in my own 3D engine, it is partially split, currently with player 
movement physics being split between the client and server, and most 
other physics being server-side.


there is currently no physics involved in the entity extrapolation, 
although doing more work here could be helpful (mostly to avoid 
extrapolation occasionally putting things into walls or similar).



sadly, even single-player, it can still be a bit of an issue dealing with 
the client and server updating at different frequencies (say, the 
"server" runs internally at 10Hz, and the "client" runs at 30Hz - 60Hz), 
so extrapolating the position is still necessary (camera movements at 
10Hz are not exactly pleasant).


so, this leaves allowing the client-side camera to partly move 
independently of the "player" as known on the server, using 
interpolation trickery to reconcile the client and server versions of 
the player's position, and occasionally using flags to deal with things 
like teleporters and similar (the player will be teleported on the 
server, which will send a flag to be like "you are here and looking this 
direction").



but, I meant "model" in this case more in the sense of the server sends 
a message more like, say:

(delta 492
(classname "npc_plane_fa18")
(org 6714 4932 5184)
(ang ...)
(vel ...)
...)

rather than, say, something like:
(delta 492
  (model "model/plane/fa18/fa18.lwo")
  (org 6714 

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-08 Thread Miles Fidelman

BGB wrote:

On 4/4/2012 5:26 PM, Miles Fidelman wrote:

BGB wrote:
Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to real-world 
behavior is more important, and network latency matters for the 
extreme real-time stuff (e.g., networked dogfights at Mach 2), but 
other than that, IP networks, gaming class PCs at the endpoints, 
serious graphics processors.  Also more of a need for 
interoperability - as there are lots of different simulations, 
plugged together into lots of different exercises and training 
scenarios - vs. a MMORPG controlled by a single company.




ok, so basically a heterogeneous MMO. and distributed




well, yes, but I am not entirely sure how many non-distributed (single 
server) MMO's there are in the first place.


presumably, the world has to be split between multiple servers to deal 
with all of the users.


some older MMOs had "shards", where users on one server wouldn't be 
able to see what users on a different server were doing, but this is 
AFAIK generally not really considered acceptable in current MMOs 
(hence why the world would be divided up into "areas" or "regions" 
instead, presumably with some sort of load-balancing and similar).


unless of course, this is operating under a different assumption of 
what a distributed-system is than one which allows a load-balanced 
client/server architecture.


Running on a cluster is very different from having all the intelligence 
on the individual clients.  As far as I can tell, MMOs by and large run 
most of the simulation on centralized clusters (or at least within the 
vendor's cloud).  Military sims do EVERYTHING on the clients - there are 
no central machines, just the information distribution protocol layer.





reading some stuff (an overview for the DIS protocol, ...), it seems 
that the "level of abstraction" is in some ways a bit higher (than 
game protocols I am familiar with), for example, it will indicate 
the "entity type" in the protocol, rather than, say, the name of, 
its 3D model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image 
generation and position models maintained by dead reckoning) - what 
goes across the network are changes to it's velocity vector, and 
weapon fire events.  The intent is to minimize the amount of data 
that has to be sent across the net, and to maintain speed of image 
generation by doing rendering locally.




now, why, exactly, would anyone consider doing rendering on the 
server?...


Well, render might be the wrong term here.  Think more about map 
tiling.  When you do map applications, the GIS server sends out map 
tiles.  Similarly, at least some MMOs do most of the scene generation 
centrally.  For that matter, think about moving around Google Earth in 
image mode - the data is still coming from Google servers.


The military simulators come from a legacy of flight simulators - VERY 
high resolution imagery, very fast movement.  Before the simulation 
starts, terrain data and imagery are distributed in advance - every 
simulator has all the data needed to generate an out-the-window view, 
and to do terrain calculations (e.g., line-of-sight) locally.


ironically, all this leads to more MMOs using client-side physics, and 
more FPS games using server-side physics, with an MMO generally having 
a much bigger problem regarding cheating than an FPS.


For the military stuff, it all comes down to compute load and network 
bandwidth/latency considerations - you simply can't move enough data 
around, quickly enough to support high-res. out-the-window imagery for a 
pilot pulling a 2g turn.  Hence you have to do all that locally.  
Cheating is less of an issue, since these are generally highly managed 
scenarios conducted as training exercises.  What's more of an issue is 
if the software in one sim. draws different conclusions than the 
software in another sim. (e.g., two planes in a dogfight, each 
concluding that it shot down the other one) - that's usually the result 
of a design bug rather than cheating (though Capt. Kirk's "I don't 
believe in the no win scenario" line comes to mind).




There's been a LOT of work over the years, in the field of 
distributed simulation.  It's ALL about scaling, and most of the 
issues have to do with time-critical, cpu-intensive calcuations.




possibly, but I meant in terms of the scalability of using 
load-balanced servers (divided by area) and server-to-server message 
passing.


Nope.  Network latencies and bandwidth are the issue.  Just a little bit 
of jitter in the timing and pilots tend to hurl all over the 
simulators.  We're talking about repainting a high-res. display 20 to 40 
times per second - you've got to drive that locally.




--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra



Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-09 Thread BGB

On 4/8/2012 8:26 PM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 5:26 PM, Miles Fidelman wrote:

BGB wrote:
Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to real-world 
behavior is more important, and network latency matters for the 
extreme real-time stuff (e.g., networked dogfights at Mach 2), but 
other than that, IP networks, gaming class PCs at the endpoints, 
serious graphics processors.  Also more of a need for 
interoperability - as there are lots of different simulations, 
plugged together into lots of different exercises and training 
scenarios - vs. a MMORPG controlled by a single company.




ok, so basically a heterogeneous MMO. and distributed




well, yes, but I am not entirely sure how many non-distributed 
(single server) MMO's there are in the first place.


presumably, the world has to be split between multiple servers to 
deal with all of the users.


some older MMOs had "shards", where users on one server wouldn't be 
able to see what users on a different server were doing, but this is 
AFAIK generally not really considered acceptable in current MMOs 
(hence why the world would be divided up into "areas" or "regions" 
instead, presumably with some sort of load-balancing and similar).


unless of course, this is operating under a different assumption of 
what a distributed-system is than one which allows a load-balanced 
client/server architecture.


Running on a cluster is very different between having all the 
intelligence on the individual clients.  As far as I can tell, MMOs by 
and large run most of the simulation on centralized clusters (or at 
least within the vendor's cloud).  Military sims do EVERYTHING on the 
clients - there are no central machines, just the information 
distribution protocol layer.


yes, but there are probably drawbacks to this, performance-wise and 
reliability-wise.


not that all of the servers need to be run in a single location or be 
owned by a single company, but there are some general advantages to the 
client/server model.





reading some stuff (an overview for the DIS protocol, ...), it 
seems that the "level of abstraction" is in some ways a bit higher 
(than game protocols I am familiar with), for example, it will 
indicate the "entity type" in the protocol, rather than, say, the 
name of, its 3D model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image 
generation and position models maintained by dead reckoning) - what 
goes across the network are changes to it's velocity vector, and 
weapon fire events.  The intent is to minimize the amount of data 
that has to be sent across the net, and to maintain speed of image 
generation by doing rendering locally.




now, why, exactly, would anyone consider doing rendering on the 
server?...


Well, render might be the wrong term here.  Think more about map 
tiling.  When you do map applications, the GIS server sends out map 
tiles.  Similarly, at least some MMOs do most of the scene generation 
centrally.  For that matter, think about moving around Google Earth in 
image mode - the data is still coming from Google servers.


The military simulators come from a legacy of flight simulators - VERY 
high resolution imagery, very fast movement.  Before the simulation 
starts, terrain data and imagery are distributed in advance - every 
simulator has all the data needed to generate an out-the-window view, 
and to do terrain calculations (e.g., line-of-sight) locally.




ok, so sending polygons and images over the net.

so, by "very", is the implication that they are sending large numbers of 
1024x1024 or 4096x4096 texture-maps/tiles or similar?...


typically, I do most texture art at 256x256 or 512x512.

but, anyways, presumably JPEG or similar could probably make it work.


ironically, all this leads to more MMOs using client-side physics, 
and more FPS games using server-side physics, with an MMO generally 
having a much bigger problem regarding cheating than an FPS.


For the military stuff, it all comes down to compute load and network 
bandwidth/latency considerations - you simply can't move enough data 
around, quickly enough to support high-res. out-the-window imagery for 
a pilot pulling a 2g turn.  Hence you have to do all that locally.  
Cheating is less of an issue, since these are generally highly managed 
scenarios conducted as training exercises.  What's more of an issue is 
if the software in one sim. draws different conclusions than the 
software in an other sim. (e.g., two planes in a dogfight, each 
concluding that it shot down the other one) - that's usually the 
result of a design bug rather than cheating (though Capt. Kirk's "I 
don't believe in the no win scenario" line comes to mind).




this is why most modern games use client/server.

some older games (such as Doom-based games) determined things like AI 
behaviors and damage o

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-09 Thread Miles Fidelman

BGB wrote:

On 4/8/2012 8:26 PM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 5:26 PM, Miles Fidelman wrote:

BGB wrote:
Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to 
real-world behavior is more important, and network latency 
matters for the extreme real-time stuff (e.g., networked 
dogfights at Mach 2), but other than that, IP networks, gaming 
class PCs at the endpoints, serious graphics processors.  Also 
more of a need for interoperability - as there are lots of 
different simulations, plugged together into lots of different 
exercises and training scenarios - vs. a MMORPG controlled by a 
single company.




ok, so basically a heterogeneous MMO. and distributed




well, yes, but I am not entirely sure how many non-distributed 
(single server) MMO's there are in the first place.


presumably, the world has to be split between multiple servers to 
deal with all of the users.


some older MMOs had "shards", where users on one server wouldn't be 
able to see what users on a different server were doing, but this is 
AFAIK generally not really considered acceptable in current MMOs 
(hence why the world would be divided up into "areas" or "regions" 
instead, presumably with some sort of load-balancing and similar).


unless of course, this is operating under a different assumption of 
what a distributed-system is than one which allows a load-balanced 
client/server architecture.


Running on a cluster is very different between having all the 
intelligence on the individual clients.  As far as I can tell, MMOs 
by and large run most of the simulation on centralized clusters (or 
at least within the vendor's cloud).  Military sims do EVERYTHING on 
the clients - there are no central machines, just the information 
distribution protocol layer.


yes, but there are probably drawbacks with this performance-wise and 
reliability wise.


not that all of the servers need to be run in a single location or be 
owned by a single company, but there are some general advantages to 
the client/server model.

All I can say to this is that:

- this is how it's been done for years, all the way back to the original 
SIMNET project (the birthplace of distributed sims)


- pretty much all of this architecture was driven by the requirements to 
feed data at high speed to high-res. image generators (notably Evans and 
Sutherland stuff as derived from the flight sim. world)


- all the protocols for linking together, and interoperation among 
simulators are based on this model


- there's a lot of history and engineering experience, and billions of 
dollars of investment in this approach


- it ain't changing anytime soon



reading some stuff (an overview for the DIS protocol, ...), it 
seems that the "level of abstraction" is in some ways a bit higher 
(than game protocols I am familiar with), for example, it will 
indicate the "entity type" in the protocol, rather than, say, the 
name of, its 3D model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image 
generation and position models maintained by dead reckoning) - what 
goes across the network are changes to it's velocity vector, and 
weapon fire events.  The intent is to minimize the amount of data 
that has to be sent across the net, and to maintain speed of image 
generation by doing rendering locally.




now, why, exactly, would anyone consider doing rendering on the 
server?...


Well, render might be the wrong term here.  Think more about map 
tiling.  When you do map applications, the GIS server sends out map 
tiles.  Similarly, at least some MMOs do most of the scene generation 
centrally.  For that matter, think about moving around Google Earth 
in image mode - the data is still coming from Google servers.


The military simulators come from a legacy of flight simulators - 
VERY high resolution imagery, very fast movement.  Before the 
simulation starts, terrain data and imagery are distributed in 
advance - every simulator has all the data needed to generate an 
out-the-window view, and to do terrain calculations (e.g., 
line-of-sight) locally.




ok, so sending polygons and images over the net.

so, by "very", is the implication that they are sending large numbers 
of 1024x1024 or 4096x4096 texture-maps/tiles or similar?...


more - we're not talking a high-def. TV here, we're talking about 
painting a hemisphere providing a realistic out-the-cockpit view from a 
fighter - that's a lot of pixels being updated every second

ironically, all this leads to more MMOs using client-side physics, 
and more FPS games using server-side physics, with an MMO generally 
having a much bigger problem regarding cheating than an FPS.


For the military stuff, it all comes down to compute load and network 
bandwidth/latency considerations - you simply can't move enough data 
around, quickly enough to support high-re

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-09 Thread David Barbour
On Mon, Apr 9, 2012 at 8:25 AM, BGB  wrote:

>
>> Running on a cluster is very different between having all the
>> intelligence on the individual clients.  As far as I can tell, MMOs by and
>> large run most of the simulation on centralized clusters (or at least
>> within the vendor's cloud).  Military sims do EVERYTHING on the clients -
>> there are no central machines, just the information distribution protocol
>> layer.
>>
>
> yes, but there are probably drawbacks with this performance-wise and
> reliability wise.
>

There are some security and performance drawbacks. It would be easy to
`cheat` any of the simulation protocols used by military sims. But there
isn't much motive to do so; it isn't as though you win virtual items to
sell on e-bay. Some computations are many times redundant. But it's good
enough, and the extensibility and interop advantages are worth more than
efficiency would be.


>
> now, why, exactly, would anyone consider doing rendering on the server?...
>

Ask the developers of Second Life ;).

They basically `stream` polygons and textures to the player, continuously,
improving the resolution of your view if you aren't moving too quickly.
Unfortunately, you have this continuous experience of it always being
somewhat awful during normal movement. (In general, that's what eventual
consistency is like, too.)


>
> ironically, all this leads to more MMOs using client-side physics, and
> more FPS games using server-side physics, with an MMO generally having a
> much bigger problem regarding cheating than an FPS.
>

If you ensure deterministic physics, it would be a lot easier to
transparently spot-check players for cheating. But I agree it is a very
difficult problem, unless you can control the player's hardware.


>
>> though Capt. Kirk's "I don't believe in the no win scenario" line comes
>> to mind
>>
>
Same here. ;)


> it is not clear that client-to-client would lead to necessarily all that
> much better handling of latency either, for that matter.
>

Client-to-client usually does improve latency since you skip an
intermediate communication step. There are exceptions to prove the rule,
though - e.g. if you have control over routing or can put servers between
the clients.

Regards,

Dave

-- 
bringing s-words to a pen fight


Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-09 Thread Miles Fidelman

David Barbour wrote:




it is not clear that client-to-client would lead to necessarily
all that much better handling of latency either, for that matter.


Client-to-client usually does improve latency since you skip an 
intermediate communication step. There are exceptions to prove the 
rule, though - e.g. if you have control over routing or can put 
servers between the clients.


Pretty much well established that the only thing that works fast enough 
for the sims I've worked with is IP multicast.  Strip all the latencies 
to the bone.


Miles




--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-09 Thread BGB

On 4/9/2012 10:53 AM, David Barbour wrote:



On Mon, Apr 9, 2012 at 8:25 AM, BGB wrote:



Running on a cluster is very different between having all the
intelligence on the individual clients.  As far as I can tell,
MMOs by and large run most of the simulation on centralized
clusters (or at least within the vendor's cloud).  Military
sims do EVERYTHING on the clients - there are no central
machines, just the information distribution protocol layer.


yes, but there are probably drawbacks with this performance-wise
and reliability wise.


There are some security and performance drawbacks. It would be easy to 
`cheat` any of the simulation protocols used by military sims. But 
there isn't much motive to do so; it isn't as though you win virtual 
items to sell on e-bay. Some computations are many times redundant. 
But it's good enough, and the extensibility and interop advantage are 
worth more than efficiency would be.


yeah, probably fair enough.

though, it could be like the "Instant Messaging" network, which allows 
to some extent for heterogeneous protocols (typically by bridging 
between the networks and protocols).





now, why, exactly, would anyone consider doing rendering on the
server?...


Ask the developers of Second Life ;).

They basically `stream` polygons and textures to the player, 
continuously, improving the resolution of your view if you aren't 
moving too quickly. Unfortunately, you have this continuous experience 
of it always being somewhat awful during normal movement. (In general, 
that's what eventual consistency is like, too.)


actually, to some extent, I was also considering the possibility of 
something like this, but I don't generally consider this "rendering" so 
much as "streaming".


a very naive strategy would be, say, doing it like HTTP and using 
ping/pong requests to grab things as they come into view.


better latency-wise is likely to use more of a "push-down" strategy, 
where the server would speculate what the client can potentially see and 
push down the relevant geometry.


in my case though, typically geometry is sent in terms of whole brushes 
or mesh objects, rather than individual polygons.


presumably, client-side caching could also be done...


functionally, this already exists to some extent in the form of the 
real-time mapping capabilities (which is currently handled by the client 
pushing the updated geometry back to the server).



maybe textures could be sent 2-stage, with the first stage maybe sending 
textures at 1/4 or 1/8 resolution, and then sending the full resolution 
texture later.


say, first the client receives a 64x64 or 128x128 texture, and later 
gets the 256x256 or 512x512 version (probably with a mechanism to avoid 
re-sending textures the player already has).


like, unlike on a web-page, the user doesn't need to endlessly 
re-download a generic grass or concrete texture...
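
a rough sketch of the 2-stage idea with a client-side cache (structure 
names, message levels, and sizes here are invented for illustration):

/* sketch of two-stage texture streaming with a client-side cache keyed
 * by texture name; all structures and level names are invented here. */
#include <string.h>

enum { TEX_NONE = 0, TEX_LOWRES = 1, TEX_FULL = 2 };

typedef struct {
    char name[64];
    int  level;                 /* highest resolution received so far */
} tex_cache_ent_t;

#define TEX_CACHE_MAX 1024
static tex_cache_ent_t tex_cache[TEX_CACHE_MAX];
static int tex_cache_count = 0;

/* decide what (if anything) to request for a texture coming into view:
 * TEX_LOWRES first, TEX_FULL once the preview is in, TEX_NONE if the
 * full version is already cached (no endless re-downloading). */
int tex_request_level(const char *name)
{
    for (int i = 0; i < tex_cache_count; i++)
        if (strcmp(tex_cache[i].name, name) == 0)
            return (tex_cache[i].level >= TEX_FULL) ? TEX_NONE : TEX_FULL;
    if (tex_cache_count < TEX_CACHE_MAX) {
        strncpy(tex_cache[tex_cache_count].name, name, 63);
        tex_cache[tex_cache_count].name[63] = '\0';
        tex_cache[tex_cache_count].level = TEX_NONE;
        tex_cache_count++;
    }
    return TEX_LOWRES;
}

/* record that a given resolution level has now arrived for a texture */
void tex_mark_received(const char *name, int level)
{
    for (int i = 0; i < tex_cache_count; i++)
        if (strcmp(tex_cache[i].name, name) == 0 && tex_cache[i].level < level)
            tex_cache[i].level = level;
}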





ironically, all this leads to more MMOs using client-side physics,
and more FPS games using server-side physics, with an MMO
generally having a much bigger problem regarding cheating than an FPS.


If you ensure deterministic physics, it would be a lot easier to 
transparently spot-check players for cheating. But I agree it is a 
very difficult problem, unless you can control the player's hardware.


likely unworkable in practice.
more practical could be to perform "sanity checks", which will fail if 
something happens which couldn't reasonably occur.


better though, reliability-wise, could be to leave the server in control 
of most things where players would likely want to cheat:

general movement;
dealing damage;
keeping track of inventory and stats;
...

this way, tampering on the client end is only likely to impact the 
client and make things buggy/annoying, but doesn't actually compromise 
world integrity.
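
a trivial example of such a sanity check on the server side (the speed 
cap and slack factor here are made up for illustration):

/* sketch: server-side movement sanity check; reject a client-reported
 * move that covers more ground than the speed cap allows for the tick.
 * the limit and slack factor are invented for illustration. */
#include <math.h>

#define MAX_PLAYER_SPEED 320.0f     /* units per second (assumed)    */
#define SLACK            1.25f      /* tolerance for jitter/latency  */

int move_is_plausible(const float from[3], const float to[3], float dt)
{
    float dx = to[0] - from[0], dy = to[1] - from[1], dz = to[2] - from[2];
    float dist = sqrtf(dx*dx + dy*dy + dz*dz);
    return dist <= MAX_PLAYER_SPEED * dt * SLACK;
}

/* usage: if (!move_is_plausible(old_org, new_org, tick_dt))
 *            clamp the move or snap the client back to old_org. */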





though Capt. Kirk's "I don't believe in the no win scenario"
line comes to mind


Same here. ;)


it is not clear that client-to-client would lead to necessarily
all that much better handling of latency either, for that matter.


Client-to-client usually does improve latency since you skip an 
intermediate communication step. There are exceptions to prove the 
rule, though - e.g. if you have control over routing or can put 
servers between the clients.


fair enough. the bigger issue then is likely working around NAT though, 
since typical broadband routers only work well for outgoing connections, 
but are poorly behaved for incoming connections.



I guess the main question is whether one is measuring strict 
client-to-client latency, or client-to-world latency.


client-to-client latency would mean how long it takes, after one client 
performs an action, for another client to see the result of this action, 
which would presumably be at least: Ping1 + Ping2 + ~1/2 tick.


with a direct connection, it is likely to be a single ping m

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-11 Thread Josh Gargus

On Apr 8, 2012, at 7:31 PM, BGB wrote:

> now, why, exactly, would anyone consider doing rendering on the server?...


One reason might be to amortize the cost of  global illumination calculations.  
Since much of the computation is view-independent, a Really Big Server could 
compute this once per frame and use the results to render a frame from the 
viewpoint of each connected client.  Then, encode it with H.264 and send it 
downstream.  The total number of watts used could be much smaller, and the 
software architecture could be much simpler.

I suspect that this is what OnLive is aiming for... supporting existing 
PC/console games is an interim step as they try to boot-strap a platform with 
enough users to encourage game developers to make this leap.

Cheers,
Josh


Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-12 Thread BGB

On 4/11/2012 11:14 PM, Josh Gargus wrote:

On Apr 8, 2012, at 7:31 PM, BGB wrote:


now, why, exactly, would anyone consider doing rendering on the server?...


One reason might be to amortize the cost of  global illumination calculations.  
Since much of the computation is view-independent, a Really Big Server could 
compute this once per frame and use the results to render a frame from the 
viewpoint of each connected client.  Then, encode it with H.264 and send it 
downstream.  The total number of watts used could be much smaller, and the 
software architecture could be much simpler.

I suspect that this is what OnLive is aiming for... supporting existing 
PC/console games is an interim step as they try to boot-strap a platform with 
enough users to encourage game developers to make this leap.


but, the bandwidth and latency requirements would be terrible...

nevermind that currently, AFAIK, no HW exists which can do full-scene 
global-illumination in real-time (at least using radiosity or similar), 
much less handle this *and* do all of the 3D rendering for a potentially 
arbitrarily large number of connected clients.


another problem is that there isn't much in the rendering process which 
can be aggregated between clients which isn't already done (between 
frames, or ahead-of-time) in current games.


in effect, the rendering costs at the datacenter are likely to scale 
linearly with the number of connected clients, rather than at some 
shallower curve.



much better I think is just following the current route:
getting client PCs to have much better HW, so that they can do their own 
localized lighting calculations (direct illumination can already be done 
in real-time, and global illumination can be done small-scale in real-time).


the cost at the datacenters is also likely to be much lower, since they 
need much less powerful servers, and have to spend much less money on 
electricity and bandwidth.


likewise, the total power used tends to be fairly insignificant for an 
end user (except when operating on batteries), since PC power-use 
requirements are small compared with, say, air-conditioners or 
refrigerators, whereas people running data-centers have to deal with the 
full brunt of the power-bill.


the power-use issue (for mobile devices) could, just as easily, be 
solved by some sort of much higher-capacity battery technology (say, a 
laptop or cell-phone battery which, somehow, had a capacity well into 
the kWh range...).
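
(for a rough sense of scale: a typical laptop battery today stores 
somewhere around 50-100 Wh, so a battery in the kWh range would mean 
roughly a 10-20x jump in capacity.)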


at this point, people won't really care much if, say, plugging in their 
cell-phone to recharge is drawing, say, several amps, given power is 
relatively cheap in the greater scheme of things (and, assuming 
migration away from fossil fuels, could likely still get considerably 
cheaper over time).


meanwhile, no obvious current/near-term technology is likely to make 
internet bandwidth considerably cheaper, or latency significantly lower, ...


even with fairly direct fiber-optic connections, long distance ping 
times are still likely to be an issue, and it is much harder to LERP 
video, so short of putting the servers in a geographically nearby 
location (like, in the same city as the user), or somehow bypassing the 
speed of light, it isn't all that likely that ping times will (in 
general) drop much below about 50-100ms (with a world average likely 
closer to about 400ms).
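
for a rough sense of the physical floor (ballpark numbers): light in 
optical fiber travels at roughly 200,000 km/s (refractive index around 
1.5), so a path of about 4,000 km (roughly coast-to-coast in the US) 
costs about 20ms one-way, or about 40ms round-trip, before any routing, 
queuing, or encoding delays are added on top.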


this would lead to a generally unsatisfying gaming experience, as there 
would be an obvious delay between attempting an action and the results 
of this action becoming visible (whereas, with local rendering, the 
effects of ping times can at least be partly glossed over). (video 
quality and framerate are currently also issues, but could improve over 
time as overall bandwidth improves).


to deliver a high quality experience with point-to-point video, likely a 
ping time of around 10-20ms would be needed, which could then compete 
with the frame-rates of locally rendered video. at a 15ms ping, results 
would then be "immediately" visible with a 30Hz frame-rate (it 
wouldn't be obviously different from being locally rendered).
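
put another way (rough arithmetic): at 30Hz each frame lasts about 33ms, 
so a ~15ms ping fits comfortably within a single frame interval, which is 
why it would be hard to tell apart from locally rendered output.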



granted, this "could" change if people either manage to develop 
faster-than-light communication faster than they manage better GPUs 
and/or higher-capacity battery technology, or people become generally 
tolerant of the latencies involved.



granted, "hybrid" strategies could just as easily work:
a lot of "general visibility" is handled on the servers, and pushed down 
as video streams, with the actual rendering being done on the client 
(essentially streamed video-mapped textures).


by analogy, this would be sort of like if people could use YouTube 
videos as textures in a 3D scene.
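
as a very rough sketch of the "streamed video-mapped textures" idea on 
the client side (hypothetical helper names; assumes some codec has 
already decoded the incoming stream into an RGB frame), it might look 
something like:

/* hypothetical sketch: stream decoded video frames into an OpenGL
   texture each frame, then draw that texture on in-world geometry
   the same way as any other texture. */
#include <GL/gl.h>

static GLuint video_tex;

void video_tex_init(int width, int height)
{
    glGenTextures(1, &video_tex);
    glBindTexture(GL_TEXTURE_2D, video_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* allocate storage once; frames are uploaded into it below */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);
}

void video_tex_update(const unsigned char *rgb, int width, int height)
{
    /* 'rgb' is the latest decoded frame from the video stream */
    glBindTexture(GL_TEXTURE_2D, video_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGB, GL_UNSIGNED_BYTE, rgb);
}

the expensive parts (decoding each stream, and keeping the uploads off 
the critical path) are where the practical trouble would be, but the 
texturing side itself is nothing exotic.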



or such...



Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-19 Thread Josh Gargus

On Apr 12, 2012, at 5:12 PM, BGB wrote:

> On 4/11/2012 11:14 PM, Josh Gargus wrote:
>> On Apr 8, 2012, at 7:31 PM, BGB wrote:
>> 
>>> now, why, exactly, would anyone consider doing rendering on the server?...
>> 
>> One reason might be to amortize the cost of  global illumination 
>> calculations.  Since much of the computation is view-independent, a Really 
>> Big Server could compute this once per frame and use the results to render a 
>> frame from the viewpoint of each connected client.  Then, encode it with 
>> H.264 and send it downstream.  The total number of watts used could be much 
>> smaller, and the software architecture could be much simpler.
>> 
>> I suspect that this is what OnLive is aiming for... supporting existing 
>> PC/console games is an interim step as they try to boot-strap a platform 
>> with enough users to encourage game developers to make this leap.
> 
> but, the bandwidth and latency requirements would be terrible...

What do you mean by terrible?  1MB/s is quite good quality video.  Depending on 
the type of game, up to 100ms of latency is OK.
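
(For scale, with rough numbers: 1 MB/s is about 8 Mbit/s per client, so 
something like 10,000 concurrent players would mean on the order of 80 
Gbit/s of video egress from the datacenter, plus one encode per stream.)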


> 
> nevermind that currently, AFAIK, no HW exists which can do full-scene 
> global-illumination in real-time (at least using radiosity or similar),

You somewhat contradict yourself below, when you argue that clients can already 
do small-scale real-time global illumination (no fair to argue that it's not 
computationally tractable on the server, but it can already be done on the 
client).

Also, Nvidia could churn out such hardware in one product cycle, if it saw a 
market for it.  Contrast this to the uncertainty of how long we'll have to wait 
for the hypothetical battery breakthrough that you mention below.


> much less handle this *and* do all of the 3D rendering for a potentially 
> arbitrarily large number of connected clients.

Just to be clear, I've been making an implicit assumption about these 
hypothetical ultra-realistic game worlds: that the number of FLOPs spent on 
physics/GI would be 1-2 orders of magnitude greater than the FLOPs to render 
the scene from a particular viewpoint.  If this is true, then it's not so 
expensive to render each additional client.  If it's false, then everything I'm 
saying is nonsense.
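
To make that assumption concrete (a rough cost model, with made-up symbols): 
if the shared physics/GI pass costs G FLOPs per frame and each per-client view 
costs R FLOPs, with G ~ 100*R, then serving N clients costs roughly 
G + N*R = R*(100 + N), i.e. about R*(100/N + 1) per client, which approaches R 
as N grows.  That's the amortization being assumed; if G is comparable to R, 
the advantage disappears.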


> another problem is that there isn't much in the rendering process which can 
> be aggregated between clients which isn't already done (between frames, or 
> ahead-of-time) in current games.

I'm explicitly not talking about current games.


> 
> in effect, the rendering costs at the datacenter are likely to scale linearly 
> with the number of connected clients, rather than at some shallower curve.

Asymptotically, yes it would be linear, except for the big chunk of 
global-illumination / physics simulation that could be amortized.  And the 
higher you push the fidelity of the rendering, the bigger this amortizable 
chunk becomes.


> 
> much better I think is just following the current route:
> getting client PCs to have much better HW, so that they can do their own 
> localized lighting calculations (direct illumination can already be done in 
> real-time, and global illumination can be done small-scale in real-time).

I understand, that's what you think :-)


> 
> the cost at the datacenters is also likely to be much lower, since they need 
> much less powerful servers, and have to spend much less money on electricity 
> and bandwidth.

Money spent on electricity and bandwidth is irrelevant, as long as there is a 
business model that generates revenue that grows (at least) linearly with 
resource usage.  I'm speculating that such a business model might be possible.


> 
> likewise, the total watts used tends to be fairly insignificant for an end 
> user (except when operating on batteries), since PC power-use requirements 
> are small vs, say, air-conditioners or refrigerators, whereas people running 
> data-centers have to deal with the full brunt of the power-bill.

See above.


> 
> the power-use issue (for mobile devices) could, just as easily, be solved by 
> some sort of much higher-capacity battery technology (say, a laptop or 
> cell-phone battery which, somehow, had a capacity well into the kVA range...).

It would have to be a huge breakthrough.  Desktop GPUs are still (at least) an 
order of magnitude too slow for this type of simulation, and they draw 200W.  
This is roughly 2 orders of magnitude greater than an iPad.  And then there's 
the question of heat dissipation.

It's still a good point.  I never meant to imply that a server-rendering 
video-streaming architecture is be-all-end-all-optimal, but your point brings 
this into clearer focus.


> 
> at this point, people wont really care much if, say, plugging in their 
> cell-phone to recharge is drawing, say, several amps, given power is 
> relatively cheap in the greater scheme of things (and, assuming migration 
> away from fossil fuels, could likely still get considerably cheaper over 
> time).
> 
> meanwhile, no obvious current/near