BGB wrote:
On 4/4/2012 5:26 PM, Miles Fidelman wrote:
BGB wrote:
Not so sure. Probably similar levels of complexity between a military sim. and, say, World of Warcraft. Fidelity to real-world behavior is more important, and network latency matters for the extreme real-time stuff (e.g., networked dogfights at Mach 2), but other than that, IP networks, gaming class PCs at the endpoints, serious graphics processors. Also more of a need for interoperability - as there are lots of different simulations, plugged together into lots of different exercises and training scenarios - vs. a MMORPG controlled by a single company.


ok, so basically a heterogeneous and distributed MMO.


well, yes, but I am not entirely sure how many non-distributed (single-server) MMOs there are in the first place.

presumably, the world has to be split between multiple servers to deal with all of the users.

some older MMOs had "shards", where users on one server couldn't see what users on a different server were doing, but AFAIK this is generally not considered acceptable in current MMOs (which is why the world is instead divided up into "areas" or "regions", presumably with some sort of load-balancing and similar).

unless of course, this is operating under a different assumption of what a distributed system is than one which allows a load-balanced client/server architecture.
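the kind of region-based partitioning I have in mind would look something like this (a toy sketch of my own, not any particular MMO's scheme; the region size and the hash-based server assignment are assumptions):

```python
# Toy region-based world partitioning: the world is cut into a grid of
# square regions, and each region is owned by exactly one server, so a
# server only has to simulate the entities inside its own cells.

REGION_SIZE = 100.0  # world units per square region (assumed)

def region_of(x, y):
    """Map a world position to the (col, row) region that owns it."""
    return (int(x // REGION_SIZE), int(y // REGION_SIZE))

def assign_server(region, num_servers):
    """Trivial static load-balancing: hash each region onto a server.
    A real system would rebalance hot regions dynamically."""
    return hash(region) % num_servers

print(region_of(250.0, 40.0))  # -> (2, 0)
```

entities crossing a region boundary would then be handed off between servers via server-to-server message passing, which is where the real complexity lives.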

Running on a cluster is very different from having all the intelligence on the individual clients. As far as I can tell, MMOs by and large run most of the simulation on centralized clusters (or at least within the vendor's cloud). Military sims do EVERYTHING on the clients - there are no central machines, just the information distribution protocol layer.



reading some stuff (an overview of the DIS protocol, ...), it seems that the "level of abstraction" is in some ways a bit higher (than game protocols I am familiar with); for example, it will indicate the "entity type" in the protocol, rather than, say, the name of its 3D model.
Yes. The basic idea is that a local simulator - say a tank, or an airframe - maintains a local environment model (local image generation and position models maintained by dead reckoning) - what goes across the network are changes to its velocity vector, and weapon fire events. The intent is to minimize the amount of data that has to be sent across the net, and to maintain speed of image generation by doing rendering locally.
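the scheme is roughly this (a minimal sketch of linear dead reckoning, not the actual DIS algorithms, which define several reckoning models; the 1-metre threshold is an assumption): each simulator extrapolates remote entities from their last reported state, and an entity only broadcasts a fresh update when its true position drifts past the threshold.

```python
import math

THRESHOLD = 1.0  # metres of allowed drift before an update is forced (assumed)

def extrapolate(pos, vel, dt):
    """Linear dead reckoning: predict position from the last known state."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def needs_update(true_pos, predicted_pos, threshold=THRESHOLD):
    """Broadcast a new state update only when the peers' prediction
    of us has drifted too far from where we actually are."""
    return math.dist(true_pos, predicted_pos) > threshold

# Example: the entity banked north, so its real track diverges from the
# straight-line prediction the other simulators are running.
last_pos, last_vel = (0.0, 0.0), (10.0, 0.0)
predicted = extrapolate(last_pos, last_vel, dt=2.0)  # (20.0, 0.0)
actual = (19.0, 3.0)
print(needs_update(actual, predicted))  # drift ~3.16 m > 1 m -> True
```

as long as an entity flies straight and level, no traffic is generated at all - which is exactly how the bandwidth stays low.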


now, why, exactly, would anyone consider doing rendering on the server?...

Well, "render" might be the wrong term here. Think more about map tiling. When you do map applications, the GIS server sends out map tiles. Similarly, at least some MMOs do most of the scene generation centrally. For that matter, think about moving around Google Earth in image mode - the data is still coming from Google servers.

The military simulators come from a legacy of flight simulators - VERY high resolution imagery, very fast movement. Before the simulation starts, terrain data and imagery are distributed in advance - every simulator has all the data needed to generate an out-the-window view, and to do terrain calculations (e.g., line-of-sight) locally.
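a local line-of-sight check over that pre-distributed terrain might look like this (a sketch of my own, assuming a simple heightmap grid and fixed-step ray sampling; real simulators use much better terrain representations):

```python
def line_of_sight(heightmap, a, b, steps=100):
    """True if the straight line from a to b clears the terrain.
    a, b are (x, y, z) points; heightmap[y][x] is ground elevation."""
    ax, ay, az = a
    bx, by, bz = b
    for i in range(1, steps):
        t = i / steps
        x, y = ax + (bx - ax) * t, ay + (by - ay) * t
        z = az + (bz - az) * t
        if z <= heightmap[int(y)][int(x)]:
            return False  # sight line intersects the terrain
    return True

# A ridge of height 5 between two observers at height 3 blocks the view.
flat = [[0] * 10 for _ in range(10)]
ridge = [row[:] for row in flat]
for yy in range(10):
    ridge[yy][5] = 5
print(line_of_sight(flat, (0, 0, 3), (9, 0, 3)))   # True
print(line_of_sight(ridge, (0, 0, 3), (9, 0, 3)))  # False
```

since every simulator holds the same terrain data, this check never needs a round trip to any server.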

ironically, all this leads to more MMOs using client-side physics, and more FPS games using server-side physics, with an MMO generally having a much bigger problem regarding cheating than an FPS.

For the military stuff, it all comes down to compute load and network bandwidth/latency considerations - you simply can't move enough data around, quickly enough, to support high-res. out-the-window imagery for a pilot pulling a 2g turn. Hence you have to do all that locally. Cheating is less of an issue, since these are generally highly managed scenarios conducted as training exercises. What's more of an issue is if the software in one sim. draws different conclusions than the software in another sim. (e.g., two planes in a dogfight, each concluding that it shot down the other one) - that's usually the result of a design bug rather than cheating (though Capt. Kirk's "I don't believe in the no-win scenario" line comes to mind).


There's been a LOT of work over the years in the field of distributed simulation. It's ALL about scaling, and most of the issues have to do with time-critical, cpu-intensive calculations.


possibly, but I meant in terms of the scalability of using load-balanced servers (divided by area) and server-to-server message passing.

Nope. Network latencies and bandwidth are the issue. Just a little bit of jitter in the timing and pilots tend to hurl all over the simulators. We're talking about repainting a high-res. display 20 to 40 times per second - you've got to drive that locally.
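the back-of-the-envelope arithmetic makes the point: at those refresh rates the whole frame budget is smaller than a typical wide-area round trip, so any remote dependency in the render loop is fatal.

```python
# Frame budget at the stated refresh rates: 20-40 Hz leaves only
# 25-50 ms per frame, total - image generation, terrain lookups,
# everything. A 30+ ms network round trip alone would blow it.

for hz in (20, 40):
    budget_ms = 1000 / hz
    print(f"{hz} Hz -> {budget_ms:.0f} ms per frame")
```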



--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra


_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
