David Barbour wrote:

On Tue, Apr 3, 2012 at 5:17 PM, Miles Fidelman <mfidel...@meetinghouse.net> wrote:


        But there are good architectures that won't become spaghetti
        code in these circumstances. If you pipelined 2000 tank data
        objects through four processes each instant, for example (i.e.
        so tanks 1-100 are in the last process of four while 301-400
        are in the first) there would be some clear constraints on
        communication, and state could be updated only at transition
        from one instant to another.

    You're speaking theoretically, of course.  The systems I'm
    familiar with are fairly highly evolved and optimized.  They just
    start from the assumption that you can't spawn a process for every
    object.


I have written simulators before, though not for CGF. Pipelines work very well - they offer low overheads due to batch processing, data parallelism within each stage, and pipeline parallelism between stages, and they can be both real-time and deterministic. Same principle as rendering a scene and running a bunch of vertex shaders each frame. Even if I had a system where context switching and a process per tank were free, I would be disinclined to model things that way. It is more valuable to reason about consistency and robustness, to have determinism for maintenance and regression testing, to know the simulation will run the same way every time, etc.
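
For concreteness, a rough Erlang sketch of that kind of pipeline might look like the following. The update functions, batch size, and module layout are placeholders invented for illustration, not code from any real CGF; the point is just the shape: batches of tank records flow through four stage processes, so several batches can be in flight at once.

-module(tank_pipeline).
-export([start/0, frame/3]).

%% Placeholder per-stage updates. In a real simulator these would do the
%% line-of-sight, sensor, weapons, and movement work; here each one just
%% passes the tank record through unchanged.
update_los(Tank)       -> Tank.
update_sensors(Tank)   -> Tank.
update_weapons(Tank)   -> Tank.
update_positions(Tank) -> Tank.

%% Wire up four stage processes plus a sink; returns the pid of stage 1.
start() ->
    Sink   = spawn(fun() -> sink() end),
    Stage4 = spawn(fun() -> stage(fun update_positions/1, Sink) end),
    Stage3 = spawn(fun() -> stage(fun update_weapons/1, Stage4) end),
    Stage2 = spawn(fun() -> stage(fun update_sensors/1, Stage3) end),
    spawn(fun() -> stage(fun update_los/1, Stage2) end).

%% Each stage is an ordinary process: receive a batch, update every tank
%% in it, and hand the batch to the next stage.
stage(Update, Next) ->
    receive
        {batch, Frame, Tanks} ->
            Next ! {batch, Frame, [Update(T) || T <- Tanks]},
            stage(Update, Next)
    end.

sink() ->
    receive
        {batch, _Frame, _Tanks} -> sink()   %% end of the pipeline
    end.

%% Feed one frame's 2000 tanks in batches of 100, so tanks 1-100 can be
%% in the last stage while tanks 301-400 are entering the first.
frame(Stage1, FrameNo, Tanks) ->
    lists:foreach(fun(Batch) -> Stage1 ! {batch, FrameNo, Batch} end,
                  chunks(100, Tanks)).

chunks(_N, [])                   -> [];
chunks(N, L) when length(L) =< N -> [L];
chunks(N, L)                     -> {H, T} = lists:split(N, L),
                                    [H | chunks(N, T)].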

You seem to be starting from the assumption that a process per object is a good thing.

absolutely - I come from a networking background - you spawn a process for everything - it's conceptually simpler all around - and as far as I can tell, most things that complex systems do are inherently parallel

having to serialize things that are inherently parallel (IMHO) only makes sense if you're forced to by constraints of the run-time environment


http://blog.incubaid.com/2012/03/28/the-game-of-distributed-systems-programming-which-level-are-you/



        You claim managing 2000 asynchronous processes is trivial. I
        think you'll just find different things to complain about
        after you have, say, 2000 actors processing 160000
        asynchronous messages per second (4 messages per actor * 20
        Hz, minimal for your example) plus managing consistent views
        and safe updates of a shared map or environment, or consistent
        snapshots for persistence.


    What makes you assume that every actor is doing anything all the time?
    A lot of them are simply sitting in one place; others are moving but
    not interacting with anything.  The major events are weapons fire
    events, which are not happening at 20 Hz.


I interpreted your words to assume it. You said: "you have 4 main loops touch 2000 objects every simulation frame". To me this sounds like 4 loops each touching the same 2000 objects at 20 Hz = 4 * 2000 * 20 = 160k. I figured each loop was handling something different so as to avoid collisions - e.g. sensors, comms, navigation and obstacle avoidance, targeting. Did you mean something else? Anyhow, a touch likely corresponds to one or two messages. My estimate of 160k messages per second was, IMO, quite conservative.

That's what I meant. Each loop has to touch each object, every simulation cycle, to see if anything has happened that needs attention. It's been a while, but as I recall it's something like the following (roughly sketched in code after the list):
1. loop one, update all lines of sight
2. loop two, update all sensors
3. loop three, calculate weapons effects
4. loop four, update positions
5. loop five, update graphic display
6. repeat ad nauseam
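
In code, that sequential main loop is roughly the following - a sketch only, assuming placeholder update_* functions like the ones in the pipeline sketch above, with render/1 standing in for the display pass:

%% Sketch of the sequential frame loop: every pass touches every object,
%% every cycle, whether or not anything interesting happened to it.
frame_loop(Objects0) ->
    Objects1 = [update_los(O)       || O <- Objects0],   %% loop one
    Objects2 = [update_sensors(O)   || O <- Objects1],   %% loop two
    Objects3 = [update_weapons(O)   || O <- Objects2],   %% loop three
    Objects4 = [update_positions(O) || O <- Objects3],   %% loop four
    render(Objects4),                                    %% loop five
    frame_loop(Objects4).                                %% repeat

render(_Objects) -> ok.    %% stand-in for the graphics pass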

If I were building one of these in, say, Erlang, a tank that is doing nothing would simply sit there and do nothing. If it's moving, maybe a heartbeat kicks off a recalculation every so often. Everything else is a matter of event-driven activity.
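
A rough sketch of what such a per-tank Erlang process might look like - the message shapes, the 50 ms heartbeat, and the helper functions are all made up for illustration:

%% One process per tank, spawned as: spawn(fun() -> tank(#{status => idle}) end).
%% An idle tank just blocks in receive and costs nothing until a message
%% arrives; a moving tank wakes itself with a receive timeout as a heartbeat.
tank(State = #{status := idle}) ->
    receive
        {move_to, Dest}       -> tank(State#{status := moving, dest => Dest});
        {incoming_fire, From} -> tank(handle_fire(From, State))
    end;
tank(State = #{status := moving}) ->
    receive
        {incoming_fire, From} -> tank(handle_fire(From, State));
        stop                  -> tank(State#{status := idle})
    after 50 ->                          %% ~20 Hz heartbeat while moving
        tank(update_position(State))
    end.

handle_fire(_Shooter, State) -> State.   %% placeholder damage model
update_position(State)       -> State.   %% placeholder movement step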

The sequential approach pretty much forces things down a path of touching everything, frequently, in a synchronous manner. Asynchronous, event-driven behavior is a LOT easier to conceptualize and code. (Except for line-of-sight calculations.)

By the way, as soon as you network simulators (at least military ones, not necessarily MMORPGs), it's very much a parallel, asynchronous world. Each simulator maintains its own world view, calculates what other entities are doing by dead reckoning, and the only things that travel across the net are deltas (aspect and velocity changes, weapons fire) - usually taking the form of multicast UDP packets (look up the DIS protocol sometime). It all just works. Not all that new, either - it dates back to BBN's SIMNET work in the late 1980s.
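
The dead-reckoning part is just extrapolation plus a drift threshold. A tiny sketch, with invented function names and no actual DIS packet handling:

%% Extrapolate a remote entity's position from the last state update we
%% received for it.
dead_reckon({X, Y, Z}, {VX, VY, VZ}, DtSeconds) ->
    {X + VX * DtSeconds, Y + VY * DtSeconds, Z + VZ * DtSeconds}.

%% A local entity only broadcasts a new delta when its true position has
%% drifted too far from what everyone else is extrapolating for it.
needs_update(TruePos, LastSentPos, LastSentVel, DtSeconds, Threshold) ->
    distance(TruePos, dead_reckon(LastSentPos, LastSentVel, DtSeconds)) > Threshold.

distance({X1, Y1, Z1}, {X2, Y2, Z2}) ->
    math:sqrt((X1 - X2) * (X1 - X2) + (Y1 - Y2) * (Y1 - Y2) + (Z1 - Z2) * (Z1 - Z2)).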

FYI: It was having a number of our coders tell me that there's just too much context-switching overhead to consider an asynchronous architecture that led me to discover Erlang - one of the few (perhaps the only) environments that support massive concurrency.


    But this is far afield from the basic point:  Someone suggested that
    people think sequentially, or that parallelism is more complicated
    than sequential architectures.  This is a very real case where that
    simply is not so - sequential approaches to this problem space are
    inherently more complicated than parallel ones.


I think there are simple, parallel approaches. I know there are simplistic, parallel approaches.
Not to be impolite, but what point are you trying to make?

This branch of discussion started with someone's comment that:

"Even if there does turn out to be a simple and general way to do parallel programming, there'll always be tradeoffs weighing against it - energy usage and design complexity, to name two obvious ones."

To which my response was:

"... For huge classes of problems - anything that's remotely transactional or event driven, simulation, gaming come to mind immediately - it's far easier to conceptualize as spawning a process than trying to serialize things. The stumbling block has always been context switching overhead. That problem goes away as your hardware becomes massively parallel. "

Are you arguing that:
a) such problems are NOT easier to conceptualize as parallel and asynchronous, or
b) parallelism is NOT removing obstacles to taking actor-like approaches to these classes of problems, or
c) something else?

Miles Fidelman


--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra


