David Barbour wrote:

Your approach to parallelism strikes me as simplistic. Like saying the Earth is at the center of the Solar system and the Sun goes around the Earth. It sounds simple. It's "easy to conceptualize". Oh, and it requires epicyclic orbits to account for every other planet. Doesn't sound so simple anymore. Like this, a simplistic model becomes a complexity multiplier in disguise.

You propose an actor per object. It sounds simple to you, and "easy to conceptualize". But now programmers face challenges controlling latency and supporting replay, testing, maintenance, verification, and consistency. That is in addition to problems hand-waved through, like line-of-sight and collision detection. It doesn't sound so simple anymore.

The whole point of architecture is to generate the overall outline of a system that addresses a particular problem space within the constraints at hand. The KISS principle applies (along with "seek simplicity and distrust it"). If there isn't a degree of simplicity and elegance in an architecture, the architect hasn't done a particularly good job.

In the past, limitations of hardware, languages, and run-time environments have dictated against taking parallel (or more accurately, concurrent) approaches to problems, even when massive concurrency is the best mapping onto the problem domain - resulting in very ugly code.

Yes, there are additional problems introduced by modeling a problem as massively concurrent - and those are, I think, areas for fruitful research. In particular, regarding the ones you cite:

- controlling latency, supporting replay, testing, maintenance, verification: these are nothing new at the systems level (think about either all the different things that run on a common server, or about all the things that go on in a distributed system such as the federated collection of SMTP servers that we're relying on right now)

- consistency: isn't your message "Avoid the Concurrency Trap by Embracing Non-Determinism"? Isn't a key question what it means to "embrace non-determinism," and how to design systems in an inherently indeterminate environment? (more below)

- now line-of-sight and collision detection, which are more specific to the simulation domain, are interesting in two regards:

-- collision detection (and weapons effects) are easy if you allow each actor to determine for itself "I'm hit," not so easy if you want independent verification by a referee or a physical environment model - the latter pretty much requires some kind of serialization, and the question becomes how to do it

-- line-of-sight calculations are the bane of simulators. Right now, the practice is for each entity to do its own line-of-sight calculations (it doesn't matter whether it's an object invoked by a control thread or an asynchronous actor) - each entity takes a look around (scans a database) to determine who it can see (and be seen by), who it can't, what's blocking its view of other objects, etc. This is very compute intensive, and it's where coders spend a LOT of time optimizing (a CGF has to do this 20 times a second or more, while GIS systems tend to take 30 seconds to several minutes to do the same thing - when I was in the simulation business, I sat in several rather amusing meetings, watching coders from a well-known GIS firm as their jaws dropped when told how fast our stuff did line-of-sight calculations). I expect that there are some serious efficiencies to be gained by performing LOS calculations from a global perspective, and that these can benefit from massive parallelism - I expect there's work on ray tracing and rendering that applies - but that gets pretty far afield from my own experience.
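Just to make the "global perspective" idea concrete, here is a minimal sketch (in Go, purely for brevity) of batching all the pairwise LOS checks and fanning them out across cores, one worker per observer. The heightmap, sampling step, and entity fields are invented stand-ins for the terrain database - an illustration of the shape, not of how any real CGF does it:

    package main

    import (
        "fmt"
        "math"
        "sync"
    )

    type Entity struct {
        X, Y, Alt float64 // position and absolute sensor altitude (hypothetical fields)
    }

    // terrainHeight stands in for the terrain-database lookup.
    func terrainHeight(x, y float64) float64 {
        return 50 * math.Sin(x/300) * math.Cos(y/300) // fake rolling terrain
    }

    // canSee samples terrain along the segment a->b and reports whether any
    // sample pokes above the sight line (simple sampled LOS, no earth curvature).
    func canSee(a, b Entity, step float64) bool {
        dx, dy := b.X-a.X, b.Y-a.Y
        dist := math.Hypot(dx, dy)
        for d := step; d < dist; d += step {
            t := d / dist
            sightAlt := a.Alt + t*(b.Alt-a.Alt)
            if terrainHeight(a.X+t*dx, a.Y+t*dy) > sightAlt {
                return false
            }
        }
        return true
    }

    // visibilityMatrix computes all pairwise LOS results, one goroutine per
    // observer, so the whole sweep spreads across the available cores.
    func visibilityMatrix(es []Entity) [][]bool {
        vis := make([][]bool, len(es))
        var wg sync.WaitGroup
        for i := range es {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                row := make([]bool, len(es))
                for j := range es {
                    if i != j {
                        row[j] = canSee(es[i], es[j], 10.0)
                    }
                }
                vis[i] = row
            }(i)
        }
        wg.Wait()
        return vis
    }

    func main() {
        es := []Entity{{0, 0, 60}, {900, 400, 55}, {2500, -300, 70}}
        for i, row := range visibilityMatrix(es) {
            fmt.Println("entity", i, "sees:", row)
        }
    }

Each row of the visibility matrix is independent, so the sweep scales with however many cores happen to be available; a smarter version would also share terrain samples between nearby rays, which is presumably where the ray-tracing and rendering work comes in.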


The old sequential model, or even the pipeline technique I suggest, does not contradict the known, working structure for consistency.

But is consistency the issue at hand?

This line of conversation goes back to a comment that the limits to exploiting parallelism come down to people thinking sequentially, and to the inherent complexity of designing parallel algorithms. I argue that quite a few problems are more easily viewed through the lens of concurrency - using network protocols and military simulation as examples that I'm personally familiar with.

You seem to be making the case for sequential techniques that maintain consistency. But is that really the question? This entire thread started with a posting about a paper you were giving on Project Renaissance, which contained two points that stood out to me:

"If we cannot skirt Amdahl’s Law, the last 900 cores will do us no good whatsoever. What does this mean? We cannot afford even tiny amounts of serialization."

"Avoid the Concurrency Trap by Embracing Non-Determinism?" (actually not from the post, but from the Project Renaissance home page)

In this, I think we're in violent agreement - the key to taking advantage of parallelism is to "embrace non-determinism."

In this context, I've been enjoying Carl Hewitt's recent writings about indeterminacy in computing. If I might paraphrase a bit, isn't the point that 'complex computing systems are inherently and always indeterminate; let's just accept this, not try to force consistency where it can't be forced, and get on with finding ways to solve problems that work in an indeterminate environment'?

Which comes back to my original comment that there are broad classes of problems that are more readily addressed through the lens of massive concurrency (as a first-order architectural view). And that new hardware advances (multi-core architectures, graphics processors) and language/run-time models (actors, Erlang-like massive concurrency) now allow us to architect systems around massive concurrency (when the model fits the problem).

And, returning to this context:


    "... For huge classes of problems - anything that's remotely
    transactional or event driven, simulation, gaming come to mind
    immediately - it's far easier to conceptualize as spawning a
    process than trying to serialize things. The stumbling block has
    always been context switching overhead. That problem goes away as
    your hardware becomes massively parallel. "

    Are you arguing that:
    a) such problems are NOT easier to conceptualize as parallel and
    asynchronous, or,
    b) parallelism is NOT removing obstacles to taking actor-like
    approaches to these classes of problems, or
    c) something else?


I would argue all three.

Ahh... then I would counter that:

a) you selectively conceptualize only part of the system - an idealized happy path. It is much more difficult to conceptualize your whole system - i.e. all those sad paths you created but ignored. Many simulators have collision detection, soft real-time latency constraints, and consistency requirements. It is not easy to conceptualize how your system achieves these.

In this one, I write primarily from personal experience and observation. There is a huge class of systems that are inherently concurrent, and inherently not serializable. Pretty much any distributed system qualifies - email and transaction processing come to mind. I happen to think that simulators fall into this class - and in this regard there's an existence proof:

- Today's simulators are built both ways:

-- CGFs and SAFs (multiple entities simulated on a single box) - generally written in an object-oriented paradigm in C++ or Java, highly optimized for performance, with code that is incredibly hard to follow and that turns out to be rather brittle

-- networked simulations (e.g., F-16 man-in-the-loop simulators linked by network) are inherently independent processes, linked by networks that have all kinds of indeterminacies vis-a-vis packet delays, packet delivery order, and packet loss (you pretty much have to use multicast UDP and accept packet loss, or you can't keep up with real-time simulation - and the pilots tend to throw up all over the simulators if the timing is off; sensitive thing, the human vestibular system) - a much simpler architecture, and systems that are much easier to follow

Very different run-time environments, very different system architectures. Both work.

Personally, I find networked simulators to be a lot easier to conceptualize than today's CGFs - in one case, adding a new entity (say a plane) to a simulation means adding a new box that has a clean interface. In the other, it involves adding a new object and having to understand all kinds of behind-the-scenes magic, as multiple control threads wind their way through all the objects in the system. One is a clean mapping onto the problem space; the other is just ugly.
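That "clean interface" is essentially: multicast your own state, listen for everyone else's, and never wait for anything. Here is a minimal sketch of the shape. The group address, port, update rate, and wire format are all placeholders for illustration - not the actual PDUs a real networked simulator would put on the wire:

    package main

    import (
        "encoding/binary"
        "fmt"
        "log"
        "math"
        "net"
        "time"
    )

    // group is a placeholder multicast address; a real exercise configures this.
    var group = &net.UDPAddr{IP: net.ParseIP("239.1.1.1"), Port: 3000}

    // sendState packs (id, x, y) into a tiny datagram and multicasts it.
    // Fire and forget: a lost packet is simply superseded by the next one.
    func sendState(conn *net.UDPConn, id uint32, x, y float64) {
        buf := make([]byte, 20)
        binary.BigEndian.PutUint32(buf[0:], id)
        binary.BigEndian.PutUint64(buf[4:], math.Float64bits(x))
        binary.BigEndian.PutUint64(buf[12:], math.Float64bits(y))
        conn.Write(buf) // errors ignored on purpose; there is no retransmit
    }

    // listen applies whatever updates happen to arrive; most recent wins.
    func listen() {
        conn, err := net.ListenMulticastUDP("udp", nil, group)
        if err != nil {
            log.Fatal(err)
        }
        latest := make(map[uint32][2]float64)
        buf := make([]byte, 1500)
        for {
            n, _, err := conn.ReadFromUDP(buf)
            if err != nil || n < 20 {
                continue // a dropped or short packet is just ignored
            }
            id := binary.BigEndian.Uint32(buf[0:])
            x := math.Float64frombits(binary.BigEndian.Uint64(buf[4:]))
            y := math.Float64frombits(binary.BigEndian.Uint64(buf[12:]))
            latest[id] = [2]float64{x, y}
            fmt.Printf("entity %d now at (%.1f, %.1f)\n", id, x, y)
        }
    }

    func main() {
        go listen()

        conn, err := net.DialUDP("udp", nil, group)
        if err != nil {
            log.Fatal(err)
        }
        for tick := 0; tick < 100; tick++ { // roughly 20 Hz, as mentioned above
            sendState(conn, 42, float64(tick), 0)
            time.Sleep(50 * time.Millisecond)
        }
    }

Adding another plane to the exercise is then literally adding another box running the same loop with a different id - nothing else in the system has to know about it.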

Yes, as noted above, serious problems remain - but the question is whether serial or parallel approaches are more tractable at the architectural level.

b) parallelism is not concurrency; it does not suggest actor-like approaches. Pipeline and data parallelism are well proven alternatives used in real practice. There are many others, which I have mentioned before.

Fair point. If we limit ourselves to a discussion of pipelines and data parallelism, I'll concede that they do not necessarily lead to cleaner conceptual mappings between problems and systems architectures. In fact, for the examples I've been talking about, my sense is that a pipelined approach to simulation is not particularly easier to comprehend than current approaches - though it might take better advantage of large numbers of processing cores. In the case of email, I can't even begin to think about applying synchronous parallelism to messages flowing through a federation of mail servers.
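For concreteness, here is roughly what I mean by a pipelined simulation: frames flow through fixed stages connected by channels, each stage free to run on its own core. The stage names and contents are placeholders - the shape is the point, not the physics:

    package main

    import "fmt"

    type frame struct {
        tick int
        note string
    }

    // integrate is a placeholder physics stage.
    func integrate(in <-chan frame) <-chan frame {
        out := make(chan frame)
        go func() {
            defer close(out)
            for f := range in {
                f.note += " integrated;"
                out <- f
            }
        }()
        return out
    }

    // collide is a placeholder collision-detection stage.
    func collide(in <-chan frame) <-chan frame {
        out := make(chan frame)
        go func() {
            defer close(out)
            for f := range in {
                f.note += " collisions checked;"
                out <- f
            }
        }()
        return out
    }

    func main() {
        src := make(chan frame)
        go func() {
            defer close(src)
            for t := 0; t < 3; t++ {
                src <- frame{tick: t}
            }
        }()
        for f := range collide(integrate(src)) {
            fmt.Println("frame", f.tick, ":", f.note)
        }
    }

It parallelizes across stages (and could parallelize within a stage over entities), but each frame still has to march through the stages in order - which is why it doesn't strike me as any easier to reason about than the threaded CGF code it would replace.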

On the other hand, if we look at the larger question of "skirting Amdahl's Law" in an environment with lots of processing cores - certainly within some definitions of "parallelism" - then actor-like massive concurrency approaches are certainly in bounds, and the availability of more cores certainly allows for running more actors without running into resource conflicts.
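By way of contrast with the pipeline above, here is a minimal sketch of the actor-per-entity shape, using goroutines and channels as stand-ins for Erlang-style processes and mailboxes. The entity behaviour is a toy (it just integrates a position); the point is that adding an entity means spawning one more independent mailbox loop, and that the count can scale with the cores available:

    package main

    import (
        "fmt"
        "sync"
    )

    // msg is the only thing an entity ever sees from the outside world.
    type msg struct {
        dt   float64
        done *sync.WaitGroup
    }

    // entity runs as its own lightweight process: it owns its state and
    // communicates only through its inbox.
    func entity(id int, x, vx float64, inbox <-chan msg) {
        for m := range inbox {
            x += vx * m.dt // toy behaviour standing in for real entity logic
            if id == 0 {
                fmt.Printf("entity %d at x=%.2f\n", id, x)
            }
            m.done.Done()
        }
    }

    func main() {
        const n = 10000 // scale this with the cores you have
        inboxes := make([]chan msg, n)
        for i := range inboxes {
            inboxes[i] = make(chan msg, 1)
            go entity(i, 0, float64(i%5), inboxes[i])
        }

        // Drive a few frames: each frame fans a tick out to every entity and
        // waits for all of them - the only serialization point in the sketch.
        for frameNum := 0; frameNum < 3; frameNum++ {
            var wg sync.WaitGroup
            wg.Add(n)
            for _, inbox := range inboxes {
                inbox <- msg{dt: 0.05, done: &wg}
            }
            wg.Wait()
        }
    }

Whether that per-frame barrier itself becomes the "tiny amount of serialization" Amdahl punishes is exactly the kind of question the embrace-non-determinism discussion is about.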

c) performance - context-switching overhead - isn't the most important stumbling block. Consistency, correctness, and complexity are each more important.

Ahh... here I'll throw it back to the question of architectures and design patterns that assume inconsistency as the norm (biological metaphors, if you will). And maybe add a touch of protocol layering techniques (IP packets are inherently unreliable and probabilistic in behavior; we layer TCP on top to provide reliable connections. In other cases - like VoIP and video streaming - we can't go back and retransmit, so we either forget about lost packets or use forward-error-correcting codes).
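To put the layering point in code: the same unreliable send can be wrapped two different ways, one that retransmits until acknowledged (TCP-ish) and one that just lets losses go (VoIP-ish). The lossy channel below is simulated and neither wrapper is a real protocol - a sketch of the pattern, nothing more:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // unreliableSend delivers the packet only some of the time - a stand-in
    // for a raw, best-effort IP hop.
    func unreliableSend(dst chan<- string, pkt string) {
        if rand.Float64() < 0.7 { // about 30% loss, purely illustrative
            dst <- pkt
        }
    }

    // reliableSend layers retransmission on top: resend until acknowledged.
    // (A real transport would also number packets and discard duplicates.)
    func reliableSend(dst chan<- string, ack <-chan struct{}, pkt string) {
        for {
            unreliableSend(dst, pkt)
            select {
            case <-ack:
                return
            case <-time.After(20 * time.Millisecond):
                // no ack in time - retransmit
            }
        }
    }

    // bestEffortSend layers nothing: a lost frame is simply forgotten,
    // because a late frame is worse than a missing one.
    func bestEffortSend(dst chan<- string, pkt string) {
        unreliableSend(dst, pkt)
    }

    func main() {
        dst := make(chan string, 100)
        ack := make(chan struct{}, 1)

        // Receiver: print whatever shows up and ack it (non-blocking, since
        // the best-effort sender never waits for acks).
        go func() {
            for pkt := range dst {
                fmt.Println("received:", pkt)
                select {
                case ack <- struct{}{}:
                default:
                }
            }
        }()

        reliableSend(dst, ack, "control message that must arrive")
        for i := 0; i < 5; i++ {
            bestEffortSend(dst, fmt.Sprintf("video frame %d", i))
        }
        time.Sleep(100 * time.Millisecond) // let the receiver drain
    }

The choice of wrapper is a per-stream policy decision - which is the sense in which inconsistency gets assumed as the norm and handled at whatever layer actually cares about it.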


Like you, I believe we can achieve parallel designs while improving simplicity. But I think I will eschew turning tanks into actors.

Agreed on the first; not, obviously, on the second.






--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra


