On Tue, Apr 3, 2012 at 5:17 PM, Miles Fidelman <mfidel...@meetinghouse.net>
 wrote:

>
>> But there are good architectures that won't become spaghetti code in
>> these circumstances. If you pipelined 2000 tank data objects through four
>> processes each instant, for example (i.e. so tanks 1-100 are in the last
>> process of four while 301-400 are in the first) there would be some clear
>> constraints on communication, and state could be updated only at transition
>> from one instant to another.
>>
> You're speaking theoretically, of course.  The systems I'm familiar with
> are fairly highly evolved and optimized.  They just start from the
> assumption that you can't spawn a process for every object.


I have written simulators before, though not for CGF. Pipelines work very
well - they offer low overheads due to batch processing, data parallelism
within each stage, and pipeline parallelism between stages, and they can
be both real-time and deterministic. Same principle as rendering a scene
by running a bunch of vertex shaders each frame.
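
To make the shape concrete, here's a sketch in Go - the Tank fields, the
stage names, and the batch size of 100 are placeholders I invented, not
anything from a real CGF codebase. Each stage owns a batch exclusively
while it works on it, so batch 1 can be in the last stage while batch 4
enters the first, and state only commits at tick boundaries:

package main

import "fmt"

// Hypothetical tank record; a real CGF object carries far more state.
type Tank struct {
    ID   int
    X, Y float64
}

// A stage owns each batch exclusively while it works on it, then
// hands it downstream; no shared mutable state between stages.
func stage(f func([]Tank), in <-chan []Tank, out chan<- []Tank) {
    for batch := range in {
        f(batch)
        out <- batch
    }
}

func main() {
    sense := func(b []Tank) { /* update sensor picture */ }
    plan := func(b []Tank) { /* navigation, targeting */ }
    move := func(b []Tank) { /* integrate positions */ }
    emit := func(b []Tank) { /* publish fire/contact events */ }

    chans := make([]chan []Tank, 5)
    for i := range chans {
        chans[i] = make(chan []Tank)
    }
    for i, f := range []func([]Tank){sense, plan, move, emit} {
        go stage(f, chans[i], chans[i+1])
    }

    tanks := make([]Tank, 2000)
    go func() { // feed one tick's worth of batches
        for i := 0; i < 2000; i += 100 {
            chans[0] <- tanks[i : i+100]
        }
    }()
    for i := 0; i < 20; i++ {
        <-chans[4] // drain; batch 1 exits while batch 4 enters
    }
    fmt.Println("tick complete") // state commits at the tick boundary
}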

Even if I had a system where context switching and process-per-tank were
free, I would be disinclined to model things that way. It is more valuable
to be able to reason about consistency and robustness, to have determinism
for maintenance and regression testing, and to know the simulation will
run the same way every time.
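
To illustrate what determinism buys you: with a fixed update order and a
seeded RNG, a regression test can just assert a checksum of the final
state. A toy sketch in Go (the step function and checksum format are
invented for the example):

package main

import (
    "fmt"
    "hash/fnv"
    "math/rand"
)

// One deterministic step: fixed iteration order, seeded RNG, no
// dependence on wall clock or scheduler interleaving.
func step(pos []float64, rng *rand.Rand) {
    for i := range pos {
        pos[i] += rng.Float64() // stand-in for the real dynamics
    }
}

// Checksum of the whole state; a regression test asserts that this
// value never changes for a given seed and input script.
func checksum(pos []float64) uint64 {
    h := fnv.New64a()
    for _, p := range pos {
        fmt.Fprintf(h, "%.9f,", p)
    }
    return h.Sum64()
}

func main() {
    pos := make([]float64, 2000)
    rng := rand.New(rand.NewSource(42))
    for tick := 0; tick < 100; tick++ {
        step(pos, rng)
    }
    fmt.Println(checksum(pos)) // identical on every run
}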

You seem to be starting from the assumption that process per object is a
good thing.

http://blog.incubaid.com/2012/03/28/the-game-of-distributed-systems-programming-which-level-are-you/


>
>> You claim managing 2000 asynchronous processes is trivial. I think you'll
>> just find different things to complain about after you have, say, 2000
>> actors processing 160000 asynchronous messages per second (4 messages per
>> actor * 20 Hz, minimal for your example) plus managing consistent views and
>> safe updates of a shared map or environment, or consistent snapshots for
>> persistence.
>>
>
> What makes you assume that every actor is doing anything all the time?  A
> lot of them are simply sitting in one place; others are moving but not
> interacting with anything.  The major events are weapons-fire events, which
> are not happening at 20 Hz.


I interpreted your words to assume it. You said: "you have 4 main loops
touch 2000 objects every simulation frame". To me this sounds like 4 loops
each touching the same 2000 objects at 20 Hz = 4 * 2000 * 20 = 160k. I
figured each loop was handling something different so as to avoid
collisions - e.g. sensors, comms, navigation and obstacle avoidance,
targeting. Did you mean something else? Anyhow, a touch likely corresponds
to one or two messages. My estimate of 160k messages per second was, IMO,
quite conservative.

I grant we could optimize cases where a tank doesn't need certain
processing. But let's not start by assuming that most active objects in our
simulation are just sitting still - that assumption often fails in practice.
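
And even if dormancy really were that common, a pipeline exploits it
cheaply: a pre-pass partitions each batch so the expensive stages touch
only the active tanks. A sketch, with the Moving/Engaged flags invented
for illustration:

package main

import "fmt"

type Tank struct {
    ID              int
    Moving, Engaged bool // illustrative dormancy flags
}

// Cheap pre-pass: expensive stages then run only over the active
// slice, so a dormant tank costs one branch per tick, not a message.
func active(batch []Tank) []Tank {
    out := make([]Tank, 0, len(batch))
    for _, t := range batch {
        if t.Moving || t.Engaged {
            out = append(out, t)
        }
    }
    return out
}

func main() {
    batch := []Tank{{1, true, false}, {2, false, false}, {3, false, true}}
    fmt.Println(len(active(batch))) // 2
}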


>
> The difficult part of the job, and the one that makes a shared-nothing
> approach problematic, is line-of-sight calculations - who can see whom.
>  Ultimately that has to be solved.
>

Navigation, obstacle avoidance, convoying and formations, and a number of
other problems are also non-trivial.
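
On line-of-sight in particular: the usual broad phase is a spatial index
rebuilt once per tick and read-only thereafter, which drops the all-pairs
who-sees-whom scan to a short candidate list and slots naturally into a
pipeline stage. A uniform-grid sketch in Go (the cell size and Tank
fields are placeholders of mine):

package main

import "fmt"

const cell = 100.0 // metres per grid cell; placeholder value

type Tank struct {
    ID   int
    X, Y float64
}

type key struct{ cx, cy int }

// Build once per tick; afterwards every stage may read it freely,
// avoiding the O(n^2) all-pairs scan.
func index(tanks []Tank) map[key][]int {
    g := make(map[key][]int)
    for i, t := range tanks {
        k := key{int(t.X / cell), int(t.Y / cell)}
        g[k] = append(g[k], i)
    }
    return g
}

// Candidates within one cell radius; real LOS (terrain, occlusion)
// then runs only against this short list.
func near(g map[key][]int, t Tank) []int {
    k := key{int(t.X / cell), int(t.Y / cell)}
    var out []int
    for dx := -1; dx <= 1; dx++ {
        for dy := -1; dy <= 1; dy++ {
            out = append(out, g[key{k.cx + dx, k.cy + dy}]...)
        }
    }
    return out
}

func main() {
    tanks := []Tank{{1, 10, 10}, {2, 50, 80}, {3, 900, 900}}
    g := index(tanks)
    fmt.Println(near(g, tanks[0])) // tanks 1 and 2 share a cell
}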


>
> But this is far afield from the basic point:  Someone suggested that
> people think sequentially, or that parallelism is more complicated than
> sequential architectures.  This is a very real case where that simply is
> not the case - sequential approaches to this problem space are inherently
> more complicated than parallel ones.


I think there are simple, parallel approaches. I know there are simplistic,
parallel approaches.

Regards,

Dave

-- 
bringing s-words to a pen fight
