language_fan wrote:
Wed, 23 Sep 2009 10:43:53 -0400, Jeremie Pelletier thusly wrote:

You're right that concurrency is a different concept from threading,
but I wouldn't give up threading for a pure concurrent model either.
I believe D is aiming at giving programmers a choice of the tools they
wish to use. I could see a concurrent model with message passing and a
threading model with shared data both being used at once in the same
program.

The danger in too much flexibility is that concurrency is not easy, and it is getting increasingly complex. You need to be extraordinarily good at manually managing all concurrent use of data. If I may predict something, it is that high-level models will emerge that avoid many of the low-level pitfalls. These models will not provide 100% efficiency, but they are getting faster and faster without compromising the safety aspect. This already happened with memory allocation (manual vs garbage collection - in common applications, if not in special cases). Before that we handed some error detection over to the compiler (e.g. we no longer write array bounds checks ourselves). And optimizations (e.g. register allocation). You may disagree, but I find it much more pleasant when the application never crashes, even if it runs 15% slower than optimal C++ code would.

15% slower is an extreme performance hit. I agree that code safety is useful, and I use this model all the time for initialization and other code which isn't real-time, but 15% takes away a lot of the application's responsiveness; if you have 50 such applications running on your system, you just spent $1000 more on hardware to get the performance that entry-level hardware delivers with faster code.

If you wrote a real-time renderer with that 15% hit, you'd get a very noticeable difference in framerate (at 60 FPS, 15% is the difference between 60 and 51 frames per second). Not to mention standard GUIs become laggy on slower machines: just MSN Messenger slows to a crawl on my old computer, yet that same machine can run the first UT, which does far more operations per second, because there's no performance hit from safer code bloat.
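As a small, concrete instance of that safety-vs-speed trade-off in D: plain array indexing carries a bounds check in non-release builds, while indexing through the array's .ptr property skips it. A minimal sketch - the function names are mine, not anything from this thread:

import std.stdio;

int sumChecked(int[] a)
{
    int s = 0;
    foreach (i; 0 .. a.length)
        s += a[i];      // bounds-checked unless compiled with -release
    return s;
}

int sumUnchecked(int[] a)
{
    int s = 0;
    foreach (i; 0 .. a.length)
        s += a.ptr[i];  // raw pointer indexing: faster, no safety net
    return s;
}

void main()
{
    auto data = [1, 2, 3, 4];
    writeln(sumChecked(data), " ", sumUnchecked(data)); // 10 10
}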

Shared data is always faster than message passing, so you could
implement real-time code with it, and use message passing for other
parts, such as delegating GUI messages from the main loop.

Shared data may be faster on shared-memory machines, but this is not a given in the general case.
Shared data being harder to manage than message passing does not make it
a bad thing; it just means you can have two different models for two
different usages.

Shared data is not a single homogeneous model. Things like transactional memory work on shared-memory systems, but they still have different semantics.

Yes, I am well aware of that. I don't want to favor any particular design; I prefer to learn the semantics of a few of them and implement them all. Then I get to choose the model to use depending on my needs.

Take search algorithms, for example: you can pick between a binary search, B-tree, R-tree, quadtree, octree, BSP, and a bunch of others, along with dozens of variants of the ones mentioned, depending on the data you're working with.

It's the same for concurrency: off the top of my head I can think of vector processing, functional calls, STM, message passing and shared memory. All are valid models, each with their own pros and cons, together forming a complete all-around solution.
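To make that concrete, here is a minimal sketch of two of those models coexisting in one program, assuming D2's std.concurrency (message passing) and core.atomic (shared data); the worker/counter setup is my own illustration, not code from this thread:

import std.concurrency; // message passing between threads
import core.atomic;     // atomic operations on shared data
import std.stdio;

// Shared-data model: one counter, updated atomically by any thread.
shared long counter;

// Message-passing model: the worker owns no other state and talks
// to its owner thread only through its mailbox.
void worker()
{
    for (;;)
    {
        auto n = receiveOnly!int();
        if (n < 0)                    // sentinel: stop
            break;
        atomicOp!"+="(counter, n);    // the shared-data side
    }
    send(ownerTid, "done");           // the message-passing side
}

void main()
{
    auto tid = spawn(&worker);
    foreach (i; 0 .. 100)
        send(tid, 1);
    send(tid, -1);
    receiveOnly!string();             // wait for the worker to finish
    writeln(atomicLoad(counter));     // prints 100
}

A real-time inner loop could hit the shared counter directly while control messages flow through mailboxes, which is roughly the split described above.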

Jeremie
