On Wednesday, 12 November 2014 at 08:55:30 UTC, deadalnix wrote:
> I'm sorry to be blunt, but there is nothing actionable in your
> comment. You are just throwing more and more into the pot until
> nobody knows what is in it. But ultimately, the crux of the problem
> is the thing quoted above.

My point is that you are making too many assumptions about both
applications and hardware.

> 2. The transactional memory thing is completely orthogonal to the
> subject at hand, so, like the implementation details of modern
> chips, it doesn't belong here. In addition, the whole CPU industry
> is backpedaling on the transactional memory concept. It is awesome
> on paper, but it didn't work.

STM is used quite a bit, and IBM ships hardware-backed TM (e.g. in
Blue Gene/Q, zEC12 and POWER8).

For many computationally intensive applications, high levels of
parallelism are achieved through speculative computation: threads
compute results optimistically and only commit the ones that turn
out to be valid. TM supports that pattern directly.
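
To make that concrete, here is a minimal sketch of the speculative
pattern in D, using plain CAS from core.atomic rather than real TM
(the evaluate function is just a hypothetical stand-in for the
expensive work). With TM, the read-compare-write at the end would
simply become one transaction.

import core.atomic;

shared long best = long.max;   // best result committed so far

// Hypothetical expensive evaluation; stands in for real work.
long evaluate(long candidate) { return candidate * candidate; }

void speculate(long candidate)
{
    // Compute speculatively, without holding any lock.
    immutable long result = evaluate(candidate);

    // Optimistic commit: publish only if we still improve on the
    // current best, retrying if another thread got there first.
    long seen = atomicLoad(best);
    while (result < seen && !cas(&best, seen, result))
        seen = atomicLoad(best);
}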

> There are only two ways to achieve good design. You remove useless
> things until there is obviously nothing wrong, or you add more and
> more until there is nothing obviously wrong. I won't follow you
> down the second road, so please stay on track.

Good design is achieved by understanding the different patterns of
concurrency found in applications and how each of them can reach
peak performance on the target hardware.

If D is locked to a narrow memory model, then you can only reach
high performance for a subset of applications.
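
A small illustration of why the model matters, as a sketch using
what core.atomic already exposes today: a producer/consumer handoff
only needs release/acquire ordering, and forcing every shared access
to be sequentially consistent would add fences that weakly ordered
hardware (ARM, POWER) does not need for this algorithm.

import core.atomic;

shared int  payload;
shared bool ready;

void producer()
{
    // Relaxed store of the data, then a release store of the flag.
    atomicStore!(MemoryOrder.raw)(payload, 42);
    atomicStore!(MemoryOrder.rel)(ready, true);
}

void consumer()
{
    // The acquire load pairs with the release store above; a full
    // sequentially consistent barrier here would be stronger (and,
    // on weakly ordered CPUs, slower) than the algorithm requires.
    while (!atomicLoad!(MemoryOrder.acq)(ready)) { }
    assert(atomicLoad!(MemoryOrder.raw)(payload) == 42);
}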

If D wants to support system-level programming, then it needs to
take an open approach to the memory model.
