On Friday, 5 June 2015 at 13:44:16 UTC, Dmitry Olshansky wrote:
If there is affinity and we assume that the OS schedules threads on the same cores*, then each core has its cache loaded with (some of) the stacks of its fibers. If instead fibers are shared across all cores, then each core will have to cache the stacks of all fibers, which is wasteful.
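(As an aside, to make that arrangement concrete: here is a minimal D sketch of a per-thread fiber pool. All names and the round-robin scheduler are mine, purely for illustration; the point is only that each worker thread creates and resumes its own fibers, so a fiber's stack is touched by one thread and, given affinity, stays in one core's cache.)

import core.thread : Fiber, Thread;
import std.stdio : writefln;

Fiber makeFiber(size_t workerId, size_t n) {
    // Helper so each delegate captures its own copies of workerId and n.
    return new Fiber({
        writefln("worker %s, fiber %s: part 1", workerId, n);
        Fiber.yield();   // suspend; the owning thread resumes us later
        writefln("worker %s, fiber %s: part 2", workerId, n);
    });
}

void runWorker(size_t workerId, size_t fiberCount) {
    Fiber[] fibers;
    foreach (n; 0 .. fiberCount)
        fibers ~= makeFiber(workerId, n);

    // Naive round-robin scheduler: fibers never leave this thread.
    bool anyAlive = true;
    while (anyAlive) {
        anyAlive = false;
        foreach (f; fibers)
            if (f.state != Fiber.State.TERM) {
                f.call();
                anyAlive = true;
            }
    }
}

void delegate() makeWorkerMain(size_t workerId) {
    // Returns a fresh closure per call, so each thread gets its own id.
    return { runWorker(workerId, 2); };
}

void main() {
    Thread[] threads;
    foreach (id; 0 .. 2)
        threads ~= new Thread(makeWorkerMain(id)).start();
    foreach (t; threads)
        t.join();
}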

If you cannot control affinity, then you can't take advantage of hyper-threading either? I need to think about this in terms of _smart_ scheduling and adaptive load balancing.

Moving fibers across threads has no effect on any of the above, even if there is some truth to it.

In order to get benefits from hyper-threading you need to pay close attention to how you schedule, or you should turn it off.

There is simply no way to control which core executes which thread to begin with; that assignment is OS territory.

If your OS does not support hyper-threading-level control, you should turn it off...

The only good reason for not switching is that you lack
resources/know-how.

Reasons were presented, but there is nothing in your answer that even acknowledges them.

No, there were no performance-related reasons, only TLS (which is a questionable feature to begin with).
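(For readers outside this thread, a minimal sketch of why TLS is the sticking point: D module-level variables are thread-local by default, so each thread owns a separate copy. A fiber suspended on one thread and resumed on another would silently switch to a different copy, and a compiler is free to cache the TLS address across the yield. The variable names below are mine.)

import core.thread : Thread;
import std.stdio : writeln;

int counter;               // thread-local by default in D
__gshared int globalCount; // explicitly process-global, for contrast

void main() {
    counter = 1;
    globalCount = 1;
    new Thread({
        writeln(counter);     // prints 0: this thread has its own fresh copy
        writeln(globalCount); // prints 1: __gshared is shared by all threads
    }).start().join();
}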

Then it's a good chance for you to prove your design by experimentation. That is, if we all accept the concurrency issues of moving fibers that violate some language guarantees.

There is nothing to prove. You either perform worse or better than a carefully scheduled event-based solution in C++. You either perform worse or better than Go 1.5 in scheduling and GC.

However, doing well in externally designed and executed benchmarks on _language_ _features_ is good marketing (even if that 10-20% edge does not matter in real-world applications).

Right now, neither concurrency nor GC is really a D language feature; they are more like library/runtime features. That makes it difficult to excel in those areas. In languages like Go, Erlang, and Pony, concurrency is a language feature.
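(To make the "library feature" point concrete: in D a fiber is just a class in core.thread, with no scheduler or language syntax behind it, unlike Go's built-in `go` statement. A minimal usage sketch:)

import core.thread : Fiber;
import std.stdio : writeln;

void main() {
    auto f = new Fiber({
        writeln("part 1");
        Fiber.yield();      // suspend; control returns to the caller
        writeln("part 2");
    });
    f.call();               // runs the fiber until the yield
    writeln("between calls");
    f.call();               // resumes after the yield, runs to completion
    assert(f.state == Fiber.State.TERM);
}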
