On Thursday, 3 March 2016 at 17:31:59 UTC, Andrei Alexandrescu wrote:
https://www.mailinator.com/tymaPaulMultithreaded.pdf

Andrei

A lot of the data presented is skewed. For instance, the cross-core synchronization costs are measured at 0% writes. It comes as no surprise that synchronizing read-only data is cheap, but I don't think the author is justified in concluding that synchronization is cheap in the general case.
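The skew is easy to see with back-of-envelope arithmetic. A minimal sketch, with made-up placeholder costs (the numbers below are illustrative assumptions, not measurements from the slides):

```python
def sync_cost_ns(write_fraction, read_ns=2, contended_write_ns=100):
    """Average cost of one synchronized access under a toy cost model.

    read_ns and contended_write_ns are invented placeholder values;
    the point is the shape of the curve, not the absolute numbers.
    A contended write forces cache-line transfers between cores, so
    it is modeled as far more expensive than an uncontended read.
    """
    return (1 - write_fraction) * read_ns + write_fraction * contended_write_ns

# A benchmark run at 0% writes only ever sees the cheap case:
print(sync_cost_ns(0.0))  # 2.0
# Even a modest 10% write mix changes the picture entirely:
print(sync_cost_ns(0.1))  # 11.8
```

So measuring at 0% writes tells you nothing about the write-contention term, which is exactly the term that dominates in the general case.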

"new threads are created if all existing are busy but only up to MAX_INT threads" More likely only up to the point where you run out of file handles.

You also have to note that the context switches are measured on an Opteron, which has an 81-cycle iret (it is 330 cycles on Haswell), so that would explain why he finds them much cheaper than expected. The Opteron also has 2x64kB of L1 cache, which probably reduces the cache thrashing caused by context switches, at the cost of single-threaded performance (which doesn't seem like a very good tradeoff for a CPU). In short, the CPU used is good at context switches, especially compared to modern Intels.
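Anyone can sanity-check switch costs on their own hardware rather than trusting slides. A rough sketch (function name and queue-based handoff are my own choice; this times Python-level thread handoffs, GIL included, so read it as an upper bound on the kernel switch itself, not the iret cost):

```python
import queue
import threading
import time

def ping_pong_round_trips(n=10_000):
    """Measure the average round-trip time between two threads.

    Each round trip forces at least two scheduler handoffs, so
    elapsed / n bounds the per-switch cost from above. This is a
    crude estimate, not a substitute for a proper microbenchmark.
    """
    a, b = queue.Queue(), queue.Queue()

    def echo():
        for _ in range(n):
            b.put(a.get())  # bounce each token straight back

    t = threading.Thread(target=echo)
    t.start()
    start = time.perf_counter()
    for _ in range(n):
        a.put(None)
        b.get()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / n  # seconds per round trip

print(f"{ping_pong_round_trips() * 1e6:.1f} us per round trip")
```

Run it on an Opteron and on a Haswell and the gap the slides gloss over shows up directly.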

The other results are not that surprising, like blocking IO being faster than non-blocking IO: the gain from async comes from multiplexing IO, not from making each individual IO operation faster. Or, in server terms, it'll turn your DB-bound frontend into a CPU-bound one and offload the scaling onto the DB. If you don't have another component limiting your server, going async is not going to improve things (for instance if you are already CPU bound).
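The multiplexing point is easy to demonstrate: overlapped waits finish in roughly the time of one wait, while each individual operation is no faster. A minimal sketch using asyncio, with `fake_io` standing in for a blocking backend call (the delay value is an arbitrary stand-in, not a real IO latency):

```python
import asyncio
import time

async def fake_io(delay=0.05):
    # Simulated IO wait; each individual "operation" still takes `delay`,
    # async does nothing to shorten it.
    await asyncio.sleep(delay)

async def run_concurrently(n=20, delay=0.05):
    """Overlap n waits on one thread and return the total wall time."""
    start = time.perf_counter()
    await asyncio.gather(*(fake_io(delay) for _ in range(n)))
    return time.perf_counter() - start

elapsed = asyncio.run(run_concurrently())
# 20 overlapped 0.05s waits complete in about 0.05s, not 1s:
print(f"{elapsed:.2f}s total for 20 x 0.05s waits")
```

All the win is in overlapping the waits; if the frontend is already CPU bound there is nothing left to overlap.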
