> Maybe what I said makes less sense in the case of NIO vs blocking with
> threads - I've mainly been working with Intel Threading Building Blocks
> lately, where the cost of cache cooling is very real. For that reason (and
> the others mentioned - context switches and lock preemption), Intel
> Threading Building Blocks will try and run a single thread per core and load
> balance work amongst these.

I haven't used Building Blocks, but I certainly agree that running
exactly as many threads as cores is probably optimal under most
conditions (assuming cache contention doesn't interact in such a way
as to make things worse; e.g., under extreme conditions you might see
two threads going faster than four).

> would want to use the available cores. I'm just saying that having more
> threads than cores (or rather, more software threads than hardware threads)
> may hurt performance or scalability due to time slicing overheads. Obviously
> it's more complicated than simply creating N worker threads for an N-core
> system though, since if any blocking IO is performed the cores are
> under-utilized.

Agreed.
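
One common compromise when blocking IO is unavoidable, again only as a
rough sketch (the pool sizes and the blocking-fetch/crunch helpers are
arbitrary placeholders, not recommendations): keep the CPU pool pinned
at core count and push blocking calls onto a separate, larger pool so
the CPU workers don't sit idle in a blocked read.

(import '(java.util.concurrent Executors))

(def n-cores (.availableProcessors (Runtime/getRuntime)))

;; CPU-bound work stays on exactly n-cores threads...
(def cpu-pool (Executors/newFixedThreadPool n-cores))

;; ...while blocking IO gets its own, larger pool so a thread parked in
;; a blocking call doesn't leave a core's worth of CPU capacity idle.
;; The 4x factor is an arbitrary placeholder, not a tuned number.
(def io-pool (Executors/newFixedThreadPool (* 4 n-cores)))

;; Hypothetical stand-ins for a blocking fetch and some CPU work.
(defn blocking-fetch [req]
  (Thread/sleep 50)
  (str "data-for-" req))

(defn crunch [data]
  (count data))

(defn handle-request [req]
  (.execute io-pool
            (fn []
              (let [data (blocking-fetch req)]
                (.execute cpu-pool (fn [] (crunch data)))))))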

> However, in an asynchronous server, (or, more importantly, in one where the
> number of threads does not exceed the number of hardware threads) it becomes
> much more likely that a request is processed to completion before it gets
> evicted from the cache (as long as care is taken to prevent false sharing
> with other, independent data which share the cache lines).

Agreed, but with the specific caveat that this holds only when you are
in fact trading latency for throughput. In other words, it's true, but
in any case where the async design let a request run to completion
where a threaded design would have context switched, you are
intrinsically running past your would-be timeslice, and that shows up
as added latency for the other requests waiting behind your one
long/expensive request.

> isn't at all relevant to the discussion. Still, I am very interested to hear
> your and everyone else's real-world experiences.

I come from the perspective of first having written quite a lot of
multi-threaded C++ code (over a few years) that did fairly complex
combinations of "CPU work" and I/O with other services. I am really
confident that the code I/we wrote would never have been completed in
anything close to the same amount of time, or with the same resources,
had we written everything event-based. I cannot stress this point
enough...

During the last year I've been exposed to quite a lot of reactive code
(C++, Python Twisted, some others), with what were, IMO, pretty severe
consequences for code maintainability and productivity (even for
people who have been writing such code for a long time and are clearly
used to it).
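
To make that concrete, here's a toy comparison (all names are
hypothetical stubs; the async versions just call their callbacks
directly so the example is self-contained) of the same request handled
thread-per-request versus callback style. The second shape is what
tends to dominate a reactive code base once control flow gets
non-trivial:

;; Hypothetical stubs standing in for real services.
(defn fetch-user [id] {:id id})
(defn fetch-orders [user] [{:total 10} {:total 32}])
(defn summarize [user orders]
  {:user user :spent (reduce + (map :total orders))})
(defn send-response [report] (println report))

(defn fetch-user-async [id k] (k (fetch-user id)))
(defn fetch-orders-async [user k] (k (fetch-orders user)))
(defn send-response-async [report k] (send-response report) (k nil))

;; Thread-per-request style: control flow reads top to bottom and
;; local state (user, orders) lives in ordinary lets.
(defn handle-threaded [user-id]
  (let [user   (fetch-user user-id)      ; would block on IO
        orders (fetch-orders user)       ; would block on IO
        report (summarize user orders)]  ; CPU work
    (send-response report)))

;; The same logic, event-based: every step becomes a callback, and state
;; and error handling have to be threaded through the nesting by hand.
(defn handle-evented [user-id]
  (fetch-user-async user-id
    (fn [user]
      (fetch-orders-async user
        (fn [orders]
          (let [report (summarize user orders)]
            (send-response-async report (fn [_] nil))))))))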

So, as a default position, I have a strong desire to avoid going
event-based if possible.

In terms of scalability, that definitely mattered when I worked on the
multi-threaded code mentioned above. It translated directly into
hardware costs, i.e. what you had to buy, because in some areas we had
an effectively infinite amount of work to do (such as crawling the
web; you don't really run out of things to do, because you can always
do things more often, better, or faster). However, that experience is
at best anecdotal, since no formal studies of multi-core scalability
were done; doubling the cores simply meant things went "almost twice
as fast", based on empirical observation during development cycles.

On this topic I found it interesting to read about Google's concerns
with, and improvements to, the Linux kernel to support their use case.
I couldn't find the article just now (I'm pretty sure it was on LWN),
but it strongly implied that Google runs production systems with very
many threads. I found that interesting because, given Google's scale,
runtime efficiency is presumably valued very highly relative to the
extra development cost needed to get there. My hypothesis, probably
colored by confirmation bias, is that the extra effort of writing
large, complex systems in an event-based fashion is simply too
expensive to be worth it even at Google's scale - at least in the
general case. Their release of Go was unsurprising to me for this
reason :)

Has anyone here got experience writing really complex systems (big
code bases, services talking to lots of other services, non-trivial
control flow, etc.) in event-based form? Any comments on how it
scales, in terms of development cost, as the size and complexity of
the system grows?

-- 
/ Peter Schuller
