Marc Lehmann wrote:
Keep in mind that the primary use for threads is to improve context switching
times in single-processor situations - event loops are usually far faster at
context switches.

No, I don't think so (re: improving context switching times in single-processor
situations). Yes, a context switch in a state machine is fastest, followed by
coroutines, and then threads. But you're limited to one core.
Now that multiple cores are becoming increasingly common and scalability to
multiple CPUs and even hosts is becoming more important, threads should be
avoided as they do not use those configurations efficiently (again, they are a
single-CPU thing).
You keep saying this, but that doesn't make it true.
While there are exceptions (as always), in the majority of cases you will
not be able to beat event loops, especially when using multiple processes,
as they use the given resources most efficiently.
Only if you don't block, which is frequently hard to ensure if you are
using third-party libraries for database access (or heavy crypto, or
compute-intensive code that is painful to break into explicit steps). In fact,
all those nasty business-related functions that cause us to build systems
in the first place.
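
For what it's worth, here's roughly what I mean - a minimal, untested sketch
(assuming a current libev with ev_run/ev_break; blocking_query() is a made-up
stand-in for any third-party call that may block). The worker thread does the
blocking work and pokes the loop with ev_async, which is the thread-safe way
to wake it:

/* Hand a blocking third-party call to a worker thread and let libev's
 * ev_async wake the loop when it is done. */
#include <ev.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static ev_async done_watcher;
static int query_result;     /* real code would guard shared results with a mutex */

static int blocking_query(void)
{
    sleep(2);                /* stand-in for database access, heavy crypto, ... */
    return 42;
}

static void *worker(void *arg)
{
    struct ev_loop *loop = arg;
    query_result = blocking_query();
    ev_async_send(loop, &done_watcher);   /* thread-safe wakeup of the event loop */
    return NULL;
}

static void done_cb(struct ev_loop *loop, ev_async *w, int revents)
{
    printf("blocking work finished, result = %d\n", query_result);
    ev_break(loop, EVBREAK_ALL);          /* end the demo */
}

int main(void)
{
    struct ev_loop *loop = EV_DEFAULT;
    pthread_t tid;

    ev_async_init(&done_watcher, done_cb);
    ev_async_start(loop, &done_watcher);

    pthread_create(&tid, NULL, worker, loop);
    ev_run(loop, 0);          /* the loop stays responsive while the worker blocks */

    pthread_join(tid, NULL);
    return 0;
}
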
Yes, threads were not meant for these kinds of workloads, and their
overheads are enormous even on systems that are very fast (such as
GNU/Linux).
That is not necessarily so: a spinlock-protected FIFO is hardly an
enormous overhead, and in lightly loaded systems the added latency
of passing control is neither here nor there in most cases, while an
ability to spread load over multiple CPU cores is useful. The service
threads only need to stop, give up their timeslice, and wait for a wakeup
and reschedule if there is no queued work.  More grunt when you
need it, and a bit slower when you don't care since everything is
flying through anyway.
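
Something along these lines (again just a sketch with hypothetical job/queue
names; a POSIX semaphore stands in for the wakeup, and in practice you might
prefer a plain mutex or a lock-free queue):

/* A spinlock-protected FIFO handing work to service threads.  The spinlock
 * only guards the very short queue manipulation; the semaphore lets idle
 * threads give up their timeslice and sleep until work is queued. */
#include <pthread.h>
#include <semaphore.h>
#include <stdlib.h>

struct job {
    struct job *next;
    void (*fn)(void *arg);
    void *arg;
};

static struct job *head, *tail;            /* FIFO of pending jobs */
static pthread_spinlock_t qlock;
static sem_t pending;                      /* counts queued jobs */

void queue_push(struct job *j)             /* called by the producer */
{
    j->next = NULL;
    pthread_spin_lock(&qlock);
    if (tail) tail->next = j; else head = j;
    tail = j;
    pthread_spin_unlock(&qlock);
    sem_post(&pending);                    /* wake one sleeping service thread */
}

static struct job *queue_pop(void)
{
    sem_wait(&pending);                    /* sleep if there is no queued work */
    pthread_spin_lock(&qlock);
    struct job *j = head;
    head = j->next;
    if (!head) tail = NULL;
    pthread_spin_unlock(&qlock);
    return j;
}

static void *service_thread(void *unused)
{
    for (;;) {
        struct job *j = queue_pop();
        j->fn(j->arg);                     /* run the (possibly blocking) work */
        free(j);
    }
    return NULL;
}

void queue_init(int nthreads)
{
    pthread_spin_init(&qlock, PTHREAD_PROCESS_PRIVATE);
    sem_init(&pending, 0, 0);
    while (nthreads-- > 0) {
        pthread_t t;
        pthread_create(&t, NULL, service_thread, NULL);
    }
}
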

I'm not discounting multi-process architectures - but sometimes the
marshalling needed to flatten data into shared memory (or pipe it through
the kernel) is painful, and the sharing that threads give you is handy.  Use
a modern heap implementation if malloc/free is a bottleneck.

I'm also not claiming that threads are a silver bullet and will always
be faster - it depends on how much computing you do in response to
a received network request. But writing them off as bluntly as you
have is crazy.

Leader/follower *can* make life difficult with races unless objects whose
callbacks are still being processed are made unavailable to the event loop
as it's handed over to the new leader.
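
Roughly like this (sketch only; conns[] and the busy-wait wait_for_event()
are toy stand-ins for a real poll/epoll set - the point is just the handoff
order):

/* A connection is removed from the watched set BEFORE leadership is handed
 * over, so the new leader cannot dispatch it again while its callback is
 * still running. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NCONN 4

struct conn { int id; atomic_int watched; };
static struct conn conns[NCONN];
static pthread_mutex_t leader_lock = PTHREAD_MUTEX_INITIALIZER;

static struct conn *wait_for_event(void)    /* a real leader would block in poll() */
{
    for (;;) {
        for (int i = 0; i < NCONN; i++)
            if (atomic_load(&conns[i].watched))
                return &conns[i];
        usleep(1000);
    }
}

static void process(struct conn *c)         /* the application callback */
{
    printf("thread %lu processing conn %d\n",
           (unsigned long)pthread_self(), c->id);
    usleep(10000);                          /* possibly blocking work */
}

static void *leader_follower_thread(void *unused)
{
    for (;;) {
        pthread_mutex_lock(&leader_lock);        /* become the leader */
        struct conn *c = wait_for_event();
        atomic_store(&c->watched, 0);            /* unwatch BEFORE the handoff ... */
        pthread_mutex_unlock(&leader_lock);      /* ... then promote a follower */

        process(c);                              /* others keep dispatching meanwhile */
        atomic_store(&c->watched, 1);            /* re-arm once the callback is done */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    for (int i = 0; i < NCONN; i++) { conns[i].id = i; atomic_store(&conns[i].watched, 1); }
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, leader_follower_thread, NULL);
    pause();                                     /* demo runs until interrupted */
    return 0;
}
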
Threads make it impossible to use multiple CPUs independently, which
unfortunately is required for performance.
And the justification for this is what, precisely?  There are so many
counterexamples to this that it's hard to avoid them.  Many threaded systems do
bottleneck on contended state, but they'd bottleneck if they were processes
accessing state in shared memory too, and the inevitable sharing of the heap
data structures isn't nearly the problem it used to be. Even then, threads
will often scale quite well on a quad-core system unless the design is
really brain-dead.

(Yeah, I know it's pointless arguing with Marc, but sometimes you just have
to say: Marc is talking b*llocks.)

