On Tue, Mar 16, 2010 at 9:51 AM, Christophe Meessen
<christo...@meessen.net> wrote:
> Brandon Black wrote:
>>
>> The thing we all seem to agree on is that eventloops
>> beat threads within one process, but the thing we disagree on is
>> whether it's better to have the top-level processes be processes or
>> threads.
>>
>
> I'm really not so sure about the former. The developers of the ICE system
> [http://www.zeroc.com], a CORBA-like middleware, ran benchmarks and
> concluded that one thread per socket is the most efficient model. The same
> model is used in one of the most efficient CORBA implementations,
> omniORB [http://omniorb.sourceforge.net/].
>
> This is probably also because such an application can't easily be turned
> into an event-loop program, since the "callbacks" may have long execution
> times. They would have to be rewritten as state machines, using timers to
> move from one state to the next. That is awkward and doesn't seem any more
> efficient than a plain thread. Users would dislike it.
> My impression is that the discussion is biased by having a particular
> use-case pattern in mind and an almost exclusive focus on performance.

Well, yes: if their code isn't well-structured for event loops, then
threads will work better :)

The "thread per socket" thing is something I've run into as well
though, at least on Linux, regardless of whether the threads are
threads or processes.  What that boils down to is that Linux has a
serializing lock on each socket.  Normally most people are dealing
with TCP sessions, and one "socket" is a serial TCP session anyways,
and so this isn't a practical concern.  However, with small UDP
transactions (think designs like DNS servers) involving a single
request packet and a single reply packet, this lock on the server's
listening socket becomes a limiting factor.  You'd like to scale up by
putting several threads of execution behind a single socket, but it
just doesn't work because of the socket locking.
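
To make the contention concrete, here's a rough sketch of the setup
that doesn't scale (not my server's actual code; the port number and
thread count are made up): several threads all blocking in recvfrom()
on one shared UDP socket, so every packet dequeue funnels through that
one socket's lock:

  /* sketch: N threads sharing ONE UDP socket -- the pattern that
   * serializes on the per-socket lock and fails to scale up. */
  #include <pthread.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <arpa/inet.h>

  static int shared_fd;                  /* the one socket everyone reads */

  static void *worker(void *arg)
  {
      char buf[512];
      (void)arg;
      for (;;) {
          struct sockaddr_in peer;
          socklen_t plen = sizeof(peer);
          ssize_t n = recvfrom(shared_fd, buf, sizeof(buf), 0,
                               (struct sockaddr *)&peer, &plen);
          if (n < 0)
              continue;
          /* ... parse request, build reply ... */
          sendto(shared_fd, buf, n, 0, (struct sockaddr *)&peer, plen);
      }
      return 0;
  }

  int main(void)
  {
      struct sockaddr_in sa;
      pthread_t tid[4];
      int i;

      shared_fd = socket(AF_INET, SOCK_DGRAM, 0);
      memset(&sa, 0, sizeof(sa));        /* 0.0.0.0 */
      sa.sin_family = AF_INET;
      sa.sin_port = htons(5300);         /* made-up port */
      bind(shared_fd, (struct sockaddr *)&sa, sizeof(sa));

      for (i = 0; i < 4; i++)            /* all four contend on shared_fd */
          pthread_create(&tid[i], 0, worker, 0);
      pthread_join(tid[0], 0);
      return 0;
  }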

So what you end up doing is spawning a thread/process per UDP socket,
and having something else load-balance all your requests from the
official public port number to the several ports your application
listens on (Linux's ipvsadm can do this in software right on the same
host).  There are NUMA considerations in this problem too, concerning
where you place the processes, where you place your NICs, and how the
IRQs get routed, etc.  To some degree the kernel auto-balances this
stuff, but libnuma and/or numactl (or other similar tools) are handy
too.  I ran into this writing an actual DNS server, but while
researching the socket scaling issue I came across a reference to the
Facebook guys facing the same problem with UDP-based memcached not
scaling up, detailed here:
http://www.facebook.com/note.php?note_id=39391378919 .  Sounds like
they found the kernel-level issue and patched around it locally, but
never merged upstream?
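
And a rough sketch of that workaround (again illustrative: the ports,
thread count, and the ipvsadm lines in the comment are made up, not my
actual setup): each worker owns its own socket bound to its own port,
so no two threads ever touch the same socket lock, and the public port
is spread across them externally:

  /* sketch: one UDP socket per worker thread, each on its own port
   * (5301..5304 here, made-up), so there's no shared socket lock.
   * Something like ipvsadm then maps the public port onto these,
   * roughly along the lines of:
   *   ipvsadm -A -u $PUBLIC_IP:53 -s rr
   *   ipvsadm -a -u $PUBLIC_IP:53 -r 127.0.0.1:5301 -m   (and so on)
   */
  #include <pthread.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <arpa/inet.h>

  #define NWORKERS  4
  #define BASE_PORT 5301

  static void *worker(void *arg)
  {
      int port = BASE_PORT + (int)(long)arg;
      struct sockaddr_in sa;
      char buf[512];
      int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* private to this thread */

      memset(&sa, 0, sizeof(sa));
      sa.sin_family = AF_INET;
      sa.sin_port = htons(port);
      bind(fd, (struct sockaddr *)&sa, sizeof(sa));

      for (;;) {
          struct sockaddr_in peer;
          socklen_t plen = sizeof(peer);
          ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                               (struct sockaddr *)&peer, &plen);
          if (n < 0)
              continue;
          /* ... handle request, build reply ... */
          sendto(fd, buf, n, 0, (struct sockaddr *)&peer, plen);
      }
      return 0;
  }

  int main(void)
  {
      pthread_t tid[NWORKERS];
      long i;
      for (i = 0; i < NWORKERS; i++)
          pthread_create(&tid[i], 0, worker, (void *)i);
      pthread_join(tid[0], 0);
      return 0;
  }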

_______________________________________________
libev mailing list
libev@lists.schmorp.de
http://lists.schmorp.de/cgi-bin/mailman/listinfo/libev
