[Libevent-users] Re: Passing data between event loops in multithreaded apps

2006-10-11 Thread Steven Grimm

Thus far I don't have any documentation; however, I intend to add some
doxygen-based documentation in the future.  I'll be particularly
motivated to write documentation if someone expresses interest in
reading it ;).


This seems interesting; I'd certainly like to read some documentation.

-Steve


___
Libevent-users mailing list
Libevent-users@monkey.org
http://monkey.org/mailman/listinfo/libevent-users


Re: [Libevent-users] How does libevent deal with more events than a particular syscall can handle?

2006-11-18 Thread Steven Grimm
Libevent avoids using select() unless there's absolutely no other choice, in 
part because of the artificial limits of select() but mostly because 
select() is inefficient for large numbers of monitored file descriptors.


Most of the point of libevent is that it's a generic wrapper around 
OS-specific event notification mechanisms that don't have the kinds of 
problems select() has. It will use the best available mechanism on 
whatever platform you're on. I can't say anything about Win32 in 
particular, but most platforms at the very least support poll(), which 
shares some of select()'s efficiency problems but at least doesn't have 
a small compile-time limit on the number of pollable file descriptors. 
All the significant modern UNIX-ish platforms support much better 
mechanisms than poll(): kqueue on BSD, event ports on Solaris, epoll on 
Linux, etc. If you're coding to libevent's API you don't have to worry 
about the details of the underlying notification mechanism.
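The backend independence described above is visible in the (1.x-era) libevent API itself: the same few calls work whether epoll, kqueue, event ports, poll, or select is underneath. A minimal sketch, with the socket setup elided and the fd a placeholder:

```c
/* Minimal libevent 1.x sketch: identical code runs over whichever
 * backend libevent selects at runtime. "fd" stands in for any
 * readable descriptor; real code would create a socket here. */
#include <event.h>

static void on_readable(int fd, short which, void *arg)
{
    /* fd is ready; read() from it here */
}

int main(void)
{
    struct event ev;
    int fd = 0; /* placeholder: a real socket in practice */

    event_init();                    /* picks the best available backend */
    event_set(&ev, fd, EV_READ | EV_PERSIST, on_readable, NULL);
    event_add(&ev, NULL);            /* NULL = no timeout */
    return event_dispatch();         /* run the event loop */
}
```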


-Steve


Roger Clark wrote:

For instance, the maximum number of fds usable by select() on Win32
(or other platforms) is low compared to the number of potential
connections needed by a high-throughput HTTP server.

Does libevent call the dispatcher multiple times on different sets of
events? How does the design of the library compensate for this?





Re: [Libevent-users] new member

2007-01-12 Thread Steven Grimm
As the guy who added thread support to memcached, I feel qualified to 
answer this one:


What libevent doesn't support is sharing a libevent instance across 
threads. It works just fine to use libevent in a multithreaded process 
where only one thread is making libevent calls.


What also works, and this is what I did in memcached, is to use multiple 
instances of libevent. Each call to event_init() will give you back an 
"event base," which you can think of as a handle to a libevent instance. 
(That's documented in the current manual page, but NOT in the older 
manpage on the libevent home page.) You can have multiple threads each 
with its own event base and everything works fine, though you probably 
don't want to install signal handlers on more than one thread.


In the case of memcached, I allocate one event base per thread at 
startup time. One thread handles the TCP listen socket; when a new 
request comes in, it calls accept() then hands the file descriptor to 
another thread to handle from that point on -- that is, each client is 
bound to a particular thread. All you have to do is call 
event_base_set() after you call event_set() and before you call event_add().
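The per-thread-base recipe above -- event_base_set() between event_set() and event_add() -- might look roughly like this. The `worker` struct and `add_client` are illustrative names, not memcached's actual code, and the listener/handoff plumbing is elided:

```c
#include <event.h>

/* One event base per worker thread, allocated at startup (libevent 1.x). */
struct worker {
    struct event_base *base;   /* returned by event_init() on this thread */
};

/* Called on a worker thread after the listener thread has accept()ed a
 * client and handed over its file descriptor. */
static void add_client(struct worker *w, int client_fd,
                       void (*cb)(int, short, void *), struct event *ev)
{
    event_set(ev, client_fd, EV_READ | EV_PERSIST, cb, NULL);
    event_base_set(w->base, ev);   /* bind to THIS thread's base ...    */
    event_add(ev, NULL);           /* ... before adding the event       */
}
```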


Unfortunately, you pretty much have to use pipe() to communicate between 
libevent threads. That's a limitation, and honestly it's a pretty big 
one: it makes a master/worker thread architecture, where one thread 
handles all the I/O, much less efficient than you'd like. My initial 
implementation in memcached used an architecture like that and it chewed 
lots of CPU time. That's not really libevent's fault -- no UNIX-ish 
system I'm aware of has an equivalent to the Windows 
WaitForMultipleObjects API that allows you to wake up on semaphores / 
condition variables and on input from the network. Without that, any 
solution is going to end up using pipes (or maybe signals, which have 
their own set of issues in a multithreaded context) to wake up the 
libevent threads.


-Steve


Glen Hein wrote:


Hello,

 

I am doing research for a project that may follow an event-driven 
multi-threaded model with heavy disk and network I/O.  The platform 
will be FreeBSD and Linux.  Overall, we are seeking a C10K solution. 

 

I started reading through the libevent source and noticed several 
comments about not being thread-safe.  However, I have seen references 
to other projects that are multi-threaded and use libevent.  I need to 
explore libevent and determine if it is viable for my project.  Have 
there been discussions on threading issues in the mailing list 
archive?  Is there another source of information on threads and 
libevent? 

 


Thanks,

Glen Hein

 






Re: [Libevent-users] [Patch] Install autoconf header file as evconfig.h

2007-02-16 Thread Steven Grimm

Dave Gotwisner wrote:
I personally hate the proliferation of typedefs.  I have seen u8, U8, 
u_int8, uint8, and many others that all express the same thing 
(similarly for 16-, 32-, and 64-bit sizes).


The lack of a common standard is the problem, IMO, not the existence of 
typedefs per se. The underlying problem is that C doesn't provide any 
built-in portable way of saying you want a data type exactly THIS big, 
short of resorting to structs with bit-counted fields. None of the 
built-in types are required to be a particular size, even if they happen 
to have settled down on particular sizes on present-day architectures. I 
would much rather waste one line of header file somewhere doing a 
seemingly redundant typedef than have my code break on some future 
128-bit machine whose "long" is 64 bits and "long long" is 128. (Heck, 
forget "future" -- such machines exist today!) That would be perfectly 
legal for a C compiler to do. "uint32_t" is much more precise and 
unambiguous than "unsigned long" -- I read that and have no doubt how 
big the author expected the data type to be.


-Steve



Re: [Libevent-users] [Patch] Install autoconf header file as evconfig.h

2007-02-16 Thread Steven Grimm

Dave Gotwisner wrote:
Using your example, if you had 8-, 16-, 64-, and 128-bit native types, 
and you wanted an EXACTLY 32 bit type, how would it work?  Given ANSI 
only specifies char, short, long, and long long, when we support 256 
bit native types, and you still need the smaller ones (or everything 
written to date will break), how would you get the fifth type?


You wouldn't, and your application would fail to build because it's not 
compatible with that architecture -- which IMO is a better situation 
than it building just fine (because you used "long") and failing at 
runtime because the code expected a particular length and got something 
different. If the failure is in an infrequently-used part of the code 
you might not discover it until pretty far down the road. Using typedefs 
gives you fail-fast behavior on systems that can't support the required 
types.


I have worked on projects where different people are familiar with 
different type conventions, so the code is written with two or more of 
the various typedefs, based upon who touches the code when.  Note that 
this is for commercial products not open source, with formal review 
processes, etc.


Me too, and I'm not defending that practice! If anyone sees redundancy 
like that in libevent, it should be stamped out immediately. But the 
fact that word-length typedefs can be used poorly isn't a reason to not 
use them at all; they do serve a useful purpose in some cases.


-Steve


Re: [Libevent-users] event_init() and thread safety

2007-05-01 Thread Steven Grimm
If you're calling event_init() more than once, you never want to call 
event_set() and other such functions. You want to call event_base_set() 
and pass in the handle you get back from event_init(). Using the 
current_base versions of the event calls in a multithreaded app (where 
you are using libevent from multiple threads in parallel) is almost 
certainly going to break your app.


In the MT version of memcached I call event_init() in my startup thread 
and pass the handle to each of the worker threads I spawn. That way 
there's no chance of race conditions during initialization.
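Creating the base in the startup thread and handing it to a worker might look like this; the pthread plumbing is illustrative, not memcached's actual code:

```c
#include <event.h>
#include <pthread.h>

struct worker_args {
    struct event_base *base;   /* created by the startup thread */
};

static void *worker_main(void *arg)
{
    struct worker_args *wa = arg;
    /* Register events with event_base_set(wa->base, ...) here, then
     * run this thread's own loop on its own base: */
    event_base_dispatch(wa->base);
    return NULL;
}

int main(void)
{
    /* The startup thread calls event_init() before any worker exists,
     * so there is no race on current_base during initialization. */
    struct worker_args wa = { event_init() };
    pthread_t t;

    pthread_create(&t, NULL, worker_main, &wa);
    pthread_join(t, NULL);
    return 0;
}
```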


-Steve


harry wrote:

It's unsafe for two threads to call event_init() in parallel because
it sets and returns the global current_base variable.  If the first 
thread gets preempted by the second after calloc() but before the 
return, both calls will return the same value.


My question is whether one should be concerned about the use of 
current_base in other parts of the code.  For example, event_set() 
uses current_base while setting the priority.  Is there any chance 
that a preemption will occur in the middle of retrieving the value of 
current_base, resulting in an invalid pointer?

Also, I assume that each process that loads libevent as a shared 
object must get its own copy of current_base, right?


Harry


Re: [Libevent-users] sensible thread-safe signal handling proposal

2007-11-04 Thread Steven Grimm

On Nov 4, 2007, at 8:13 AM, Marc Lehmann wrote:
This would create additional loops (event_bases). The difference is
that these cannot handle signals (or child watchers) at all, with the
default loop being the only one to do signal handling.


This seems like a totally sane approach to me. Having multiple loops  
is a big performance win for some applications (e.g., memcached in  
multithreaded mode), so making the behavior a bit more consistent is a  
good thing.


Now if only there were a way to wake just one thread up when input  
arrives on a descriptor being monitored by multiple threads... But  
that isn't supported by any of the underlying poll mechanisms as far  
as I can tell.


-Steve



Re: [Libevent-users] sensible thread-safe signal handling proposal

2007-11-04 Thread Steven Grimm


On Nov 4, 2007, at 3:03 PM, Adrian Chadd wrote:


On Sun, Nov 04, 2007, Steven Grimm wrote:


Now if only there were a way to wake just one thread up when input
arrives on a descriptor being monitored by multiple threads... But
that isn't supported by any of the underlying poll mechanisms as far
as I can tell.


Would this be for listen sockets, or for general read/write IO on an FD?


Specifically for a mixed TCP- and UDP-based protocol where any thread  
is equally able to handle an incoming request on the UDP socket, but  
TCP sockets are bound to particular threads.


Unfortunately the vast majority of incoming requests are on the UDP  
socket, too many to handle on one thread.


Before anyone suggests it: a message-passing architecture (one thread  
reads the UDP socket and queues up work for other threads) gave me  
measurably higher request-handling latency than the current setup,  
which works but eats lots of system CPU time because all the threads  
wake up on each UDP packet. It makes sense: the current scheme  
involves fewer context switches for a given request (at least, on the  
thread that ends up handling it), and context switches aren't free.


Ideally I'd love a mode where I could say, "Only trigger one of the  
waiting epoll instances when this descriptor has input available."  
Sort of pthread_cond_signal() semantics, as opposed to the current  
pthread_cond_broadcast() semantics. (Yes, I'm aware that  
pthread_cond_signal() is not *guaranteed* to wake up only one waiting  
thread -- but in practice that's what it does.)


-Steve


Re: [Libevent-users] sensible thread-safe signal handling proposal

2007-11-04 Thread Steven Grimm

On Nov 4, 2007, at 3:07 PM, Christopher Layne wrote:
The issue in itself is having multiple threads monitor the *same* fd via
any kind of wait mechanism. It's short-circuiting application layers, so
that a thread (*any* thread in that pool) can immediately process new
data. I think it would be much more structured, less complex (i.e. better
performance in the long run anyways), and a cleaner design to have a set
number (or even 1) of threads handle the "controller" task of tending to
new network events, push them onto a per-connection PDU queue, or
pre-process in some form or fashion, condsig, and let the previously
mentioned thread pool handle it in an ordered fashion.


You've just pretty accurately described my initial implementation of  
thread support in memcached. It worked, but it was both more CPU- 
intensive and had higher response latency (yes, I actually measured  
it) than the model I'm using now. The only practical downside of my  
current implementation is that when there is only one UDP packet  
waiting to be processed, some CPU time is wasted on the threads that  
don't end up winning the race to read it. But those threads were idle  
at that instant anyway (or they wouldn't have been in a position to  
wake up) so, according to my benchmarking, there doesn't turn out to  
be an impact on latency. And though I am wasting CPU cycles, my total  
CPU consumption still ends up being lower than passing messages around  
between threads.


It wasn't what I expected; I was fully confident at first that the  
thread-pool, work-queue model would be the way to go, since it's one  
I've implemented in many applications in the past. But the numbers  
said otherwise.


-Steve