Re: [Libevent-users] Multithreaded behavior

2006-09-08 Thread William Ahern
On Fri, Sep 08, 2006 at 04:17:20PM -0700, Scott Lamb wrote:
> On Sep 8, 2006, at 12:13 PM, William Ahern wrote:
> >Ah. I was approaching it from another angle (one thread per event  
> >loop, and
> >the question being how to inject events and balance events into  
> >each event
> >loop).
> 
> Like the first thing I described? Have you actually done this and had  
> any luck with it? I suppose I could give it a go, at least for a  
> simple balancing scheme.

What I've done in an MTA is use descriptor passing. A master process listens
for connections and then sends them to its children. I keep a tree of
children, ordered by number of outstanding connections. When a child loses a
connection it sends a message down a pipe so I can decrement its connection
count. But that's sort of heavyweight; ultimately I liked the idea because
it provided robustness.
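
(For anyone who hasn't done descriptor passing before: the mechanics are just
sendmsg()/recvmsg() with SCM_RIGHTS ancillary data over a Unix-domain
socketpair shared with the child. A rough, untested sketch with error
handling trimmed and names made up; this is not the MTA's actual code:)

    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>

    /* Send fd to the peer of unix_sock, e.g. one end of a socketpair()
     * that the child inherited. */
    int send_fd(int unix_sock, int fd)
    {
        struct msghdr msg;
        struct iovec iov;
        struct cmsghdr *cmsg;
        union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } ctl;
        char byte = 0;

        memset(&msg, 0, sizeof msg);
        memset(&ctl, 0, sizeof ctl);
        iov.iov_base = &byte;               /* must carry at least one data byte */
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctl.buf;
        msg.msg_controllen = sizeof ctl.buf;

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;       /* kernel duplicates fd for the peer */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(unix_sock, &msg, 0) == 1 ? 0 : -1;
    }

    /* Receive a descriptor sent by send_fd(); returns the new fd or -1. */
    int recv_fd(int unix_sock)
    {
        struct msghdr msg;
        struct iovec iov;
        struct cmsghdr *cmsg;
        union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } ctl;
        char byte;
        int fd = -1;

        memset(&msg, 0, sizeof msg);
        iov.iov_base = &byte;
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctl.buf;
        msg.msg_controllen = sizeof ctl.buf;

        if (recvmsg(unix_sock, &msg, 0) <= 0)
            return -1;
        cmsg = CMSG_FIRSTHDR(&msg);
        if (cmsg != NULL && cmsg->cmsg_level == SOL_SOCKET &&
            cmsg->cmsg_type == SCM_RIGHTS)
            memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
        return fd;                          /* the child's own copy of the socket */
    }

The receiver gets its own copy of the descriptor, so the master can close its
copy as soon as the hand-off is done.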

I would like to try this scheme in a single-process, multiple-thread
environment. Each event loop could have an outstanding event listening on a
pipe (just like the signal pipe in libevent). The thread accepting
connections would select a thread to hand off a connection to, dump it into
a queue, then signal that thread's event pipe. Actually, this is how I had
assumed everybody else was doing it after I saw the event_base support go
into the library. And also the source of my "poll on a mutex", because using
a pipe for this also always seemed a little heavyweight. Though I've never
benchmarked it, so I have no right ;)
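
Roughly, the worker side of that would look like the following against the
libevent 1.x calls of the day (event_init, event_set, event_base_set,
event_add, event_base_dispatch). In this untested sketch the pipe itself
doubles as the hand-off queue and carries the raw descriptor, which
simplifies the queue-plus-signal scheme described above; the struct and
function names are invented and error handling is omitted:

    #include <event.h>
    #include <unistd.h>

    struct worker {
        struct event_base *base;   /* this thread's private event loop */
        struct event pipe_ev;
        int wakeup[2];             /* [0] read end; acceptor writes fds to [1] */
    };

    /* Fires when the acceptor writes an accepted descriptor into our pipe. */
    static void on_wakeup(int pipe_fd, short what, void *arg)
    {
        struct worker *w = arg;
        int conn;

        if (read(pipe_fd, &conn, sizeof conn) == sizeof conn) {
            /* register read/write events for conn with event_set(),
               event_base_set(w->base, ...), event_add(), as usual */
        }
        (void)what;
    }

    /* Start routine for each worker thread (e.g. via pthread_create()). */
    static void *worker_main(void *arg)
    {
        struct worker *w = arg;

        w->base = event_init();                 /* one base per thread */
        pipe(w->wakeup);                        /* in practice, create this before
                                                   the thread starts so the acceptor
                                                   already knows wakeup[1] */
        event_set(&w->pipe_ev, w->wakeup[0], EV_READ | EV_PERSIST, on_wakeup, w);
        event_base_set(w->base, &w->pipe_ev);   /* bind the event to *this* base */
        event_add(&w->pipe_ev, NULL);
        event_base_dispatch(w->base);           /* returns only when no events remain */
        return NULL;
    }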



Re: [Libevent-users] Multithreaded behavior

2006-09-08 Thread Scott Lamb

On Sep 8, 2006, at 12:13 PM, William Ahern wrote:
> Ah. I was approaching it from another angle (one thread per event loop, and
> the question being how to inject events and balance events into each event
> loop).


Like the first thing I described? Have you actually done this and had any
luck with it? I suppose I could give it a go, at least for a simple
balancing scheme.



> I'd never want to touch the above scheme w/ a ten foot pole, mostly
> because one of the greatest benefits I enjoy with event-oriented programming
> is the lack of contention (i.e. not having to use a mutex everywhere).


I don't like the idea of losing that either, but there's no getting around
having some sort of multiprocessing if you're going to use multiple
processors, and recent hardware changes are making it harder to justify not
doing so.




> There are lots of scenarios where I might have multiple events outstanding,
> all related to a single context (TCP proxying, for instance). In the above
> design, I'd have to begin littering mutexes around my code. Relating those
> outstanding events to a shared event loop implicitly frees me from having to
> deal w/ this problem.


Yeah, that's certainly true for my SSL proxy. There'd need to be some sort
of grouping of events, and I'm not sure how that should work if a new member
is reported while the event is out for delivery. Held until next time, I
guess.


--
Scott Lamb 



Re: [Libevent-users] Multithreaded behavior

2006-09-08 Thread William Ahern
On Fri, Sep 08, 2006 at 10:18:11AM -0700, Scott Lamb wrote:
> At a high level, I think it would require the basic poll algorithm to  
> be:
> 
> lock
> loop:
> while there are events:
> dequeue one
> unlock
> handle it
> lock
> if someThreadPolling:
> condition wait
> else:
> someThreadPolling = true
> poll for events
> lock
> fire condition
> unlock
> 
> so whatever thread happens to notice that it's out of events does a  
> poll, and the others can see the results immediately. But I haven't  
> addressed actually putting new fds into the poll array. I'm not sure  
> what the behavior there has to be. I admit it - this approach is  
> complicated.

Ah. I was approaching it from another angle (one thread per event loop, and
the question being how to inject events and balance events into each event
loop). I'd never want to touch the above scheme w/ a ten foot pole, mostly
because one of the greatest benefits I enjoy with event-oriented programming
is the lack of contention (i.e. not having to use a mutex everywhere).

There are lots of scenarios where I might have multiple events outstanding,
all related to a single context (TCP proxying, for instance). In the above
design, I'd have to begin littering mutexes around my code. Relating those
outstanding events to a shared event loop implicitly frees me from having to
deal w/ this problem.

When I use threads the only mutexes I want are the ones protecting
malloc()/free(), and the handful of other similar global resources.

> Anyway, I'm not suggesting adopting this without actual proof that  
> it's better. I need to blow the dust off my benchmark tools, but I'm  
> willing to put the effort into trying things out if I hear ideas I like.

Fair enough.


Re: [Libevent-users] Multithreaded behavior

2006-09-08 Thread Scott Lamb

On Sep 8, 2006, at 10:18 AM, Scott Lamb wrote:

> lock
> loop:
> while there are events:
> dequeue one
> unlock
> handle it
> lock
> if someThreadPolling:
> condition wait
> else:
> someThreadPolling = true
> poll for events
> lock

Oops, here's where "someThreadPolling = false" should go.

> fire condition
> unlock


--
Scott Lamb 



Re: [Libevent-users] Multithreaded behavior

2006-09-08 Thread Scott Lamb

On Sep 7, 2006, at 11:49 PM, William Ahern wrote:

> On Thu, Sep 07, 2006 at 11:29:50PM -0700, Scott Lamb wrote:
>> I think libevent's current multithreaded behavior is not terribly
>> useful:
>>
>> 1. You can't safely share a single event_base among a pool of
>> threads. This is actually what I'd like to do with threads,
>> especially now that four-core systems are becoming cheap. (My SSL
>> proxy should be able to put those extra cores to use.)
>> It's...tricky...to get right, though.
>
> Why would you ever want to do this? I mean, in one sense it could simplify
> some multi-threaded designs. The complexity it adds, however, hardly seems
> worth it compared to how simple this could be done on a per-application
> basis using the existing API.


How would you do it with the existing API? The best I've got is to have:

(1) an "acceptor" thread which just works on the listen sockets and throws
accepted sockets to other threads based on some heuristic

(2) the "worker" threads that actually handle connections

The acceptor would lock, throw something into the target's "hey, add this"
queue, then send it a wakeup.


But there are a couple performance aspects that I don't like:

(1) There's really no guarantee the workers are equally busy.
(2) No connection can actually proceed without being transferred across
threads.

Maybe (1) could be addressed with some sort of rebalancing scheme...but at
that point, it might get as complicated as the scheme below, and each
application would have to implement that complexity.


> Actually, to get this right from both an aesthetic as well as efficiency
> perspective would require, I think, libevent to be able to poll on both a
> condition variable as well as traditional descriptor objects.


At a high level, I think it would require the basic poll algorithm to be:


lock
loop:
while there are events:
dequeue one
unlock
handle it
lock
if someThreadPolling:
condition wait
else:
someThreadPolling = true
poll for events
lock
fire condition
unlock

so whatever thread happens to notice that it's out of events does a poll,
and the others can see the results immediately. But I haven't addressed
actually putting new fds into the poll array. I'm not sure what the behavior
there has to be. I admit it - this approach is complicated.
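
For concreteness, here's that algorithm transcribed into C with pthreads,
reading the "poll for events" step as dropping the mutex around the backend
call and adding a someThreadPolling reset after polling (see the correction
elsewhere in the thread). dequeue_event(), handle(), and poll_for_events()
are placeholders for whatever a shared base would actually provide, so this
sketches only the locking discipline, not libevent internals:

    #include <pthread.h>
    #include <stddef.h>

    struct ev;                                 /* stands in for a ready event */
    extern struct ev *dequeue_event(void);     /* NULL when the ready queue is empty */
    extern void handle(struct ev *e);          /* run the user callback */
    extern void poll_for_events(void);         /* kevent()/epoll_wait()/poll();
                                                  refills the ready queue */

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int some_thread_polling;

    static void *dispatch_thread(void *arg)
    {
        struct ev *e;

        (void)arg;
        pthread_mutex_lock(&lock);
        for (;;) {
            while ((e = dequeue_event()) != NULL) {
                pthread_mutex_unlock(&lock);
                handle(e);                     /* callbacks run without the lock */
                pthread_mutex_lock(&lock);
            }
            if (some_thread_polling) {
                /* another thread is in the backend; sleep until it publishes */
                pthread_cond_wait(&cond, &lock);
            } else {
                some_thread_polling = 1;
                pthread_mutex_unlock(&lock);   /* drop the lock around the backend */
                poll_for_events();
                pthread_mutex_lock(&lock);
                some_thread_polling = 0;       /* reset so someone else can poll */
                pthread_cond_broadcast(&cond); /* "fire condition" */
            }
        }
        return NULL;                           /* not reached */
    }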


Anyway, I'm not suggesting adopting this without actual proof that it's
better. I need to blow the dust off my benchmark tools, but I'm willing to
put the effort into trying things out if I hear ideas I like.



> Maybe the underlying capability could feasibly be added to kqueue's
> Nonetheless it's a pretty far fetched proposition.


Well, I'm definitely not the first person to have suggested handling events
in multiple threads simultaneously. Take a look at:

* Java's nio API. I don't know if it's horribly complicated inside, and I
haven't used it in this way, much less actually benchmarked it, but they
have some discussion of concurrency in their API docs; see
java.nio.channels.Selector, "Concurrency" section.

* SEDA - http://www.eecs.harvard.edu/~mdw/proj/seda/

* Jeff Darcy's design notes - http://pl.atyp.us/content/tech/servers.html


Best regards,
Scott

--
Scott Lamb 



Re: [Libevent-users] Multithreaded behavior

2006-09-07 Thread William Ahern
On Thu, Sep 07, 2006 at 11:29:50PM -0700, Scott Lamb wrote:
> I think libevent's current multithreaded behavior is not terribly  
> useful:
> 
> 1. You can't safely share a single event_base among a pool of  
> threads. This is actually what I'd like to do with threads,  
> especially now that four-core systems are becoming cheap. (My SSL  
> proxy should be able to put those extra cores to use.)  
> It's...tricky...to get right, though.

Why would you ever want to do this? I mean, in one sense it could simplify
some multi-threaded designs. The complexity it adds, however, hardly seems
worth it compared to how simple this could be done on a per-application
basis using the existing API.

Actually, to get this right from both an aesthetic as well as efficiency
perspective would require, I think, libevent to be able to poll on both a
condition variable as well as traditional descriptor objects. Maybe the
underlying capability could feasibly be added to kqueue's Nonetheless
it's a pretty far fetched proposition.


> I'm not sure what the ideal thread behavior would be, though, much  
> less how to achieve it in a backward-compatible way. Opinions?
> 

Personally, the ideal thread behavior is pretty close to the existing state
of things. Could you be more specific about the ideas you have which involve
threads sharing the same event object (disregarding my opening sarcasm)? ;)


[Libevent-users] Multithreaded behavior

2006-09-07 Thread Scott Lamb
I think libevent's current multithreaded behavior is not terribly useful:


1. You can't safely share a single event_base among a pool of threads. This
is actually what I'd like to do with threads, especially now that four-core
systems are becoming cheap. (My SSL proxy should be able to put those extra
cores to use.) It's...tricky...to get right, though.


2. If you forget event_base_set on an event, it's associated with the latest
base created. This will probably work most of the time. It'd be much less
confusing if it consistently broke. (A sketch of the explicit association
follows this list.)


3. Each new base created leaks the existing ev_signal_pair descriptors.

4. Signals are delivered to whatever event loop happens to see them first.


5. It uses sigprocmask(), which has undefined behavior when threads are in
use. [1]


6. You can't destroy an event_base.
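
To make (2) concrete, the explicit association looks like this in the 1.x
API as I understand it; skipping the event_base_set() call silently leaves
the event bound to whichever base event_init() created most recently. The
callback and parameter names here are placeholders:

    #include <event.h>
    #include <stddef.h>

    /* on_readable, sock, and arg stand in for the application's own
       callback and state. */
    static void add_to_base(struct event_base *base, struct event *ev,
                            int sock, void (*on_readable)(int, short, void *),
                            void *arg)
    {
        event_set(ev, sock, EV_READ | EV_PERSIST, on_readable, arg);

        /* Without this call the event is tied to the newest base (point 2). */
        event_base_set(base, ev);

        event_add(ev, NULL);
    }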

I'm not sure what the ideal thread behavior would be, though, much less how
to achieve it in a backward-compatible way. Opinions?


[1] http://www.opengroup.org/onlinepubs/007908799/xsh/sigprocmask.html

--
Scott Lamb 
