Victor wrote:
> Hi Niels,
>
> On Monday 06 August 2007, you wrote:
> > I would like to figure out ways in which libevent can be more
> > thread-friendly without requiring everyone to use threads. So,
> > thread-specific storage for the event base seems like a good idea,
> > and I would certainly appreciate seeing patches.
>
> Just a quick thought:
>
> What about something simple like registering mutex lock/unlock
> callbacks? If mutex callbacks are registered, libevent calls them
> before and after making changes to an event base.
>
> Something like:
>
> event_base_register_lock(event_base, myMutexlock);
> event_base_register_unlock(event_base, myMutexunlock);

Ugh, no!

If you were to share event bases between threads, this would not be adequate. Locks must be used around *all accesses* to a shared resource, not just around *writes*. And if thread A alters an event base while thread B is dispatching on it, there needs to be a mechanism for thread A to wake thread B, unit tests of the add/remove cases, and so on.

Second, it hasn't really been demonstrated that sharing event bases between threads is desirable (versus having one base per thread and some sort of balancing scheme). Given the complexity this would add to libevent, I don't think the shared case should be considered until someone gives a compelling reason. I've been meaning forever to do some benchmarks of the different approaches, but...well...it hasn't happened, and I don't think a miracle is likely in the next couple of weeks. Too many things I want to do, too little time...

With regard to Mark Heily's suggestion (a current_base per thread), I would rather have no current_base at all. Specifically, I'd like event_set() or an event_set() workalike that does not do "ev->ev_base = current_base". That way, if the caller forgets to call event_base_set(), the code fails in an obvious way. I hate subtle failures that can creep in later.

Best regards,
Scott
_______________________________________________
Libevent-users mailing list
Libevent-users@monkey.org
http://monkey.org/mailman/listinfo/libevent-users
