On Sat, Nov 03, 2007 at 09:15:07PM +0100, Marc Lehmann wrote:
<snip>
> In that mail, I announced that I will work on the problems I encountered
> in libevent (many of which have been reported and discussed earlier on
> this list). After analyzing libevent I decided that it wasn't fixable
> except by rewriting the core parts of it (the inability to have multiple
> watchers for the same file descriptor event turned out to be blocking for
> my applications, otherwise I wouldn't have started the effort in the first
> place...).

A good itch, indeed.

> The results look promising so far: I additionally implemented a libevent
> compatibility layer and benchmarked both libraries using the benchmark
> program provided by libevent: http://libev.schmorp.de/bench.html
> 
> Here is an incomplete list of what I changed and added (see the full
> list at http://cvs.schmorp.de/libev/README, or the cvs repository at
> http://cvs.schmorp.de/libev/):

Man. More pressure to rename my library from "libevnet" to something else ;)
 
<snip>
> * there is full support for fork, you can continue to use the event loop
>   in the parent and child (or just one of them), even with quirky backends
>   such as epoll.

Curious how you managed to do this. Are you checking the process PID on each
loop?
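
For the archives, here is roughly what I imagine that looks like -- a guess
on my part, not libev's actual code, and the epoll re-creation step is an
assumption:

    /* Hypothetical sketch of fork detection by PID comparison.  Remember
     * the PID that owns the backend; if it changed, the process forked,
     * and quirky backends (epoll) must be rebuilt in the child. */
    #include <unistd.h>
    #include <sys/epoll.h>

    struct loop {
        pid_t owner_pid;   /* PID that created/last validated the backend */
        int   epoll_fd;
    };

    static void loop_check_fork(struct loop *l)
    {
        if (l->owner_pid != getpid()) {
            /* in the child: the inherited epoll fd is shared with the
             * parent, so close it, build a fresh one, re-register fds */
            close(l->epoll_fd);
            l->epoll_fd = epoll_create(64);
            /* ...re-add every watched fd with EPOLL_CTL_ADD here... */
            l->owner_pid = getpid();
        }
    }

One getpid() per iteration is cheap, but it's still a check on every loop.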

> * there are two types of timers, based on real time differences and wall
>   clock time (cron-like). timers can also be repeating and be reset at
>   almost no cost (for idle timeouts used by many network servers). time jumps
>   get detected reliably in both directions with or without a monotonic clock.

But then they're not truly "real-time", no?
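
For anyone following along, here's my reading of the two types, sketched
against the posted libev docs (untested on my end):

    /* Per the announcement: an ev_timer measures relative time and can be
     * cheaply restarted (idle timeouts), while an ev_periodic fires on
     * wall-clock boundaries, cron-style. */
    #include <ev.h>

    static void idle_cb(struct ev_loop *loop, ev_timer *w, int revents)
    {
        /* connection sat idle for 60 seconds: drop it */
    }

    static void cron_cb(struct ev_loop *loop, ev_periodic *w, int revents)
    {
        /* fires at every full hour of wall-clock time */
    }

    int main(void)
    {
        struct ev_loop *loop = ev_default_loop(0);

        ev_timer idle;
        ev_timer_init(&idle, idle_cb, 0., 60.);
        ev_timer_again(loop, &idle);  /* re-call per I/O event: cheap reset */

        ev_periodic hourly;
        ev_periodic_init(&hourly, cron_cb, 0., 3600., 0);
        ev_periodic_start(loop, &hourly);

        ev_loop(loop, 0);
        return 0;
    }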

<snip>
> * event watchers can be added and removed at any time (in libevent,
>   removing events that are pending can lead to crashes).

This is news to me. Can you give more detail, maybe with pointers to code?

> * different types of events use different watchers, so you don't have
>   to use an i/o event watcher for timeouts, and you can reset timers
>   separately from other types of watchers. Also, watchers are much smaller
>   (even the libevent emulation watcher only has about 2/3 of the size of a
>   libevent watcher).

libevnet does this for I/O; a timer is always set separately from read/write
events. (Point being, it's doing this using libevent.)

> * I added idle watchers, pid watchers and hook watchers into the event loop,
>   as is required for integration of other event-based libraries, without
>   having to force the use of some construct around event_loop.

Needing to do an operation on every loop is arguably very rare, and there's
not much burden in rolling your own. PID watchers, likewise... how many
spots in the code independently manage processes, as opposed to one unit
which can just catch SIGCHLD? Also, I'm curious how/if you've considered
Win32 environments.
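
Something like this one-unit approach is all I mean -- an untested sketch
against the libevent 1.x API, with the pid-table lookup left as a stub:

    /* One unit reaps every child for the whole process via a single
     * SIGCHLD event; other modules register pids with it instead of
     * each needing its own PID watcher. */
    #include <sys/wait.h>
    #include <signal.h>
    #include <event.h>

    static void sigchld_cb(int sig, short events, void *arg)
    {
        pid_t pid;
        int status;

        /* reap everything that exited since the last signal */
        while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
            ; /* look up pid in a table, notify the owning module */
    }

    int main(void)
    {
        struct event sigev;

        event_init();
        signal_set(&sigev, SIGCHLD, sigchld_cb, NULL);
        signal_add(&sigev, NULL);
        event_dispatch();
        return 0;
    }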

> * the backends use a much simpler design. unlike in libevent, the code to
>   handle events is not duplicated for each backend, backends deal only
>   with file descriptor events and a single timeout value, everything else
>   is handled by the core, which also optimises state changes (the epoll
>   backend is 100 lines in libev, as opposed to >350 lines in libevent,
>   without suffering from its limitations).

libevnet optimizes state changes. Logically every I/O request is single-shot
(which is more forgiving to user code), but it actually sets EV_PERSIST and
delays the libevent bookkeeping until the [libevnet bufio] callback returns.
If the user code submits another I/O op from its callback (highly likely)
then the event is left unchanged. It's still re-entrant-safe because it can
detect further activity up the call chain using some stack message-passing
bits (instead of reference counting, because I also use mem pools, but I
digress). Again, the point being that this can be done using libevent as-is.
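
In sketch form (hypothetical names, and a plain flag standing in for the
stack message-passing bits), the trick is just:

    /* Logically single-shot I/O over a persistent event: only pay for
     * event_del() when the user callback did NOT submit another op. */
    #include <event.h>

    struct bufio {
        struct event ev;
        int          armed;        /* event_add() done, still persisting */
        int          resubmitted;  /* user asked for more I/O in callback */
        void       (*user_cb)(struct bufio *);
    };

    static void dispatch_cb(int fd, short events, void *arg)
    {
        struct bufio *b = arg;

        b->resubmitted = 0;
        b->user_cb(b);             /* user code very likely resubmits here */

        if (!b->resubmitted) {     /* rare case: really tear it down */
            event_del(&b->ev);
            b->armed = 0;
        }                          /* common case: no bookkeeping at all */
    }

    void bufio_read(struct bufio *b, int fd, void (*cb)(struct bufio *))
    {
        b->user_cb = cb;
        if (b->armed) {            /* re-entered from callback: keep event */
            b->resubmitted = 1;
            return;
        }
        event_set(&b->ev, fd, EV_READ | EV_PERSIST, dispatch_cb, b);
        event_add(&b->ev, NULL);
        b->armed = 1;
    }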

> As for compatibility, the actual libev api is very different to the
> libevent API (although the design is similar), but there is an emulation
> layer with a corresponding event.h file that supports the event library
> (but no evbuffer, evdns, evhttp etc.).

Well... if you can persuade me of the utility then this Christmas I might
want to investigate writing an evdns-like component. See the "lookup"
component of libevnet. There are lots of nice things I need in a DNS
resolver that evdns and others are incapable of handling. And I've also
written more HTTP, RTSP, and SOCKS5 parsers than I can remember.

<snip>
> The "obvious" plan would be to take the evhttp etc. code from libevent and
> paste it in to libev, making libev a complete replacement for libevent
> with an optional new API. The catch is, I'd like to avoid this, because I
> am not prepared to maintain yet another library, and I am not keen on
> replicating the configure and portability work that went into libevent so
> far.

If you ask me, it would prove more fruitful to rewrite the DNS and HTTP
components than to replace libevent. The reason is that it would be hard
to substantively improve on DNS/HTTP without altering the API, whereas it's
clearly feasible to improve libevent under the hood without altering the
existing API, and then build your new features on top of that. That's
sort of what I did with libevnet, by adding buffered I/O, DNS, and a
thread-management API atop libevent.

> 
> So, would there be an interest in replacing the "core" event part of
> libevent with the libev code? If yes, there are a number of issues to
> solve, and here is how I would solve them:

Win32 support is important to me, unfortunately, as it likely is to others.
libevent has a very, very large installed base of users.

> * libev only supports select and epoll. Adding poll would be trivial for me,
>   and adding dev/poll and kqueue support would be easy, except that I don't
>   want to set-up some bsd machines just for that. I would, however, opt to
>   write kqueue and /dev/poll backends "dry" and let somebody else do the
>   actual porting stuff to the then-existing backends.

I always thought it would be easier to just create kqueue wrappers around
epoll, poll, select, et al., and then build a library on that. Once you
start adding things like PID events, etc., it's at least worth some thought.
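
A thumbnail of what I mean, with a made-up interface and heavily simplified
(the real sticking point is that kqueue keys events on (fd, filter) while
epoll keys on fd alone, so a faithful wrapper has to merge and split event
masks):

    /* kqueue-flavoured change interface implemented on epoll, so the rest
     * of the library codes against one interface everywhere. */
    #include <sys/epoll.h>

    enum myfilter { MYEV_READ, MYEV_WRITE };  /* stand-ins for EVFILT_* */

    struct mykevent {
        int           fd;
        enum myfilter filter;
        int           add;        /* 1 = EV_ADD, 0 = EV_DELETE */
        void         *udata;
    };

    int my_kevent(int kq /* really an epoll fd */, const struct mykevent *ch)
    {
        struct epoll_event ee;

        ee.events   = ch->filter == MYEV_READ ? EPOLLIN : EPOLLOUT;
        ee.data.ptr = ch->udata;

        return epoll_ctl(kq, ch->add ? EPOLL_CTL_ADD : EPOLL_CTL_DEL,
                         ch->fd, &ee);
    }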

> * libev only supports one "event base". The reason is that I think a
>   leader/ follower pattern together with lazy I/O would beat any multiple
>   event loop implementation, and the fact that each time I see some
>   software module using its own copy of some mainloop meant that it
>   doesn't work together with other such modules :) Still, if integration
>   should be achieved, this is no insurmountable obstacle :->

So you (1) want to support multiple destination events for the same source
but (2) let multiple threads signal discrete events? That sounds like an
invitation for even more trouble. Now you've got mutexes littered all over
the place. That's my version of a nightmare. And if you group discrete
events into the same thread, then you haven't solved the CPU workload
problem.

I too prefer a single event base. The obvious problem arises when you
actually have CPU-intensive work, say an MPEG streaming service--lots of
I/O--which also needs to do intermediate AV processing--lots of CPU. But now
systems such as Linux are adding semaphore/mutex-style events, so it's
economical and easy to independently coordinate worker threads with the main
event loop. This lets developers keep a tight rein on the flow of processing
and minimize concurrency headaches. You get the best of both worlds at their
peak efficiency. By putting thread management into the loop you've created
both advantages and disadvantages; in this instance symmetry isn't so
elegant. In other words, arguably as much baggage has been added as
problem-solving utility.
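
Concretely, this is the kind of thing I mean -- an untested sketch assuming
Linux's new eventfd(2), which behaves like a counting semaphore the loop
can poll:

    /* Workers signal completions through an eventfd; the main libevent
     * loop watches it like any other fd and fires callbacks in-loop. */
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/eventfd.h>
    #include <event.h>

    static int efd;

    static void work_done_cb(int fd, short events, void *arg)
    {
        uint64_t n;
        read(fd, &n, sizeof n);  /* n = completions since the last wakeup */
        /* ...drain the results queue and run user callbacks here... */
    }

    void worker_signal_done(void)    /* callable from any worker thread */
    {
        uint64_t one = 1;
        write(efd, &one, sizeof one);
    }

    int main(void)
    {
        struct event ev;

        event_init();
        efd = eventfd(0, 0);
        event_set(&ev, efd, EV_READ | EV_PERSIST, work_done_cb, NULL);
        event_add(&ev, NULL);
        event_dispatch();
        return 0;
    }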

Though, I think we've already had this debate... so... I'll just shut up ;)

