Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-05 Thread Marc Lehmann
On Mon, Nov 05, 2007 at 02:13:20PM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
> >It's higher than necessary, and more complex than necessary.
> 
> Higher than necessary? I don't care; it's worth it.
> 
> More complex than necessary? Not really.

Well, I gave an example that is less complex and more efficient, the best of
all worlds. And fully binary-compatible.

> >responsible.
> 
> Erm, no. In the ENOMEM case, I know which fd you decided to kill, though
> not actually that you've killed it. ENOMEM is not a per-file descriptor
> problem.

The word "kill" is misleading, the fd is still there, alive and
kicking. But the event handler has been stopped.

> I'm looking at the Linux kernel source code now (core_sys_select), and 
> you are wrong on Linux as well:

Maybe differing Linux versions? I definitely get ENOMEM with gigabytes of
free memory when exceeding the configured limit.

Of course, Linux might *also* return ENOMEM for transient conditions.

> might be useful, but it's nothing the caller can't do on its own by 
> retrying the failed function.

The problem is that *all* callers will need to do this, and this is not
something I will ever require of all callers.

As a callback will solve your problem (the special case) and will make it
much easier for the general case, it definitely seems the better solution,
leading to better programs in general.

> which allowed attributes specified at eventloop creation time 
> (extensible without further API changes). One of them might have been
> 
> libname_eventloop_attr_setenomem(libname_eventloop_attr_t *, void 
> (*enomem_callback)(/*undecided*/), void *enomem_callback_arg);

That's interesting, but I guess I will go for a simpler set-callback
function.

The question still remains under what conditions exactly to call it - *alloc
is obvious, but should there be a separate handler
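For illustration, a minimal sketch of what such a set-callback function could
look like; the name ev_set_enomem_cb, its signature, and the retry loop are
assumptions made for the example, not an actual libev interface:

  #include <stdlib.h>

  static void (*enomem_cb)(void *arg);
  static void *enomem_arg;

  /* user registers one global handler for allocation failure */
  void ev_set_enomem_cb (void (*cb)(void *arg), void *arg)
  {
    enomem_cb  = cb;
    enomem_arg = arg;
  }

  /* inside the library, wherever allocation can fail */
  static void *ev_malloc (size_t size)
  {
    void *p;

    /* give the handler a chance to free memory, sleep and retry, or abort */
    while (!(p = malloc (size)) && enomem_cb)
      enomem_cb (enomem_arg);

    return p;
  }

A handler for the "must not fail" use case discussed in this thread could
then simply sleep and return, making every allocation inside the library
block until memory is available again.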

> >>* any X11 application is screwed if you close its connection to the server
> >
> >Uhm, no? :)
> 
> Uhm, yes.

Repeating untruths doesn't make them any truer. You confuse the default
action of terminating your program with "being screwed", but that is
simply not true (even if libX11 makes this hard, other X libs such as the
more modern XCB don't).

Most X11 apps just exit, but that's not a law.

> Not necessarily. Servers do occasionally reload their listen sockets 
> (SIGHUP handlers); they just don't have logic to do so on demand from a 
> crazy library.

They still have to handle accept errors, though.

> >(There are userspace nfs daemons, but even if they use libevent, they are
> >far more broken and fragile than the standard nfs daemons in linux).
> 
> Please don't assume I'm a moron. 

I don't, but you keep making untrue statements that I correct. If you think I
assume you are a moron by correcting you, you could just stop making false
generic statements such as the above, and we will not run into any problems :)

However, let me repeat that I do not assume you are a moron, and it probably
isn't too nice to assume I did. It's certainly not healthy for our otherwise
very productive discussion, which I would hate to lose :(

> would not have said this if I hadn't actually seen libevent-based NFS
> code that ships on real systems.

> Specifically, I'm talking about the version of rpc.idmapd running by 
> default on RHEL 5.0.

which actually is not an NFS daemon (just like portmap is not an NFS
daemon; it's a specific service that can optionally be used by the NFS
daemons).




Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-05 Thread Scott Lamb

Marc Lehmann wrote:

> On Sun, Nov 04, 2007 at 09:37:52PM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
> > > There is actually a lot of code that relies on this just working, and the
> > > only other event loop I know that has a problem with this is Tk.
> > 
> > Ugh, I'd argue that idiom is broken. But if the support's free, I guess
> > it doesn't matter.
> 
> Well, the idiom is how it works in most languages nowadays, so whether broken
> or not, it's important to support.
> 
> The main problem is that with the current libevent implementation, failures
> are completely silent.

I've used garbage-collected languages without this problem. It's not the
language; it's the idiom.

> > > This is true, but it's an optional feature you don't have to use.  In case
> > > you wonder, EV, the Perl interface to libev, uses this feature.
> > > 
> > > It makes most sense when embedding, of course (not all the world is an .so
> > > :).
> > 
> > Hmm, in your Perl example, I wouldn't rule out you wanting to share the
> > event loop with some C-based library and being unable to do so.
> 
> If you want to share it, you of course cannot do that unless that C-based
> library uses the same version of the code.
> 
> But this is in no way different to libevent: if one program links it
> statically and another shared, or if one program uses 1.3d and another 1.3e
> (different sonames), then you can't share it either.

The former is of course the fault of the linking program. The latter is
fixed (recently); libevent is now versioning properly.

> > There's definitely nothing you can't do with a void*, so this is all a
> > question of efficiency.
> 
> It's also a question of coding overhead, which is probably more important
> than efficiency in this case.
> 
> Especially as efficiency is a red herring: in the proposed libev API,
> callbacks get the watcher object passed, which
> 
> a) is often what you want anyway
> b) is allocated by user code, so you can just make the struct larger
>    by appending your data.
> 
> This solves the efficiency issue and also makes for a simpler API.
> 
> In fact, I pondered long on whether to even add a void *arg to each event
> watcher, but I thought the overhead of having to "subclass" is large in the
> common case where a single pointer indeed is enough.
> 
> In any case, let me repeat that this is an optional feature, not a
> limitation, in the API.

It is an invitation for abuse (losing compatibility).

> > I assert that the cost of a sizeof(void*) to
> > point to the relevant part of your structure (which can be
> > nearby...still reasonable cache locality) is not too high.
> 
> It's higher than necessary, and more complex than necessary.

Higher than necessary? I don't care; it's worth it.

More complex than necessary? Not really.

> > > All callbacks will be called with EV_ERROR when an error occurs. And yes,
> > > if you don't do error handling and endlessly retry the same operation in a
> > > loop, you run into problems.
> > > 
> > > But as that is an obvious programming bug, I don't see any problem here.
> > 
> > Hmm. Let me introduce a use case: an event-driven program which must not
> > fail. init or similar.
> > 
> > I worked on such a program recently. If it were unreliable, you would
> > have to send the system back to the factory for repair (i.e., flashing
> > new software). On ENOMEM, it would basically sleep and retry. This was
> > quite successful (memory could be temporarily consumed by network
> > buffers, etc, which cleared itself up after a while).
> 
> This could be solved more easily by having a generic callback that's called
> in such special conditions.
> 
> The alternative of having every event_add etc. call surrounded by checking
> code is not realistic, especially if some of the code is not easily
> changeable (such as the libevent code itself, which doesn't check for such
> errors!).
> 
> > For this program, it's important to know more than that an error has
> > occurred. EV_ERROR is totally inadequate.
> 
> You already know more: you know which watcher caused it, or which watcher is
> responsible.

Erm, no. In the ENOMEM case, I know which fd you decided to kill, though
not actually that you've killed it. ENOMEM is not a per-file descriptor
problem.

> > What is my program supposed to do? It can't distinguish them, and the
> > correct behavior in each of these conditions is totally different. Also,
> > in the program I'm thinking of, "libev chose to kill this file
> > descriptor" probably means a network link just went down. Ergh.
> 
> Killing the file descriptor might indeed be harsh. All that libev does now
> is stop the event watcher and signal an error.

That's better...in the sense that "we've stopped murdering people; now
we only beat them severely" is better.

> > > Besides, if you cannot malloc the few bytes ev_once requires you need a
> > > *lot* of good error handling code to continue sensibly.
> > 
> > Yes, as I've mentioned above, there are programs for which this level of
> > error handling is necessary.
> 
> I agree, but libevent already doesn't provide this, so it's moot to compare
> this case to libevent. However, it would be very interesting to actually
> talk about an API that solves this problem.
> 
> For example, on ENOMEM, one could have a callback

Re: Heap-based timer implementation (was Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent)

2007-11-05 Thread Marc Lehmann
> Also, it seems libevent is exporting all those heap functions (they are
> all external symbols). It probably shouldn't. Or maybe there is some macro
> trick that fixes this, but at least in my .so all min_ functions are
> exported.

Easier: they are declared static. I need to study my nm output closer;
ignore me, sorry (but I caught it in time :)



Re: Heap-based timer implementation (was Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent)

2007-11-05 Thread Marc Lehmann
On Mon, Nov 05, 2007 at 01:46:19PM -0500, Nick Mathewson <[EMAIL PROTECTED]> 
wrote:
> In case anybody here isn't watching the commits list [1], Niels
> applied a patch from Maxim Yegorushkin to make the timeout
> implementation based on a heap, rather than a RB-tree.  Thanks, Maxim!

Also, it seems libevent is exporting all those heap functions (they are
all external symbols). It probably shouldn't. Or maybe there is some macro
trick that fixes this, but at least in my .so all min_ functions are
exported.



Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-05 Thread Marc Lehmann
On Sun, Nov 04, 2007 at 09:37:52PM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
> > There is actually a lot of code that relies on this just working, and the
> > only other event loop I know that has a problem with this is Tk.
> 
> Ugh, I'd argue that idiom is broken. But if the support's free, I guess
> it doesn't matter.

Well, the idiom is how it works in most languages nowadays, so whether broken
or not, it's important to support.

The main problem is that with the current libevent implementation, failures
are completely silent.

> >>>   as is required for integration of other event-based libraries, without
> >>>   having to force the use of some construct around event_loop.
> > 
> > I don't understand that; there is no construct around event_loop, it's
> > handled completely separately.
> 
> My question was in response to your "I added idle watchers, pid watchers

Ah, now I get it :)

Well, it requires a construct around event_loop because there is no way to
hook into event_loop. Of course, if you have those watcher types you don't
need anything around your loop call. Sorry if that was worded confusingly.

> > This is true, but it's an optional feature you don't have to use.  In case
> > you wonder, EV, the Perl interface to libev, uses this feature.
> > 
> > It makes most sense when embedding, of course (not all the world is an .so
> > :).
> 
> Hmm, in your Perl example, I wouldn't rule out you wanting to share the
> event loop with some C-based library and being unable to do so.

If you want to share it, you of course cannot do that unless that C-based
library uses the same version of the code.

But this is in no way different to libevent: if one program links it
statically and another shared, or if one program uses 1.3d and another 1.3e
(different sonames), then you can't share it either.

> There's definitely nothing you can't do with a void*, so this is all a
> question of efficiency.

It's also a question of coding overhead, which is probably more important than
efficiency in this case.

Especially as efficiency is a red herring: in the proposed libev API,
callbacks get the watcher object passed, which

a) is often what you want anyway
b) is allocated by user code, so you can just make the struct larger
   by appending your data.

This solves the efficiency issue and also makes for a simpler API.

In fact, I pondered long on whether to even add a void *arg to each event
watcher, but I thought the overhead of having to "subclass" is large in the
common case where a single pointer indeed is enough.

In any case, let me repeat that this is an optional feature, not a
limitation, in the API.
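To illustrate the "make the struct larger" approach, here is a sketch of
embedding a watcher in a user structure; the callback signature follows
libev's documented style, while the my_conn struct and its fields are
invented for the example:

  #include <ev.h>       /* assumed libev header */
  #include <stdlib.h>

  struct my_conn
  {
    ev_io watcher;      /* first member, so the cast below is valid */
    int request_count;  /* arbitrary per-connection user data follows */
  };

  static void conn_cb (struct ev_loop *loop, ev_io *w, int revents)
  {
    /* recover the enclosing struct without any extra void* */
    struct my_conn *conn = (struct my_conn *)w;

    ++conn->request_count;
  }

  static void watch_fd (struct ev_loop *loop, int fd)
  {
    struct my_conn *conn = calloc (1, sizeof *conn);

    ev_io_init (&conn->watcher, conn_cb, fd, EV_READ);
    ev_io_start (loop, &conn->watcher);
  }

No pointer is stored or chased: the watcher address *is* the user-data
address.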

> I assert that the cost of a sizeof(void*) to
> point to the relevant part of your structure (which can be
> nearby...still reasonable cache locality) is not too high.

It's higher than necessary, and more complex than necessary.

> > All callbacks will be called with EV_ERROR when an error occurs. And yes,
> > if you don't do error handling and endlessly retry the same operation in a
> > loop, you run into problems.
> > 
> > But as that is an obvious programming bug, I don't see any problem here.
> 
> Hmm. Let me introduce a use case: an event-driven program which must not
> fail. init or similar.
> 
> I worked on such a program recently. If it were unreliable, you would
> have to send the system back to the factory for repair (i.e., flashing
> new software). On ENOMEM, it would basically sleep and retry. This was
> quite successful (memory could be temporarily consumed by network
> buffers, etc, which cleared itself up after a while).

This could be solved more easily by having a generic callback that's called in
such special conditions.

The alternative of having every event_add etc. call surrounded by checking
code is not realistic, especially if some of the code is not easily
changeable (such as the libevent code itself, which doesn't check for such
errors!).

> For this program, it's important to know more than that an error has
> occurred. EV_ERROR is totally inadequate.

You already know more: you know which watcher caused it, or which watcher is
responsible.

> What is my program supposed to do? It can't distinguish them, and the
> correct behavior in each of these conditions is totally different. Also,
> in the program I'm thinking of, "libev chose to kill this file
> descriptor" probably means a network link just went down. Ergh.

Killing the file descriptor might indeed be harsh. All that libev does now is
stop the event watcher and signal an error.

> > Besides, if you cannot malloc the few bytes ev_once requires you need a
> > *lot* of good error handling code to continue sensibly.
> 
> Yes, as I've mentioned above, there are programs for which this level of
> error handling is necessary.

I agree, but libevent already doesn't provide this, so it's moot to compare
this case to libevent. However, it would be very interesting to actually talk
about an API that solves this problem.

For example, on ENOMEM, one could have a callback

Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-05 Thread Marc Lehmann
On Mon, Nov 05, 2007 at 09:50:54AM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
> >leaves me unclear on what happened and no reasonable way to recover (e.g.
> >in the case of EBADF in select).
> 
> No, as I said before, libevent retains errno. I've attached a 
> demonstration program. If it doesn't return 0 when run against your 
> libevent emulation layer, your code is broken.

Libev actually pinpoints the faulty fd and tells the callback, which
belongs to the part of the code that actually created the event watcher in
the first place.

libevent doesn't allow you to do that. So even when errno tells you EBADF,
you can do nothing about it, because you cannot identify the event watcher
responsible for it.



Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-05 Thread Marc Lehmann
On Sun, Nov 04, 2007 at 09:47:59PM -0800, Christopher Layne <[EMAIL PROTECTED]> 
wrote:
> > What is my program supposed to do? It can't distinguish them, and the
> > correct behavior in each of these conditions is totally different. Also,
> > in the program I'm thinking of, "libev chose to kill this file
> > descriptor" probably means a network link just went down. Ergh.
> 
> Great point. It should back away and leave things alone - notifying
> to the caller (or whoever else is listening) "this is not acceptable to
> me - I suggest you fix it - because I won't" (aka unix way).

The "caller" in this case is the callback, however, because its the
continuation of whatever code requested to watch e.g. a bad fd in the first
place.

Also, there is no way around an error status for the callback; one simply
*must* provide a sensible status to the callback when something goes
wrong, because that might be a long time after the watcher was added.

So instead of notifying both the caller of the start function and the
callback later, I don't see why notifying the callback in all cases would be
worse; in fact, it makes handling errors easier.

And yes, if you don't check for errors in your callback you are doomed.
Returning errors to the caller of the event only requires additional checking
code, and I have yet to see any code that actively checks for problems while
calling event_once or event_add.

But most callbacks do the sensible thing when called with EV_ERROR even when
they don't care, because in the case of I/O errors they will try to read or
write and see that it doesn't work.
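For illustration, a sketch of a callback that does check explicitly;
EV_ERROR as a bit in revents follows the libev announcement, and the
stop-and-close policy shown is just one possible choice:

  #include <ev.h>       /* assumed libev header */
  #include <unistd.h>

  static void io_cb (struct ev_loop *loop, ev_io *w, int revents)
  {
    if (revents & EV_ERROR)
      {
        /* the library has already stopped the watcher; decide locally
           what a broken fd means for this part of the program */
        close (w->fd);
        return;
      }

    if (revents & EV_READ)
      {
        char buf[4096];

        if (read (w->fd, buf, sizeof buf) <= 0)
          {
            /* EOF or read error: same cleanup path either way */
            ev_io_stop (loop, w);
            close (w->fd);
          }
      }
  }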

> > No, on error your library may have muddied the water already by screwing
> > with the file descriptors. libevent also makes errors clearer by simply
> > returning error from the failed function (I'm thinking of event_once()
> > vs ev_once() here.)

Yes, closing is not a good idea; reporting an error and removing the event
from the fd set is enough.

However, I still maintain that concentrating error handling in the one place
where it's likely to be present already, as opposed to reporting errors to
places where nobody cares (show me the code that checks for errors after
making event_(once|del|add) calls), is the right thing to do.

Whether one reports more detailed errors is then another question, and
might be solved as easily as giving errno a defined value to use.



Re: Heap-based timer implementation (was Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent)

2007-11-05 Thread Scott Lamb

Nick Mathewson wrote:

> In case anybody here isn't watching the commits list [1], Niels
> applied a patch from Maxim Yegorushkin to make the timeout
> implementation based on a heap, rather than a RB-tree.  Thanks, Maxim!
> 
> This will probably need some testing; if anybody wants to play around
> with it, just check out the SVN trunk [2] and send in any bugs you run
> into, or any improvements that we should make.


Very nice. I hope to see other internal improvements from libev make
their way into the libevent codebase one at a time in patch form.


And then, perhaps, API changes considered? I have to admit, I have a few
complaints about the libevent API myself. Maybe all API changes should be
discussed, and you/Niels can determine if it's worth finding a way to
make a (hopefully backwards-compatible) transition?


Best regards,
Scott


Heap-based timer implementation (was Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent)

2007-11-05 Thread Nick Mathewson
On Sun, Nov 04, 2007 at 05:19:36PM -0800, Christopher Layne wrote:
> On Sun, Nov 04, 2007 at 05:04:25PM -0800, Scott Lamb wrote:
> > > * timers are managed by a priority queue (O(1) for important operations
> > >   as opposed to O(log n) in libevent, also resulting in much simpler 
> > > code).
> > 
> > In terms of concrete data types, you appear to have used a binary heap?
> > So by "important operations" you mean removal, correct? Insertion is
> > still O(log n)? The asymptotic behavior is no different, then, as
> > insertion happens at least as often as removal.
> 
> Not to put on my O-face, but binary heap insert is *average*
> O(1). There should be a performance win for libevent, when it comes
> to timer checking, as using a heap will also be O(1) for heap_min()
> - which will benefit pending timer calculation code. However, early
> delete w/ pending timer will need some rigging or tombstone games. I
> wouldn't be surprised that, in a case where one is consistently
> resetting timers (think read/write before x time passes) and
> re-adding said event, that in the end it will be the same amount of
> CPU time.

In case anybody here isn't watching the commits list [1], Niels
applied a patch from Maxim Yegorushkin to make the timeout
implementation based on a heap, rather than a RB-tree.  Thanks, Maxim!

This will probably need some testing; if anybody wants to play around
with it, just check out the SVN trunk [2] and send in any bugs you run
into, or any improvements that we should make.

[1] The list is [EMAIL PROTECTED] ; go to
  https://lists.sourceforge.net/lists/listinfo/levent-commits
if you want to subscribe.

[2] The subversion repository is at
  https://levent.svn.sourceforge.net/svnroot/levent
To check out trunk, make sure you have subversion installed, and type
 svn co https://levent.svn.sourceforge.net/svnroot/levent/trunk 
libevent-trunk

yrs,
-- 
Nick Mathewson




Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-05 Thread Scott Lamb

Marc Lehmann wrote:

> On Sun, Nov 04, 2007 at 09:56:44PM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
> > > returning from event_loop, leaving the app unclear on what has happened
> > > and what to do.
> > > 
> > > In any case, you can get the same behaviour as libevent by calling unloop
> > > in case of an error, so the interface is strictly more powerful.
> > 
> > Another reason this is undesirable: to get the libevent behavior, I'd
> > have to have this logic in *every callback* to get the same behavior.
> 
> Hmm, one could actually roll this into the libevent compatibility
> layer. I'll think about this (didn't occur to me before).
> 
> Note, however, that I think the libevent interface is undesirable, as it
> leaves me unclear on what happened and no reasonable way to recover (e.g.
> in the case of EBADF in select).

No, as I said before, libevent retains errno. I've attached a
demonstration program. If it doesn't return 0 when run against your
libevent emulation layer, your code is broken.

> Also, with libevent, you would need similar logic around each call to
> event_add/del and so on, because those can all fail in libevent, so it's
> just a different place, really.

It's not just a different place - it's where the failure happened.

> (And I do think I will provide an oom-error handler for this specific
> case, as it is about the only generic error that isn't specific to some
> watcher).
> 
> Thanks for these ideas,



#include <event.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

const int BADFD = 64;

static void
cb(int fd, short events, void *cbarg)
{
}

int
main(int argc, char **argv)
{
    struct event ev;
    int sts;
    const char *method;

    /* Initialize, hopefully with select method */
    putenv("EVENT_NOKQUEUE=yes");
    putenv("EVENT_NODEVPOLL=yes");
    putenv("EVENT_NOPOLL=yes");
    putenv("EVENT_NOEPOLL=yes");
    putenv("EVENT_NORTSIG=yes");
    putenv("EVENT_NODEVPORT=yes");
    event_init();
    method = event_get_method();
    if (strcmp(method, "select") != 0)
        return EXIT_FAILURE;

    /* add a bad descriptor */
    close(BADFD); /* just to be sure */
    event_set(&ev, BADFD, EV_READ|EV_PERSIST, cb, NULL);
    sts = event_add(&ev, NULL);
    if (sts != 0)
        return EXIT_FAILURE;

    /* try dispatching; select() should fail with EBADF */
    sts = event_dispatch();
    if (sts != 0) {
        perror("event_dispatch");
        return (errno == EBADF) ? EXIT_SUCCESS : EXIT_FAILURE;
    }

    return EXIT_FAILURE;
}


Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-05 Thread Marc Lehmann
On Sun, Nov 04, 2007 at 10:04:05PM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
> Realistically, I think unit tests and bug reports/workarounds are about
> the only reason to blacklist specific event dispatchers. select() sucks,
> but well, that's why it's at the bottom of the list, right?

There is a hardcoded "priority" list that only incidentally has the same
order as the constants, but yes, select comes last, even if its usually
faster than poll :)

> If you are really concerned about syscall overhead (you mentioned it in
> another context, though I don't think it should be a major concern), you
> might want to prefer poll() to epoll() or kqueue() if you're going to

Yes, I thought about dynamically using select for small (~50 or so) fd sets,
as select is faster than epoll in many common small scenarios, but that is
mostly an epoll vs. poll issue.

For kqueue, I can't quite see the overhead, as kqueue has the same number
of syscalls per iteration as select/poll (namely one). The syscall is more
complicated, but is likely faster than poll in all cases (I have zero
benchmark data on that; maybe the BSD people fucked it up, but likely,
they didn't :)

> But that's a situation where you'd want a little more coarseness: low
> startup overhead methods vs. low per-socket overhead ones. Resorting to
> naming specific event mechanisms seems undesirable in terms of older
> programs taking advantage of newer event facilities.

It is meant strictly as a benchmarking, bug-workaround and tuning mechanism:
less about programmatically deciding which is best and more about offering
the user a way to influence the choice based on e.g. a config file, so one
can configure your mythical big app to use select because it performs better
in practice.

Choice isn't evil, as long as there is an obvious default choice if you
don't care (which is 0/EV_METHOD_AUTO).
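A sketch of what such user-driven configuration could look like; the
EVMETHOD_* constants follow the naming used earlier in this thread, but the
exact set of constants and the ev_loop_new signature are assumptions:

  #include <ev.h>       /* assumed libev header */
  #include <string.h>

  /* map a config-file value to an allowed-methods bitmask;
     0 (EV_METHOD_AUTO) means "let the library pick" */
  static unsigned methods_from_config (const char *value)
  {
    if (!value)                    return 0;
    if (!strcmp (value, "select")) return EVMETHOD_SELECT;
    if (!strcmp (value, "poll"))   return EVMETHOD_POLL;
    if (!strcmp (value, "epoll"))  return EVMETHOD_EPOLL;
    return 0;
  }

  static struct ev_loop *loop_from_config (const char *value)
  {
    return ev_loop_new (methods_from_config (value));
  }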



Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-05 Thread Marc Lehmann
On Sun, Nov 04, 2007 at 09:56:44PM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
> > returning from event_loop, leaving the app unclear on what has happened
> > and what to do.
> > 
> > In any case, you can get the same behaviour as libevent by calling unloop
> > in case of an error, so the interface is strictly more powerful.
> 
> Another reason this is undesirable: to get the libevent behavior, I'd
> have to have this logic in *every callback* to get the same behavior.

Hmm, one could actually roll this into the libevent compatibility
layer. I'll think about this (didn't occur to me before).

Note, however, that I think the libevent interface is undesirable, as it
leaves me unclear on what happened and no reasonable way to recover (e.g.
in the case of EBADF in select).

Also, with libevent, you would need similar logic around each call to
event_add/del and so on, because those can all fail in libevent, so its just
a different place, really.

(And I do think I will provide an oom-error handler for this specific
case, as it is about the only generic error that isn't specific to some
watcher).

Thanks for these ideas,

-- 
The choice of a   Deliantra, the free code+content MORPG
  -==- _GNU_  http://www.deliantra.net
  ==-- _   generation
  ---==---(_)__  __   __  Marc Lehmann
  --==---/ / _ \/ // /\ \/ /  [EMAIL PROTECTED]
  -=/_/_//_/\_,_/ /_/\_\
___
Libevent-users mailing list
Libevent-users@monkey.org
http://monkey.org/mailman/listinfo/libevent-users


Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Scott Lamb
Scott Lamb wrote:
>>> * What's your use case for ev_loop_new() and ev_loop_default()'s bitmask
>>> of allowed implementations?
>> libevent's unconditional use of getenv raised concerns with me and
>> apparently some users on this list, too, so this is one way to disable
>> this (EVMETHOD_ANY instead of EVMETHOD_ALL). Also, I am sure some apps
>> want control over the allowed event loops, e.g. to rule out select because
>> it is known to be not working for them.
> 
> Ahh, I'd have to agree that getenv() seems sketchy. But with the
> interface you've supplied, you can't simply blacklist select() without

Hmm. I was going to check this out more and forgot before sending the email.

Realistically, I think unit tests and bug reports/workarounds are about
the only reason to blacklist specific event dispatchers. select() sucks,
but well, that's why it's at the bottom of the list, right?

If you are really concerned about syscall overhead (you mentioned it in
another context, though I don't think it should be a major concern), you
might want to prefer poll() to epoll() or kqueue() if you're going to
use a loop a couple times on a small number of fds then throw it away.
But that's a situation where you'd want a little more coarseness: low
startup overhead methods vs. low per-socket overhead ones. Resorting to
naming specific event mechanisms seems undesirable in terms of older
programs taking advantage of newer event facilities.


Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Scott Lamb
Marc Lehmann wrote:
>> That seems rather undesirable for many applications.
> 
> Well, its arguably better than libevents behaviour, which is simply
> returning from event_loop, leaving the app unclear on what has happened
> and what to do.
> 
> In any case, you can get the same behaviour as libevent by calling unloop
> in case of an error, so the interface is strictly more powerful.

Another reason this is undesirable: to get the libevent behavior, I'd
have to have this logic in *every callback* to get the same behavior.



Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Scott Lamb
Marc Lehmann wrote:
> On Sun, Nov 04, 2007 at 04:09:08PM -0800, Scott Lamb <[EMAIL PROTECTED]> 
> wrote:
>> Have you seen the new Linux timerfd API? Where available, you can wait
>> for CLOCK_MONOTONIC and CLOCK_REALTIME events independently. Beats
>> heuristics,
>
> How? I still need to detect time jumps. If my ev_periodic is to be scheduled
> every minute, on the minute, and somebody resets the time, the timer needs to
> be rescheduled. With timerfd I would need to detect that and remove/insert
> the timer again.

My impression was you could create such a timer like this:

struct itimerspec it = {
.it_value= { .tv_sec = 0,  .tv_nsec = 0 },
.it_interval = { .tv_sec = 60, .tv_nsec = 0 },
};

sts = timerfd(my_timerfd, CLOCK_REALTIME, TFD_TIMER_ABSTIME, &it);

and any adjustments are the kernel's responsibility. It is aware of the
adjtime() calls and such as they happen.

I have not tested this myself.
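For reference, the interface that eventually shipped in Linux splits this
into timerfd_create()/timerfd_settime(); a sketch of the same once-a-minute
absolute timer (equally untested here, and written against the final API
rather than the syscall discussed above):

  #include <sys/timerfd.h>
  #include <stdint.h>
  #include <time.h>
  #include <unistd.h>

  static int minute_timer (void)
  {
    int tfd = timerfd_create (CLOCK_REALTIME, 0);
    struct timespec now;

    clock_gettime (CLOCK_REALTIME, &now);

    struct itimerspec it = {
      /* first expiry on the next minute boundary; it_value must be
         nonzero, or the timer stays disarmed */
      .it_value    = { .tv_sec = now.tv_sec - now.tv_sec % 60 + 60 },
      .it_interval = { .tv_sec = 60 },
    };

    timerfd_settime (tfd, TFD_TIMER_ABSTIME, &it, NULL);

    /* the fd becomes readable on expiry; read() yields the number of
       expirations since the last read as a uint64_t */
    return tfd;
  }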

> (You might have no use for periodics for timeouts, but they are designed
> to solve this very problem :)
> 
> Besides, having a syscall per timer (or even per timer change) would be an
> enormous overhead for many workloads.

Really? Syscalls aren't really *that* expensive in and of themselves,
and it'd be a rare application that makes an order of magnitude more
timerfd() calls than read()/write() calls.

> 
>> and detecting time jumps sound like introducing a lot of extra
>> timeouts.
> 
> I don't quite see how that would happen with either timer event currently in
> libev, unless the user code forces it.
> 
>> I'd hate to see libev(ent)? show up on PowerTOP after just getting rid
>> of the 5-second timeout.
> 
> No idea what PowerTOP is, but sporadic servers might use a lot of CPU
> without the kernel ever realising it :)

PowerTOP is a Linux tool for seeing the top causes of timeouts. Recently
people are trying to make better use of processor idle states, so it's
important to not fire timeouts unnecessarily.


Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Christopher Layne
On Sun, Nov 04, 2007 at 09:37:52PM -0800, Scott Lamb wrote:
> For this program, it's important to know more than that an error has
> occurred. EV_ERROR is totally inadequate. You're using it for several
> different cases. I spotted at least these three:
> 
> * malloc() failed in ev_once - transient runtime error.
> * select() failed with ENOMEM, so libev chose to kill this file
> descriptor and now is notifying userspace.
> * bad file descriptor - probably a logic error.
> 
> What is my program supposed to do? It can't distinguish them, and the
> correct behavior in each of these conditions is totally different. Also,
> in the program I'm thinking of, "libev chose to kill this file
> descriptor" probably means a network link just went down. Ergh.

Great point. It should back away and leave things alone - notifying
the caller (or whoever else is listening) "this is not acceptable to
me - I suggest you fix it - because I won't" (aka the unix way).

> No, on error your library may have muddied the water already by screwing
> with the file descriptors. libevent also makes errors clearer by simply
> returning error from the failed function (I'm thinking of event_once()
> vs ev_once() here.)

Agreed. A library should touch as little as necessary to get the job done.

-cl


Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Scott Lamb
Marc Lehmann wrote:
> On Sun, Nov 04, 2007 at 05:04:25PM -0800, Scott Lamb <[EMAIL PROTECTED]> 
> wrote:
>>> * multiple watchers can wait for the same event, there is no limitation
>>>   to one or two watchers for signals and io.
>> Could you give me an example of where that is important?
> 
> Mostly in environments using some form garbage collection. For example,
> this idiom is common in Perl:
> 
>$event = EV::io $fd, ...
> 
> If $event happens to contain an old watcher and $fd happens to refer to
> the same fd as that old watcher, this will lead to problems as the
> watchers are both alive for a short time.
> 
> There is actually a lot of code that relies on this just working, and the
> only other event loop I know that has a problem with this is Tk.

Ugh, I'd argue that idiom is broken. But if the support's free, I guess
it doesn't matter.

>>> * there are two types of timers, based on real time differences and wall
>>>   clock time (cron-like). timers can also be repeating and be reset at
>>>   almost no cost (for idle timeouts used by many network servers). time 
>>> jumps
>>>   get detected reliably in both directions with or without a monotonic 
>>> clock.
>> (See my other mail about Linux's new timerfd facility.)
> 
> (timerfd unfortunately makes little sense for this, as it adds overhead
> but I can't see the compelling advantage, as one will still run into the
> same time jump problems with periodic timers).
> 
>> Nice; repeating and absolute timers have come up several times before, too.
> 
> This was something I always missed in event loops. That is, some event
> loops have one timer type, some the other, but never both.
> 
>>> * timers are managed by a priority queue (O(1) for important operations
>>>   as opposed to O(log n) in libevent, also resulting in much simpler code).
>> In terms of concrete data types, you appear to have used a binary heap?
>> So by "important operations" you mean removal, correct?
> 
> removal: O(log n)
> insertion: O(log n)
> find next: O(1)
> 
>> still O(log n)? The asymptotic behavior is no different, then, as
>> insertion happens at least as often as removal.
> 
> Yes, but:
> 
> a) finding the next timer is a constant time issue
> b) a red-black tree is more than three times as slow
> 
> (see the updated benchmark at http://libev.schmorp.de/bench.html,
> especially the difference between the first (no timers) and the second
> examples (timers in use))

Ahh, very nice benchmarks.

> 
>>> * I added idle watchers, pid watchers and hook watchers into the event loop,
>>>   as is required for integration of other event-based libraries, without
>>>   having to force the use of some construct around event_loop.
>> Pardon my ignorance, but what are hook watchers?
> 
> If you want to plug in other event-based libraries into the event loop you
> need to be able to hook into the event loop. This is what those
> watcher types provide.
> 
> The alternative would be to write your own event_loop with EV_NONBLOCK, but
> that isn't modular; that is, if you have two different software modules
> having their own event_loop you *must* use, you lose. Prepare/check watchers
> solve this problem nicely.
> 
> A number of event loops have them, and they are useful for other things,
> such as transparently integrating coroutine packages etc.
> 
> Its not a killer feature, just very very useful in some cases.
> 
>> pid watchers I assume to be a fancy SIGCHLD handler?
> 
> Yes.
> 
>> That's a potentially useful feature, but why would it require a
>> construct around event_loop?
> 
> I don't understand that; there is no construct around event_loop, it's
> handled completely separately.

My question was in response to your "I added idle watchers, pid watchers
and hook watchers into the event loop, as is required for integration of
other event-based libraries, without having to force the use of some
construct around event_loop."

> The reason it exists is to allow sharing of this potentially unsharable
> resource. For example, poll and select let you do "everything" (with fds),
> but you can of course only have one component per (single-threaded) process
> using them, as they are blocking.
> 
> The same thing is true for signals: you can't share them with sigaction, as
> sigaction only allows one user.
> 
> And the same thing is true for sigchld.

Yes, I could see why sharing SIGCHLD would be useful. I was thinking of
this when asking above when you want to have multiple watchers for the
same event, but this was the only example I could think of off-hand, so
it seemed like two features to address the same use case.

>> A few notes:
>>
>> * what is the purpose of EV_COMMON?
> 
> Allowing customised event watchers. If you are concerned, treat it as an
> internal symbol. Its use is documented here:
> http://cvs.schmorp.de/libev/README.embed
> 
>> From first glance, I'm concerned that it could not be used properly
>> unless libev.so and all callers are compiled with the same flags, which

Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread William Ahern
On Sun, Nov 04, 2007 at 07:56:36PM -0800, William Ahern wrote:
> On Mon, Nov 05, 2007 at 03:29:34AM +0100, Marc Lehmann wrote:
> > On Sun, Nov 04, 2007 at 06:00:56PM -0800, Christopher Layne <[EMAIL 
> > PROTECTED]> wrote:
> 
> > > Which isn't really that big a deal as similar time is spent in the 
> > > present RB
> > > implementation as it is.
> > 
> > No, I still maintain that the RB tree is slower because its rebalancing
> > operations are frequent and very complex. Heap code is trivial. Yes, they
> > have the same asymptotic growth behaviour, but the practical cases are
> > all very far away from infinity, and the hidden C in O(log n) is quite
> > important.
> > 
> 
> RB balancing isn't that complex. Maybe you're thinking of AVL trees?
> 
> The problem with using heaps in network software is you must be careful
> adversaries cannot dictate any of the parameters. Certainly when you're

Ignore this. I'm confusing heaps with hashes.




Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread William Ahern
On Mon, Nov 05, 2007 at 03:29:34AM +0100, Marc Lehmann wrote:
> On Sun, Nov 04, 2007 at 06:00:56PM -0800, Christopher Layne <[EMAIL 
> PROTECTED]> wrote:

> > Which isn't really that big a deal as similar time is spent in the present 
> > RB
> > implementation as it is.
> 
> No, I still maintain that the RB tree is slower because its rebalancing
> operations are frequent and very complex. Heap code is trivial. Yes, they
> have the same asymptotic growth behaviour, but the practical cases are
> all very far away from infinity, and the hidden C in O(log n) is quite
> important.
> 

RB balancing isn't that complex. Maybe you're thinking of AVL trees?

The problem with using heaps in network software is you must be careful
adversaries cannot dictate any of the parameters. Certainly when you're
dealing with timers triggered by I/O latencies you've at least theoretically
exposed yourself to complexity attacks. (This is a guess based on intuition;
I haven't yet looked at your code.)

Nice thing about RB trees is you get a cap on worst-case performance, smooth
cost spreading (i.e. no rehashing; not that it would be needed here), and
don't have to worry about pathological or malicious scenarios.

Sure, you pay a small price. But for peace of mind it's well worth it.



Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Christopher Layne
On Mon, Nov 05, 2007 at 02:46:36AM +0100, Marc Lehmann wrote:
> On Sun, Nov 04, 2007 at 04:09:08PM -0800, Scott Lamb <[EMAIL PROTECTED]> 
> wrote:
> > Have you seen the new Linux timerfd API? Where available, you can wait
> > for CLOCK_MONOTONIC and CLOCK_REALTIME events independently. Beats
> > heuristics,
>
> How? I still need to detect time jumps. If my ev_periodic is to be scheduled
> every minute, on the minute, and somebody resets the time the timer needs to
> be rescheduled. With timerfd I would need to detetc that and remove/insert
> the timer again.
> 
> (You might have no use for periodics for timeouts, but they are designed
> to solve this very problem :)

timerfd() has good and redundant points... as far as I can tell, it's an
inversion of user<->kernel code that achieves the same goal.

http://lwn.net/Articles/245688/

-cl


Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Marc Lehmann
On Sun, Nov 04, 2007 at 06:00:56PM -0800, Christopher Layne <[EMAIL PROTECTED]> 
wrote:
> My point was that an event_del() on an event which has been called before
> its timer has expired *or* an event_add() with a new timer will require
> re-heapifying at least part of the timer heap.

Hmm, I do not see why this would ever be the case. Removing a timer that
hasn't expired yet might actually be much cheaper than removing the one at
the top, because it isn't at the root, so the n in the O(log n) is
potentially much smaller.

> Having an intrinsically sorted collection of elements and then altering
> a value within the middle of said collection before it has sifted to the
> top will require a reheap from that point on.

Not sure what you mean by "re-heap", but the operation you describe is the
same O(log n) operation as for removing elements elsewhere in the heap. And
given that you take the last timer and insert it at that point, removing a
timer that hasn't expired is usually faster.

> Which isn't really that big a deal as similar time is spent in the present RB
> implementation as it is.

No, I still maintain that the RB tree is slower because its rebalancing
operations are frequent and very complex. Heap code is trivial. Yes, they
have the same asymptotic growth behaviour, but the practical cases are
all very far away from infinity, and the hidden C in O(log n) is quite
important.

(Again, please refer to the benchmark at
http://libev.schmorp.de/bench.html which directly contrasts behaviour
of libevent and libev with timers and no timers, especially look at the
difference in runtime when timers are being used).

> I'm all for a binary-heap rather than a RB-tree personally. I think the
> performance will benefit primarily for heap_min() (which is done before every
> re-entry into the event backend to reset the max-wait timer for 
> epoll/poll/select,
> etc).

I thought so, too, until recently but in fact the event loop is run pretty
rarely (except in benchmarks), and if you handle hundreds of clients
within each run (very typical of busy servers), then you can have hundreds
of timer updates, and these do show up in timing measurements.



Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Marc Lehmann
On Mon, Nov 05, 2007 at 02:42:16AM +0100, Marc Lehmann <[EMAIL PROTECTED]> 
wrote:
> However, libev has an interface (ev_timer_again) that incrementally
> updates the heap. Also, for repeating timers in general, there is no
> removal/insertion but only an incremental update.

Oh, and sorry for always forgetting stuff... the rationale for supporting
this operation is that I think it's pretty important to support timers that
get reset all the time without usually firing (e.g. idle read timeouts on
a normally busy tcp connection).
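A sketch of that idiom; the ev_timer_again semantics follow the description
in this message, while the callback signatures and setup code are
assumptions:

  #include <ev.h>       /* assumed libev header */

  static ev_timer timeout_watcher;

  /* fires only if the connection stayed idle for the full 60 seconds */
  static void timeout_cb (struct ev_loop *loop, ev_timer *w, int revents)
  {
    /* tear the connection down */
  }

  /* called whenever data arrives: push the idle timeout forward */
  static void read_cb (struct ev_loop *loop, ev_io *w, int revents)
  {
    /* ... handle the data ... */

    ev_timer_again (loop, &timeout_watcher);  /* one incremental heap update */
  }

  static void arm_idle_timeout (struct ev_loop *loop)
  {
    ev_timer_init (&timeout_watcher, timeout_cb, 0., 60.);  /* repeat = 60s */
    ev_timer_again (loop, &timeout_watcher);
  }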

The other rationale is that it's trivial to implement with a heap, because you
already have all the code to do it:

  /* incremental timer update in ev_timer_again: */

  w->at = mn_now + w->repeat;
  downheap (timers, timercnt, w->active - 1);

  /* compare with timer removal: */

  timers [w->active - 1] = timers [--timercnt];
  downheap (timers, timercnt, w->active - 1);

In such a case (updating a timer) the event will simply wander down from its
current place in the heap to its new place.

I am not sure whether this can be done with an rb-tree (likely), but I am
sure that I do not want to have to maintain the code that does that :)

(In any case, see the timer benchmark for a good comparison of heap vs.
red-black-tree).
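For readers unfamiliar with the operation, a generic sketch of downheap over
an array-based binary min-heap of timers (0-based indexing; this is
illustrative, not libev's actual code):

  typedef struct { double at; /* absolute expiry time */ } timer;

  /* restore the heap property by sifting heap[i] down towards the leaves;
     O(log n) worst case, O(1) if the entry is already in place */
  static void downheap (timer *heap, int cnt, int i)
  {
    for (;;)
      {
        int child = 2 * i + 1;            /* left child in 0-based indexing */

        if (child >= cnt)
          break;

        /* pick the smaller of the two children */
        if (child + 1 < cnt && heap[child + 1].at < heap[child].at)
          ++child;

        if (heap[i].at <= heap[child].at) /* heap property already holds */
          break;

        timer tmp   = heap[i];            /* swap and continue downwards */
        heap[i]     = heap[child];
        heap[child] = tmp;

        i = child;
      }
  }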



Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Marc Lehmann
On Mon, Nov 05, 2007 at 02:36:27AM +0100, Marc Lehmann <[EMAIL PROTECTED]> 
wrote:
> > > * multiple watchers can wait for the same event, there is no limitation
> > >   to one or two watchers for signals and io.
> > 
> > Could you give me an example of where that is important?
> 
> There is actually a lot of code that relies on this just working, and the
> only other event loop I know that has a problem with this is Tk.

I forgot to mention that the resulting code is likely a tiny bit faster, and
certainly way less complex, than the multiple-case logic employed in some
libevent backends:

   /* detecting whether evread or evwrite are wanted is not shown */
   if (evread != NULL && !(evread->ev_events & EV_PERSIST))
   event_del(evread);
   if (evwrite != NULL && evwrite != evread &&
   !(evwrite->ev_events & EV_PERSIST))
   event_del(evwrite);

This juggling of two slots for read/write didn't feel right. The code to
check which watchers want which events and schedule them in ev basically
looks like this:

  for (w = anfd->head; w; w = ((WL)w)->next)
if ((ev = w->events & events))
  event (EV_A_ (W)w, ev);

Also, some backends do reference counting in libevent, some don't, and I
don't like completely different semantics unless there is a good technical
reason for them (epoll cannot easily detect closed fds, for example, a big
problem, but at least it's something that cannot easily be improved upon).

The goal obviously wasn't to make this ultra-efficient (it's a
singly-linked list), but to make it possible on a small scale without
mysteriously failing on some backends.
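A sketch of the data structure this implies: one singly-linked list of
watchers per fd, so any number of watchers can share an fd. The names and
flag values are illustrative stand-ins, not libev's real structs:

  enum { EV_READ = 1, EV_WRITE = 2 };   /* stand-ins for the real flags */

  typedef struct watcher
  {
    struct watcher *next;               /* next watcher interested in this fd */
    int events;                         /* EV_READ and/or EV_WRITE */
    void (*cb)(struct watcher *w, int revents);
  } watcher;

  typedef struct
  {
    watcher *head;                      /* all watchers registered for one fd */
  } anfd;

  /* dispatch: every watcher whose mask overlaps the ready events fires */
  static void fd_event (anfd *a, int revents)
  {
    watcher *w;

    for (w = a->head; w; w = w->next)
      if (w->events & revents)
        w->cb (w, w->events & revents);
  }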



Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Christopher Layne
On Mon, Nov 05, 2007 at 02:42:16AM +0100, Marc Lehmann wrote:
> On Sun, Nov 04, 2007 at 05:19:36PM -0800, Christopher Layne <[EMAIL 
> PROTECTED]> wrote:
> > Not to put on my O-face, but binary heap insert is *average* O(1). There
> > should be a performance win for libevent, when it comes to timer checking,
> > as using a heap will also be O(1) for heap_min() - which will benefit 
> > pending
> > timer calculation code. However, early delete w/ pending timer will need 
> > some
> > rigging or tombstone games. I wouldn't be surprised that, in a case where 
> > one
> > is consistently resetting timers (think read/write before x time passes) and
> > re-adding said event, that in the end it will be the same amount of CPU 
> > time.
> 
> No, because a red-black tree is much more complex in management, so even if
> both operations are O(log n), the heap usually wins hands down.
> 
> Both insertion and removal are of the same complexity, on average, in a
> heap, of the data is random.
> 
> However, libev has an interface (ev_timer_again) that incrementally
> updates the heap. Also, for repeating timers in general, there is no
> removal/insertion but only an incremental update.

Right, which is not due to an inherent advantage of heap vs. rbtree, but due
to our luck in time always going in one direction. I believe similar code was
present in libevent as it was, so this in itself should be no different.

> Regarding pending events, this is solved in the same way for all events
> (not unlike how libevent does it): There is only one place where a pending
> event can be, and that is on its associated pending list.
> 
> When an event gets stopped, and is found pending, it will be removed form
> the pending queue by nulling out its pointer.

My point was that an event_del() on an event which has been called before its
timer has expired *or* an event_add() with a new timer will require
re-heapifying at least part of the timer heap. Having an intrinsically sorted
collection of elements and then altering a value within the middle of said
collection before it has sifted to the top will require a re-heap from that
point on.

Which isn't really that big a deal as similar time is spent in the present RB
implementation as it is.

I'm all for a binary heap rather than an RB-tree personally. I think the
performance will benefit primarily for heap_min() (which is done before every
re-entry into the event backend to reset the max-wait timer for
epoll/poll/select, etc).

-cl


Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Marc Lehmann
On Sun, Nov 04, 2007 at 04:09:08PM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
> Have you seen the new Linux timerfd API? Where available, you can wait
> for CLOCK_MONOTONIC and CLOCK_REALTIME events independently. Beats
> heuristics,

How? I still need to detect time jumps. If my ev_periodic is to be scheduled
every minute, on the minute, and somebody resets the time, the timer needs to
be rescheduled. With timerfd I would need to detect that and remove/insert
the timer again.

(You might have no use for periodics for timeouts, but they are designed
to solve this very problem :)

Besides, having a syscall per timer (or even per timer change) would be an
enormous overhead for many workloads.

> and detecting time jumps sound like introducing a lot of extra
> timeouts.

I don't quite see how that would happen with either timer event currently in
libev, unless the user code forces it.

> I'd hate to see libev(ent)? show up on PowerTOP after just getting rid
> of the 5-second timeout.

No idea what PowerTOP is, but sporadic servers might use a lot of CPU
without the kernel ever realising it :)



Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Marc Lehmann
On Sun, Nov 04, 2007 at 05:19:36PM -0800, Christopher Layne <[EMAIL PROTECTED]> 
wrote:
> Not to put on my O-face, but binary heap insert is *average* O(1). There
> should be a performance win for libevent, when it comes to timer checking,
> as using a heap will also be O(1) for heap_min() - which will benefit pending
> timer calculation code. However, early delete w/ pending timer will need some
> rigging or tombstone games. I wouldn't be surprised that, in a case where one
> is consistently resetting timers (think read/write before x time passes) and
> re-adding said event, that in the end it will be the same amount of CPU time.

No, because a red-black tree is much more complex in management, so even if
both operations are O(log n), the heap usually wins hands down.

Both insertion and removal are of the same complexity, on average, in a
heap, if the data is random.

However, libev has an interface (ev_timer_again) that incrementally
updates the heap. Also, for repeating timers in general, there is no
removal/insertion but only an incremental update.

Regarding pending events, this is solved in the same way for all events
(not unlike how libevent does it): There is only one place where a pending
event can be, and that is on its associated pending list.

When an event gets stopped and is found pending, it will be removed from
the pending queue by nulling out its pointer.
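
A sketch of the mechanism, with hypothetical structures (the real code
differs in detail):

    #define NPENDING 1024

    struct watcher { int pending; /* 1-based slot, 0 = not pending */ };
    struct pending { struct watcher *w; int revents; };

    static struct pending pendings[NPENDING];

    static void clear_pending (struct watcher *w)
    {
      if (w->pending)
        {
          pendings[w->pending - 1].w = 0;  /* dispatch skips nulled slots */
          w->pending = 0;
        }
    }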

The heap insertion/removal is trivial.

(Most of the benchmark differences are, in fact, due to the heap vs.
rb-tree).

-- 
The choice of a   Deliantra, the free code+content MORPG
  -==- _GNU_  http://www.deliantra.net
  ==-- _   generation
  ---==---(_)__  __   __  Marc Lehmann
  --==---/ / _ \/ // /\ \/ /  [EMAIL PROTECTED]
  -=/_/_//_/\_,_/ /_/\_\
___
Libevent-users mailing list
Libevent-users@monkey.org
http://monkey.org/mailman/listinfo/libevent-users


Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Marc Lehmann
On Sun, Nov 04, 2007 at 05:04:25PM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
> > * multiple watchers can wait for the same event, there is no limitation
> >   to one or two watchers for signals and io.
> 
> Could you give me an example of where that is important?

Mostly in environments using some form of garbage collection. For example,
this idiom is common in Perl:

   $event = EV::io $fd, ...

If $event happens to contain an old watcher and $fd happens to refer to
the same fd as that old watcher, this will lead to problems, as both
watchers are alive for a short time.

There is actually a lot of code that relies on this just working, and the
only other event loop I know that has a problem with this is Tk.
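
In C terms, the overlap looks like this (a sketch against the libev API;
everything apart from the ev_* calls is made up):

    #include <ev.h>

    static void old_cb (struct ev_loop *loop, ev_io *w, int revents) { }
    static void new_cb (struct ev_loop *loop, ev_io *w, int revents) { }

    void replace_watcher (struct ev_loop *loop, int fd)
    {
      static ev_io old_w, new_w;

      ev_io_init (&old_w, old_cb, fd, EV_READ);
      ev_io_start (loop, &old_w);

      /* a second watcher on the same fd is perfectly legal... */
      ev_io_init (&new_w, new_cb, fd, EV_READ);
      ev_io_start (loop, &new_w);

      /* ...so the old one can be stopped afterwards, as in the Perl idiom */
      ev_io_stop (loop, &old_w);
    }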

> > * there are two types of timers, based on real time differences and wall
> >   clock time (cron-like). timers can also be repeating and be reset at
> >   almost no cost (for idle timeouts used by many network servers). time jumps
> >   get detected reliably in both directions with or without a monotonic clock.
> 
> (See my other mail about Linux's new timerfd facility.)

(timerfd unfortunately makes little sense for this, as it adds overhead
without a compelling advantage that I can see, and one will still run into
the same time jump problems with periodic timers).

> Nice; repeating and absolute timers have come up several times before, too.

This was something I always missed in event loops. That is, some event
loops have one timer type, some the other, but never both.

> > * timers are managed by a priority queue (O(1) for important operations
> >   as opposed to O(log n) in libevent, also resulting in much simpler code).
> 
> In terms of concrete data types, you appear to have used a binary heap?
> So by "important operations" you mean removal, correct?

removal: O(log n)
insertion: O(log n)
find next: O(1)

> still O(log n)? The asymptotic behavior is no different, then, as
> insertion happens at least as often as removal.

Yes, but:

a) finding the next timer is a constant-time operation
b) a red-black tree is more than three times as slow

(see the updated benchmark at http://libev.schmorp.de/bench.html,
especially the difference between the first example (no timers) and the
second (timers in use))

> > * I added idle watchers, pid watchers and hook watchers into the event loop,
> >   as is required for integration of other event-based libraries, without
> >   having to force the use of some construct around event_loop.
> 
> Pardon my ignorance, but what are hook watchers?

If you want to plug other event-based libraries into the event loop, you
need to be able to hook into the event loop itself; this is what those
watcher types provide.

The alternative would be to write your own event_loop with EV_NONBLOCK, but
that isn't modular: if you have two different software modules, each with
its own event_loop that you *must* use, you lose. Prepare/check watchers
solve this problem nicely.

A number of event loops have them, and they are useful for other things,
such as transparently integrating coroutine packages etc.

It's not a killer feature, just very, very useful in some cases.
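
A sketch of what such a hook looks like (ev_prepare/ev_check as in libev;
the other_lib_* functions are hypothetical stand-ins for the foreign
library):

    #include <ev.h>

    void other_lib_flush (void);     /* hypothetical */
    void other_lib_dispatch (void);  /* hypothetical */

    static void prepare_cb (struct ev_loop *loop, ev_prepare *w, int revents)
    {
      other_lib_flush ();            /* the loop is about to block */
    }

    static void check_cb (struct ev_loop *loop, ev_check *w, int revents)
    {
      other_lib_dispatch ();         /* the loop just woke up */
    }

    void embed_other_lib (struct ev_loop *loop)
    {
      static ev_prepare prep;
      static ev_check chk;

      ev_prepare_init (&prep, prepare_cb);
      ev_prepare_start (loop, &prep);
      ev_check_init (&chk, check_cb);
      ev_check_start (loop, &chk);
    }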

> pid watchers I assume to be a fancy SIGCHLD handler?

Yes.

> That's a potentially useful feature, but why would it require a
> construct around event_loop?

I don't understand that; there is no construct around event_loop, it's handled
completely separately.

The reason it exists is to allow sharing of this potentially unsharable
resource. For example, poll and select let you do "everything" (with fds),
but you can of course only have one component per (single-threaded) process
using them, as they are blocking.

The same thing is true for signals: you can't share them with sigaction, as
sigaction only allows one user.

And the same thing is true for sigchld.

If your event loop provides support for it, you are less likely to run into
a situation where two software packages in the same process need access to
it and stomp on each other.
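
A sketch of the kind of sharing meant here (libev API; the callback names
are mine) - two independent components watching the same signal, which
plain sigaction cannot multiplex:

    #include <ev.h>
    #include <signal.h>

    static void reap_cb (struct ev_loop *loop, ev_signal *w, int revents) { }
    static void log_cb (struct ev_loop *loop, ev_signal *w, int revents) { }

    void watch_sigchld (struct ev_loop *loop)
    {
      static ev_signal a, b;

      ev_signal_init (&a, reap_cb, SIGCHLD);
      ev_signal_start (loop, &a);

      ev_signal_init (&b, log_cb, SIGCHLD);
      ev_signal_start (loop, &b);    /* both callbacks run on SIGCHLD */
    }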

> > * the backends use a much simpler design. unlike in libevent, the code to
> >   handle events is not duplicated for each backend, backends deal only
> >   with file descriptor events and a single timeout value, everything else
> >   is handled by the core, which also optimises state changes (the epoll
> >   backend is 100 lines in libev, as opposed to >350 lines in libevent,
> >   without suffering from its limitations).
> 
> Nice.

And while investigating the WIN32-Code/win32.c libevent backend, I found
out that it's just a glorified variant of the select backend, except that
it's O(n) in registering and deregistering.

> > As for compatibility, the actual libev API is very different from the
> > libevent API (although the design is similar), but there is an emulation
> > layer with a corresponding event.h file that supports the event library
> > (but no evbuffer, evdns, evhttp etc.).
> 
> I think the API needs more hashing out. It is...different...but I'm not
> sure it's necessarily

Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Christopher Layne
On Sun, Nov 04, 2007 at 05:04:25PM -0800, Scott Lamb wrote:
> > * timers are managed by a priority queue (O(1) for important operations
> >   as opposed to O(log n) in libevent, also resulting in much simpler code).
> 
> In terms of concrete data types, you appear to have used a binary heap?
> So by "important operations" you mean removal, correct? Insertion is
> still O(log n)? The asymptotic behavior is no different, then, as
> insertion happens at least as often as removal.

Not to put on my O-face, but binary heap insert is *average* O(1). There
should be a performance win for libevent, when it comes to timer checking,
as using a heap will also be O(1) for heap_min() - which will benefit pending
timer calculation code. However, early delete w/ pending timer will need some
rigging or tombstone games. I wouldn't be surprised if, in a case where one
is consistently resetting timers (think read/write before x time passes) and
re-adding said event, it ends up being the same amount of CPU time.

-cl
___
Libevent-users mailing list
Libevent-users@monkey.org
http://monkey.org/mailman/listinfo/libevent-users


Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Scott Lamb
Marc Lehmann wrote:
> Hi!
> 
> On tuesday, I sent mail about various problems with libevent and its
> current API as well as implementation. Unfortunately, the mail has not yet
> shown up, but fortunately, it has been superseded by this one :)
> 
> In that mail, I announced that I will work on the problems I encountered
> in libevent (many of which have been reported and discussed earlier on
> this list). After analyzing libevent I decided that it wasn't fixable
> except by rewriting the core parts of it (the inability to have multiple
> watchers for the same file descriptor event turned out to be blocking for
> my applications, otherwise I wouldn't have started the effort in the first
> place...).
> 
> The results look promising so far: I additionally implemented a libevent
> compatibility layer and benchmarked both libraries using the benchmark
> program provided by libevent: http://libev.schmorp.de/bench.html
> 
> Here is an incomplete list of what I changed and added (see the full
> list at http://cvs.schmorp.de/libev/README, or the cvs repository at
> http://cvs.schmorp.de/libev/):
> 
> fixed or improved:
> 
> * multiple watchers can wait for the same event, there is no limitation
>   to one or two watchers for signals and io.

Could you give me an example of where that is important?

> * there is full support for fork, you can continue to use the event loop
>   in the parent and child (or just one of them), even with quirky backends
>   such as epoll.

Nice; seems like that's come up on the list several times.

> * there are two types of timers, based on real time differences and wall
>   clock time (cron-like). timers can also be repeating and be reset at
>   almost no cost (for idle timeouts used by many network servers). time jumps
>   get detected reliably in both directions with or without a monotonic clock.

(See my other mail about Linux's new timerfd facility.)

Nice; repeating and absolute timers have come up several times before, too.

> * timers are managed by a priority queue (O(1) for important operations
>   as opposed to O(log n) in libevent, also resulting in much simpler code).

In terms of concrete data types, you appear to have used a binary heap?
So by "important operations" you mean removal, correct? Insertion is
still O(log n)? The asymptotic behavior is no different, then, as
insertion happens at least as often as removal.

> * event watchers can be added and removed at any time (in libevent,
>   removing events that are pending can lead to crashes).

(They don't lead to crashes, as someone mentioned.)

> * different types of events use different watchers, so you don't have
>   to use an i/o event watcher for timeouts, and you can reset timers
>   separately from other types of watchers. Also, watchers are much smaller
>   (even the libevent emulation watcher only has about 2/3 of the size of a
>   libevent watcher).

Nice; separate types can be nice for documentation purposes if nothing else.

> 
> * I added idle watchers, pid watchers and hook watchers into the event loop,
>   as is required for integration of other event-based libraries, without
>   having to force the use of some construct around event_loop.

Pardon my ignorance, but what are hook watchers?

pid watchers I assume to be a fancy SIGCHLD handler? That's a
potentially useful feature, but why would it require a construct around
event_loop?

> * the backends use a much simpler design. unlike in libevent, the code to
>   handle events is not duplicated for each backend, backends deal only
>   with file descriptor events and a single timeout value, everything else
>   is handled by the core, which also optimises state changes (the epoll
>   backend is 100 lines in libev, as opposed to >350 lines in libevent,
>   without suffering from its limitations).

Nice.

> As for compatibility, the actual libev api is very different to the
> libevent API (although the design is similar), but there is an emulation
> layer with a corresponding event.h file that supports the event library
> (but no evbuffer, evdns, evhttp etc.).

I think the API needs more hashing out. It is...different...but I'm not
sure it's necessarily better, and I don't like change for change's sake.
A few notes:

* what is the purpose of EV_COMMON? From first glance, I'm concerned
that it could not be used properly unless libev.so and all callers are
compiled with the same flags, which seems impractical if the library
ever gains wide use.

* on ev_once failure, you're calling the callback with EV_ERROR? Yuck.
That's quite surprising behavior, and I could see it leading to stack
overflows as each ev_once tries to issue another one.

* What's your use case for ev_loop_new() and ev_loop_default()'s bitmask
of allowed implementations?

* (again, just skimming) you're closing fds automatically on ENOMEM?
Ergh. That seems rather undesirable for many applications.

Cheers,
Scott
___
Libevent-users mailing list
Libevent-

Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-04 Thread Scott Lamb
Marc Lehmann wrote:
>>> * there are two types of timers, based on real time differences and wall
>>>   clock time (cron-like). timers can also be repeating and be reset at
>>>   almost no cost (for idle timeouts used by many network servers). time jumps
>>>   get detected reliably in both directions with or without a monotonic clock.
>> But then they're not truly "real-time", no?
> 
> Within the limits of technology, they are:
> 
> - timers (based on monotonic time) will time out after "n" seconds (whatever
>   was configured), even if the date resets in between (libevent can do that
>   only for backward time jumps).
> 
> - periodicals will simply be rescheduled: if a periodic timer is scheduled
>   to fire "at" some point then it will not be affected by the time jump,
>   it will still fire at that point (it's more complicated with periodic
>   timers scheduled to repeat; if you schedule a periodic timer to execute
>   on every minute then libev will try to schedule it to occur when time()
>   % 60 == 0, regardless of any time jumps).
> 
> Of course, detecting and correcting this cannot be done completely
> reliably with sub-second precision (there is no API in posix to do that),
> but with a monotonic clock, libev should manage quite fine to detect even
> very small time jumps caused by ntpd.
> 
> (With no monotonic clock it's a heuristic, obviously).

Have you seen the new Linux timerfd API? Where available, you can wait
for CLOCK_MONOTONIC and CLOCK_REALTIME events independently. Beats
heuristics, and detecting time jumps sound like introducing a lot of
extra timeouts. I'd hate to see libev(ent)? show up on PowerTOP after
just getting rid of the 5-second timeout.
___
Libevent-users mailing list
Libevent-users@monkey.org
http://monkey.org/mailman/listinfo/libevent-users


Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-03 Thread Marc Lehmann
On Sat, Nov 03, 2007 at 03:45:39PM -0700, William Ahern <[EMAIL PROTECTED]> wrote:
> A good itch, indeed.

I am currently working on integrating all modules from libevent, so it
becomes a full libevent replacement (and it already runs all of the
testsuite that doesn't require access to internals).

> > * there is full support for fork, you can continue to use the event loop
> >   in the parent and child (or just one of them), even with quirky backends
> >   such as epoll.
> 
> Curious how you managed to do this. Are you checking the process PID on each
> loop?

I considered that, but I think it's too slow (one also needs to be careful
that watchers don't change e.g. epoll state until the getpid check is
done), or at least I don't want that speed hit, no matter what.

Instead, I make it the user's job to actually call me after a fork. I provide
three functions:

   void ev_fork_prepare (void);
   void ev_fork_parent (void);
   void ev_fork_child (void);

which you can simply plug into pthread_atfork, and it will work. The reason I
don't do so myself is that I don't want to require pthreads just for that, but
the perl interface, for example, does, so perl programs will be safe.
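
Concretely, the wiring is just (a sketch using the prototypes above):

    #include <pthread.h>

    void ev_fork_prepare (void);
    void ev_fork_parent (void);
    void ev_fork_child (void);

    int main (void)
    {
      pthread_atfork (ev_fork_prepare, ev_fork_parent, ev_fork_child);
      /* ...set up watchers, fork freely, run the event loop... */
      return 0;
    }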

I wrote "full support for fork" even though its not automatic because it
can be done and is fully supported. With libevent, you can't free the
event base in general (program crashes with an assertion has has been
documented on this list a number of times).

> > * there are two types of timers, based on real time differences and wall
> >   clock time (cron-like). timers can also be repeating and be reset at
> >   almost no cost (for idle timeouts used by many network servers). time jumps
> >   get detected reliably in both directions with or without a monotonic clock.
> 
> But then they're not truly "real-time", no?

Within the limits of technology, they are:

- timers (based on monotonic time) will time out after "n" seconds (whatever
  was configured), even if the date resets in between (libevent can do that
  only for backward time jumps).

- periodicals will simply be rescheduled: if a periodic timer is scheduled
  to fire "at" some point then it will not be affected by the time jump,
  it will still fire at that point (it's more complicated with periodic
  timers scheduled to repeat; if you schedule a periodic timer to execute
  on every minute then libev will try to schedule it to occur when time()
  % 60 == 0, regardless of any time jumps).

Of course, detecting and correcting this cannot be done completely
reliably with sub-second precision (there is no API in posix to do that),
but with a monotonic clock, libev should manage quite fine to detect even
very small time jumps caused by ntpd.

(With no monotonic clock it's a heuristic, obviously).

> > * event watchers can be added and removed at any time (in libevent,
> >   removing events that are pending can lead to crashes).
> 
> This is news to me. Can you give more detail, maybe with pointers to code?

That is how I read the code in event.c at first glance. But in fact, it seems
to be safe. I initially thought only removing the current watcher was safe.

(I was in fact fooled by some bugs in the testsuite). Sorry for the
confusion, I was too busy implementing all the other goodies, and right
now I am busy implementing the remaining parts to get 100% event API
support.

> > * different types of events use different watchers, so you don't have
> >   to use an i/o event watcher for timeouts, and you can reset timers
> >   separately from other types of watchers. Also, watchers are much smaller
> >   (even the libevent emulation watcher only has about 2/3 of the size of a
> >   libevent watcher).
> 
> libevnet does this for I/O; timer is always set separately from read/write
> events. (Point being, its using libevent.)

Even libevent is somewhat fast if you don't combine timeouts and io
watchers in the same struct event. But it is of course quite a waste.
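
A sketch of that separation with the libevent 1.x API (the callback names
are mine):

    #include <sys/time.h>
    #include <event.h>
    #include <stddef.h>

    static struct event io_ev, to_ev;

    static void read_cb (int fd, short what, void *arg) { /* data ready */ }
    static void timeout_cb (int fd, short what, void *arg) { /* too slow */ }

    void watch (int fd)
    {
      struct timeval tv = { 10, 0 };

      event_set (&io_ev, fd, EV_READ | EV_PERSIST, read_cb, NULL);
      event_add (&io_ev, NULL);   /* no timeout folded into the i/o event */

      evtimer_set (&to_ev, timeout_cb, NULL);
      evtimer_add (&to_ev, &tv);  /* the timeout lives in its own event */
    }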

> > * I added idle watchers, pid watchers and hook watchers into the event loop,
> >   as is required for integration of other event-based libraries, without
> >   having to force the use of some construct around event_loop.
> 
> Needing to do an operation on every loop is arguably very rare, and there's
> not much burden in rolling your own. 

It's a quality issue. If you have a program that uses libevent and a
library that needs to hook into it, it simply cannot be done. I happen to
have many such cases.

It basically happens as soon as you use libevent as part of some big
program (such as in the form of a perl module :), where many components might
want to hook into the event loop.

With that functionality in place, you can do it. Without it, you simply
fail.

It doesn't matter much, as libev is still faster than libevent even with
all those watcher types.

> PID watchers, likewise... how many spots in the code independently
> manage processes (as opposed to one unit which can just catch
> SIGCHLD).

You could say the

Re: [Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-03 Thread William Ahern
On Sat, Nov 03, 2007 at 09:15:07PM +0100, Marc Lehmann wrote:

> In that mail, I announced that I will work on the problems I encountered
> in libevent (many of which have been reported and discussed earlier on
> this list). After analyzing libevent I decided that it wasn't fixable
> except by rewriting the core parts of it (the inability to have multiple
> watchers for the same file descriptor event turned out to be blocking for
> my applications, otherwise I wouldn't have started the effort in the first
> place...).

A good itch, indeed.

> The results look promising so far: I additionally implemented a libevent
> compatibility layer and benchmarked both libraries using the benchmark
> program provided by libevent: http://libev.schmorp.de/bench.html
> 
> Here is an incomplete list of what I changed and added (see the full
> list at http://cvs.schmorp.de/libev/README, or the cvs repository at
> http://cvs.schmorp.de/libev/):

Man. More pressure to rename my library from "libevnet" to something else ;)
 

> * there is full support for fork, you can continue to use the event loop
>   in the parent and child (or just one of them), even with quirky backends
>   such as epoll.

Curious how you managed to do this. Are you checking the process PID on each
loop?

> * there are two types of timers, based on real time differences and wall
>   clock time (cron-like). timers can also be repeating and be reset at
>   almost no cost (for idle timeouts used by many network servers). time jumps
>   get detected reliably in both directions with or without a monotonic clock.

But then they're not truly "real-time", no?


> * event watchers can be added and removed at any time (in libevent,
>   removing events that are pending can lead to crashes).

This is news to me. Can you give more detail, maybe with pointers to code?

> * different types of events use different watchers, so you don't have
>   to use an i/o event watcher for timeouts, and you can reset timers
>   separately from other types of watchers. Also, watchers are much smaller
>   (even the libevent emulation watcher only has about 2/3 of the size of a
>   libevent watcher).

libevnet does this for I/O; the timer is always set separately from read/write
events. (Point being, it's using libevent.)

> * I added idle watchers, pid watchers and hook watchers into the event loop,
>   as is required for integration of other event-based libraries, without
>   having to force the use of some construct around event_loop.

Needing to do an operation on every loop is arguably very rare, and there's
not much burden in rolling your own. PID watchers, likewise... how many
spots in the code independently manage processes (as opposed to one unit
which can just catch SIGCHLD). Also, curious how/if you've considered Win32
environments.

> * the backends use a much simpler design. unlike in libevent, the code to
>   handle events is not duplicated for each backend, backends deal only
>   with file descriptor events and a single timeout value, everything else
>   is handled by the core, which also optimises state changes (the epoll
>   backend is 100 lines in libev, as opposed to >350 lines in libevent,
>   without suffering from its limitations).

libevnet optimizes state changes. Logically every I/O request is single-shot
(which is more forgiving to user code), but it actually sets EV_PERSIST and
delays libevent bookkeeping until the [libevnet bufio] callback returns. If
the user code submits another I/O op from its callback (highly likely) then
the event is left unchanged. It's still re-entrant safe because it can
detect further activity up the call chain using some stack message passing
bits (instead of reference counting because I also use mem pools, but I
digress). Again, point being this can be done using libevent as-is.
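
A sketch of the pattern being described (my names, not the actual libevnet
code):

    #include <sys/time.h>
    #include <event.h>
    #include <stddef.h>

    struct conn {
      struct event ev;
      int rearmed;                /* set when user code resubmits i/o */
    };

    void user_callback (struct conn *c);  /* may call submit_read () */

    void submit_read (struct conn *c)
    {
      c->rearmed = 1;             /* event is EV_PERSIST and already added */
    }

    static void on_read (int fd, short what, void *arg)
    {
      struct conn *c = arg;

      c->rearmed = 0;
      user_callback (c);
      if (!c->rearmed)
        event_del (&c->ev);       /* bookkeeping only when the fd goes idle */
    }

    void conn_start (struct conn *c, int fd)
    {
      event_set (&c->ev, fd, EV_READ | EV_PERSIST, on_read, c);
      event_add (&c->ev, NULL);
    }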

> As for compatibility, the actual libev API is very different from the
> libevent API (although the design is similar), but there is an emulation
> layer with a corresponding event.h file that supports the event library
> (but no evbuffer, evdns, evhttp etc.).

Well... if you can persuade me of the utility then this Christmas I might
want to investigate writing an evdns-like component. See the "lookup"
component of libevnet. There are lots of nice things I need in a DNS
resolver that evdns and others are incapable of handling. And I've also
written more HTTP, RTSP, and SOCKS5 parsers than I can remember.


> The "obvious" plan would be to take the evhttp etc. code from libevent and
> paste it in to libev, making libev a complete replacement for libevent
> with an optional new API. The catch is, I'd like to avoid this, because I
> am not prepared to maintain yet another library, and I am not keen on
> replicating the configure and portability work that went into libevent so
> far.

If you ask me, it would prove more fortuitous to re-write the DNS and HTTP
components than to replace libevent. Reason being that it would be hard
to substantively improve on DNS/HT

[Libevent-users] announcing libev, towards a faster and more featureful libevent

2007-11-03 Thread Marc Lehmann
Hi!

On tuesday, I sent mail about various problems with libevent and its
current API as well as implementation. Unfortunately, the mail has not yet
shown up, but fortunately, it has been superseded by this one :)

In that mail, I announced that I will work on the problems I encountered
in libevent (many of which have been reported and discussed earlier on
this list). After analyzing libevent I decided that it wasn't fixable
except by rewriting the core parts of it (the inability to have multiple
watchers for the same file descriptor event turned out to be blocking for
my applications, otherwise I wouldn't have started the effort in the first
place...).

The results look promising so far: I additionally implemented a libevent
compatibility layer and benchmarked both libraries using the benchmark
program provided by libevent: http://libev.schmorp.de/bench.html

Here is an incomplete list of what I changed and added (see the full
list at http://cvs.schmorp.de/libev/README, or the cvs repository at
http://cvs.schmorp.de/libev/):

fixed or improved:

* multiple watchers can wait for the same event, there is no limitation
  to one or two watchers for signals and io.

* there is full support for fork, you can continue to use the event loop
  in the parent and child (or just one of them), even with quirky backends
  such as epoll.

* there are two types of timers, based on real time differences and wall
  clock time (cron-like). timers can also be repeating and be reset at
  almost no cost (for idle timeouts used by many network servers). time jumps
  get detected reliably in both directions with or without a monotonic clock.

* timers are managed by a priority queue (O(1) for important operations
  as opposed to O(log n) in libevent, also resulting in much simpler code).

* event watchers can be added and removed at any time (in libevent,
  removing events that are pending can lead to crashes).

* different types of events use different watchers, so you don't have
  to use an i/o event watcher for timeouts, and you can reset timers
  separately from other types of watchers. Also, watchers are much smaller
  (even the libevent emulation watcher only has about 2/3 of the size of a
  libevent watcher).

* I added idle watchers, pid watchers and hook watchers into the event loop,
  as is required for integration of other event-based libraries, without
  having to force the use of some construct around event_loop.

* the backends use a much simpler design. unlike in libevent, the code to
  handle events is not duplicated for each backend, backends deal only
  with file descriptor events and a single timeout value, everything else
  is handled by the core, which also optimises state changes (the epoll
  backend is 100 lines in libev, as opposed to >350 lines in libevent,
  without suffering from its limitations).

As for compatibility, the actual libev API is very different from the
libevent API (although the design is similar), but there is an emulation
layer with a corresponding event.h file that supports the event library
(but no evbuffer, evdns, evhttp etc.).

It works very well: both the testsuite and the benchmark program from
libevent work as expected, and the evdns code compiles (and runs) without any
changes, making libev a drop-in replacement for libevent users. "Porting"
evbuffer and so on would be equally trivial.

However: libev has not yet been released on its own (only as part of
the EV perl module), meaning there is no configure script, C-level
documentation etc.

The "obvious" plan would be to take the evhttp etc. code from libevent and
paste it in to libev, making libev a complete replacement for libevent
with an optional new API. The catch is, I'd like to avoid this, because I
am not prepared to maintain yet another library, and I am not keen on
replicating the configure and portability work that went into libevent so
far.

So, would there be an interest in replacing the "core" event part of
libevent with the libev code? If yes, there are a number of issues to
solve, and here is how I would solve them:

* libev only supports select and epoll. Adding poll would be trivial for me,
  and adding dev/poll and kqueue support would be easy, except that I don't
  want to set up some bsd machines just for that. I would, however, opt to
  write kqueue and /dev/poll backends "dry" and let somebody else do the
  actual porting work to the then-existing backends.

* libev only supports one "event base". The reason is that I think a
  leader/follower pattern together with lazy I/O would beat any multiple
  event loop implementation, and each time I see some software module
  using its own copy of some mainloop, it doesn't work together with
  other such modules :) Still, if integration should be achieved, this
  is no insurmountable obstacle :->

* libev uses its own API, which potentially causes some global symbol
  spamming (lots of functions with an ev_ prefix). This could be used a