Re: async network I/O, event channels, etc

2006-07-27 Thread Evgeniy Polyakov
On Thu, Jul 27, 2006 at 02:31:56AM -0700, David Miller ([EMAIL PROTECTED]) 
wrote:
> From: Evgeniy Polyakov <[EMAIL PROTECTED]>
> Date: Thu, 27 Jul 2006 12:58:13 +0400
> 
> > Btw, regarding DMA allocations - there are some problems here too.
> > Some pieces of hardware cannot DMA above 16MB, and some can do it
> > above 4GB.
> 
> I think people take this "DMA" in Ulrich's interface names too
> literally.  It is logically something different, although it could be
> used directly for this purpose.
> 
> View it rather as memory you hold by some key-based ID, but need to
> explicitly map in order to access it directly.

What I meant here is that it is possible for those DMA regions of
Ulrich's to be used as real DMA regions, and I showed that this is
not a good idea.

> > Those physical pages can be managed within the kernel, and userspace
> > can map them. But there is another possibility - replace slab
> > allocation for network devices with allocation from a premapped pool.
> > That naturally allows managing that pool for AIO needs and having
> > zero-copy sending and receiving support. That is what I talked about
> > in the netchannel thread when the question about allocation/freeing
> > cost in atomic context arose. I am working on that solution, which
> > can be used both for netchannels (and full userspace processing) and
> > for the usual networking code.
> 
> Interesting idea, and yes I have been watching you stress test your
> AVL tree code :))

The tests are complete - it actually took 12 A4 pages filled with
small circles and numbers to prove it correct; the overnight run was
just for confirmation :)

-- 
Evgeniy Polyakov


Re: async network I/O, event channels, etc

2006-07-27 Thread David Miller
From: Evgeniy Polyakov <[EMAIL PROTECTED]>
Date: Thu, 27 Jul 2006 12:58:13 +0400

> Btw, regarding DMA allocations - there are some problems here too.
> Some pieces of hardware cannot DMA above 16MB, and some can do it
> above 4GB.

I think people take this "DMA" in Ulrich's interface names too
literally.  It is logically something different, although it could be
used directly for this purpose.

View it rather as memory you hold by some key-based ID, but need to
explicitly map in order to access it directly.

> Those physical pages can be managed within the kernel, and userspace
> can map them. But there is another possibility - replace slab
> allocation for network devices with allocation from a premapped pool.
> That naturally allows managing that pool for AIO needs and having
> zero-copy sending and receiving support. That is what I talked about
> in the netchannel thread when the question about allocation/freeing
> cost in atomic context arose. I am working on that solution, which
> can be used both for netchannels (and full userspace processing) and
> for the usual networking code.

Interesting idea, and yes I have been watching you stress test your
AVL tree code :))


Re: async network I/O, event channels, etc

2006-07-27 Thread Evgeniy Polyakov
On Thu, Jul 27, 2006 at 01:02:55AM -0700, David Miller ([EMAIL PROTECTED]) 
wrote:
> From: Evgeniy Polyakov <[EMAIL PROTECTED]>
> Date: Thu, 27 Jul 2006 11:49:02 +0400
> 
> > I.e. map the skb's data to userspace? Not a good idea, especially
> > with its tricky lifetime and the inability of userspace to inform
> > the kernel when it is finished and the skb can be freed (without an
> > additional syscall).
> 
> Hmmm...
> 
> If it is page based, I do not see the problem.  Events and calls to
> AIO I/O routines transfer buffer ownership.  The fact that, while the
> kernel (and thus the networking stack) "owns" the buffer for an AIO
> call, the user can have a valid mapping to it is an unimportant
> detail.
> 
> If the user scrambles a piece of data that is in flight to or from
> the network card, it is his problem.
> 
> If we are using a primitive network card that does not support
> scatter-gather I/O, and thus no page-based SKBs, we will make
> copies.  But this is transparent to the user.
> 
> The idea is that DMA mappings have page granularity.
> 
> At least on transmit it should work well.  The receive side is more
> difficult, and the initial implementation will need to copy.

And what if several skb->data areas are placed on the same page?
Or do we want to allocate at least one page per skb,
even if it is a 40-byte ACK?

> > I did it with the af_tlb zero-copy sniffer (but I substituted mapped
> > pages with physical skb->data pages), and it was not very good.
> 
> Trying to be too clever with skb->data has always been catastrophic. :)

Yep :)

> > Well, I think preallocating some buffers and using them in the AIO
> > setup is a plus, since in that case the user does not need to care
> > about when it is possible to reuse the same buffer - when the
> > appropriate kevent is completed, that means the provided buffer is
> > no longer in use by the kernel, and the user can reuse it.
> 
> We now enter the most interesting topic of AIO buffer pool management
> and where it belongs. :-)  We are assuming up to this point that the
> user manages this stuff with explicit DMA calls for allocation, then
> passes the key-based references to those buffers as arguments to the
> AIO I/O calls.
> 
> But I want to suggest another possibility.  What if the kernel managed
> the AIO buffer pool for a task?  It could grow this dynamically based
> upon need.  The only implementation roadblock is how large we allow
> this to grow, but I think normal VM mechanisms can take care of it.
> 
> On transmit this is not straightforward, but for receive it has really
> nice possibilities. :)

Btw, regarding DMA allocations - there are some problems here too.
Some pieces of hardware cannot DMA above 16MB, and some can do it
above 4GB. If only 16MB are usable, that is just 8k packets with a
1500-byte MTU, and userspace does not actually know which NIC will
receive its data, so it is impossible to allocate in advance a pool
which will be used for the DMA transfer; we just have to allocate
physical pages and use them with memcpy() from skb->data.

Those physical pages can be managed within the kernel, and userspace
can map them. But there is another possibility - replace slab
allocation for network devices with allocation from a premapped pool.
That naturally allows managing that pool for AIO needs and having
zero-copy sending and receiving support. That is what I talked about
in the netchannel thread when the question about allocation/freeing
cost in atomic context arose. I am working on that solution, which
can be used both for netchannels (and full userspace processing) and
for the usual networking code.
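
Purely as an illustration of the premapped-pool idea (a hypothetical
sketch: the names, the 2KB chunk size, and the layout are invented,
not taken from any posted patch), the allocator replacing the slab
could be a bitmap over pages that userspace has already mapped:

    /* Hypothetical sketch only - not from any posted patch. */
    struct net_pool {
        struct page **pages;    /* pages also mapped by userspace */
        unsigned long *bitmap;  /* one bit per 2KB chunk */
        unsigned int nr_chunks;
        spinlock_t lock;        /* usable in atomic context */
    };

    /* Would stand in for the slab allocation of skb->data. */
    static void *pool_alloc(struct net_pool *p)
    {
        unsigned long flags;
        unsigned int chunk;
        void *data = NULL;

        spin_lock_irqsave(&p->lock, flags);
        chunk = find_first_zero_bit(p->bitmap, p->nr_chunks);
        if (chunk < p->nr_chunks) {
            set_bit(chunk, p->bitmap);
            /* two 2KB chunks per 4KB page */
            data = page_address(p->pages[chunk / 2]) +
                   (chunk % 2) * 2048;
        }
        spin_unlock_irqrestore(&p->lock, flags);

        return data;
    }

Since freeing is just clear_bit(), both allocation and freeing stay
cheap in atomic context, which is the cost question the netchannel
thread raised.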

-- 
Evgeniy Polyakov


Re: async network I/O, event channels, etc

2006-07-27 Thread Jens Axboe
On Thu, Jul 27 2006, David Miller wrote:
> From: Jens Axboe <[EMAIL PROTECTED]>
> Date: Thu, 27 Jul 2006 10:29:24 +0200
> 
> > Precisely. And this is the bit that is currently still broken for
> > splice-to-socket, since it gives that ack right after ->sendpage() has
> > been called. But that's a known deficiency right now, I think Alexey is
> > currently looking at that (as well as receive side support).
> 
> That's right, I was discussing this with him just a few days ago.
> 
> It's good to hear that he's looking at those patches you were working
> on several months ago.

It is. I never ventured much into the networking part, just noted that
as a current limitation of the ->sendpage() based approach. Basically
we need to pass more info in, which also gets rid of the limitation of
passing a single page at a time.
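
As a sketch of that direction (the signature below is purely
hypothetical, not a proposed API), the per-page hook could become
something like:

    int (*sendpages)(struct socket *sock, struct page **pages,
                     int npages, size_t offset, size_t len,
                     void (*done)(void *cookie), void *cookie);

so the caller hands over several pages at once together with a
completion cookie, and the "ack" can be deferred until the pages are
really released.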

-- 
Jens Axboe



Re: async network I/O, event channels, etc

2006-07-27 Thread David Miller
From: Jens Axboe <[EMAIL PROTECTED]>
Date: Thu, 27 Jul 2006 10:29:24 +0200

> Precisely. And this is the bit that is currently still broken for
> splice-to-socket, since it gives that ack right after ->sendpage() has
> been called. But that's a known deficiency right now; I think Alexey
> is currently looking at it (as well as receive-side support).

That's right, I was discussing this with him just a few days ago.

It's good to hear that he's looking at those patches you were working
on several months ago.


Re: async network I/O, event channels, etc

2006-07-27 Thread Jens Axboe
On Thu, Jul 27 2006, David Miller wrote:
> From: Jens Axboe <[EMAIL PROTECTED]>
> Date: Thu, 27 Jul 2006 10:11:15 +0200
> 
> > Ownership transition from user -> kernel, that is; what I'm trying
> > to say is that returning ownership to the user again is the tricky
> > part.
> 
> Yes, it is important that for TCP, for example, we don't give
> the user the event until the data is acknowledged and the skbs
> referencing that data are fully freed.
> 
> This is further complicated by the fact that packetization boundaries
> are going to be different from AIO buffer boundaries.
> 
> I think this is what you are alluding to.

Precisely. And this is the bit that is currently still broken for
splice-to-socket, since it gives that ack right after ->sendpage() has
been called. But that's a known deficiency right now; I think Alexey
is currently looking at it (as well as receive-side support).

-- 
Jens Axboe



Re: async network I/O, event channels, etc

2006-07-27 Thread David Miller
From: Jens Axboe <[EMAIL PROTECTED]>
Date: Thu, 27 Jul 2006 10:11:15 +0200

> Ownership transition from user -> kernel, that is; what I'm trying to
> say is that returning ownership to the user again is the tricky part.

Yes, it is important that for TCP, for example, we don't give
the user the event until the data is acknowledged and the skbs
referencing that data are fully freed.

This is further complicated by the fact that packetization boundaries
are going to be different from AIO buffer boundaries.

I think this is what you are alluding to.
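
To make that concrete, here is an illustrative sketch (hypothetical
names and pointer stashing, not from any patch): tie the completion
event to the destruction of the last skb referencing the user's
buffer, which for TCP happens only after the data has been acked.

    struct aio_tx_op {
        atomic_t skb_refs;      /* skbs still referencing the buffer */
        void (*complete)(struct aio_tx_op *op);
    };

    static void aio_tx_skb_destructor(struct sk_buff *skb)
    {
        /* assume the op pointer was stashed for us at transmit
         * time; where exactly it lives is an open question */
        struct aio_tx_op *op = *(struct aio_tx_op **)skb->cb;

        /* TCP frees its skbs only once the data is acked, so
         * this is the moment ownership returns to the user */
        if (atomic_dec_and_test(&op->skb_refs))
            op->complete(op);   /* queue the userspace event */
    }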


Re: async network I/O, event channels, etc

2006-07-27 Thread Jens Axboe
On Thu, Jul 27 2006, Jens Axboe wrote:
> On Thu, Jul 27 2006, David Miller wrote:
> > From: Evgeniy Polyakov <[EMAIL PROTECTED]>
> > Date: Thu, 27 Jul 2006 11:49:02 +0400
> > 
> > > I.e. map the skb's data to userspace? Not a good idea, especially
> > > with its tricky lifetime and the inability of userspace to inform
> > > the kernel when it is finished and the skb can be freed (without
> > > an additional syscall).
> > 
> > Hmmm...
> > 
> > If it is page based, I do not see the problem.  Events and calls to
> > AIO I/O routines transfer buffer ownership.  The fact that, while
> > the kernel (and thus the networking stack) "owns" the buffer for an
> > AIO call, the user can have a valid mapping to it is an unimportant
> > detail.
> 
> Ownership may be clear, but "when can I reuse" is tricky. The same issue
> comes up for vmsplice -> splice to socket.

Ownership transition from user -> kernel, that is; what I'm trying to
say is that returning ownership to the user again is the tricky part.

-- 
Jens Axboe



Re: async network I/O, event channels, etc

2006-07-27 Thread Jens Axboe
On Thu, Jul 27 2006, David Miller wrote:
> From: Evgeniy Polyakov <[EMAIL PROTECTED]>
> Date: Thu, 27 Jul 2006 11:49:02 +0400
> 
> > I.e. map the skb's data to userspace? Not a good idea, especially
> > with its tricky lifetime and the inability of userspace to inform
> > the kernel when it is finished and the skb can be freed (without an
> > additional syscall).
> 
> Hmmm...
> 
> If it is page based, I do not see the problem.  Events and calls to
> AIO I/O routines transfer buffer ownership.  The fact that, while the
> kernel (and thus the networking stack) "owns" the buffer for an AIO
> call, the user can have a valid mapping to it is an unimportant
> detail.

Ownership may be clear, but "when can I reuse" is tricky. The same issue
comes up for vmsplice -> splice to socket.

-- 
Jens Axboe



Re: async network I/O, event channels, etc

2006-07-27 Thread David Miller
From: Evgeniy Polyakov <[EMAIL PROTECTED]>
Date: Thu, 27 Jul 2006 11:49:02 +0400

> I.e. map the skb's data to userspace? Not a good idea, especially
> with its tricky lifetime and the inability of userspace to inform the
> kernel when it is finished and the skb can be freed (without an
> additional syscall).

Hmmm...

If it is page based, I do not see the problem.  Events and calls to
AIO I/O routines transfer buffer ownership.  The fact that, while the
kernel (and thus the networking stack) "owns" the buffer for an AIO
call, the user can have a valid mapping to it is an unimportant
detail.

If the user scrambles a piece of data that is in flight to or from
the network card, it is his problem.

If we are using a primitive network card that does not support
scatter-gather I/O, and thus no page-based SKBs, we will make
copies.  But this is transparent to the user.

The idea is that DMA mappings have page granularity.

At least on transmit it should work well.  The receive side is more
difficult, and the initial implementation will need to copy.

> I did it with the af_tlb zero-copy sniffer (but I substituted mapped
> pages with physical skb->data pages), and it was not very good.

Trying to be too clever with skb->data has always been catastrophic. :)

> Well, I think preallocating some buffers and using them in the AIO
> setup is a plus, since in that case the user does not need to care
> about when it is possible to reuse the same buffer - when the
> appropriate kevent is completed, that means the provided buffer is
> no longer in use by the kernel, and the user can reuse it.

We now enter the most interesting topic of AIO buffer pool management
and where it belongs. :-)  We are assuming up to this point that the
user manages this stuff with explicit DMA calls for allocation, then
passes the key-based references to those buffers as arguments to the
AIO I/O calls.

But I want to suggest another possibility.  What if the kernel managed
the AIO buffer pool for a task?  It could grow this dynamically based
upon need.  The only implementation roadblock is how large we allow
this to grow, but I think normal VM mechanisms can take care of it.

On transmit this is not straightforward, but for receive it has really
nice possibilities. :)
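
From the user's side, the explicit-management variant would look
roughly like this; all the function names here (dma_buf_alloc,
aio_send, ec_wait_for_completion) are placeholders for whatever the
final interface provides, and the point is only the ownership
hand-off:

    #include <stddef.h>

    /* Placeholder prototypes, standing in for the proposed interface. */
    void *dma_buf_alloc(size_t size);   /* key-based, mappable region */
    int aio_send(int sock, void *buf, size_t len);
    int ec_wait_for_completion(void *buf);
    void fill_buffer(void *buf, size_t len);

    enum { BUF_SIZE = 65536 };

    static void sender(int sock)
    {
        void *buf = dma_buf_alloc(BUF_SIZE);

        for (;;) {
            fill_buffer(buf, BUF_SIZE);
            aio_send(sock, buf, BUF_SIZE);  /* ownership -> kernel */
            /*
             * The mapping stays valid, but writing now would scramble
             * in-flight data (the user's problem, as noted above).
             * Ownership returns only with the completion event.
             */
            ec_wait_for_completion(buf);    /* ownership -> user */
        }
    }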



Re: async network I/O, event channels, etc

2006-07-27 Thread Evgeniy Polyakov
On Wed, Jul 26, 2006 at 11:10:55PM -0700, David Miller ([EMAIL PROTECTED]) 
wrote:
> From: Evgeniy Polyakov <[EMAIL PROTECTED]>
> Date: Wed, 26 Jul 2006 10:28:17 +0400
> 
> > I have not created additional DMA memory allocation methods, like
> > Ulrich described in his article, so I handle it inside NAIO which
> > has some overhead (I posted a get_user_pages() scalability graph
> > some time ago).
> 
> I've been thinking about this aspect, and I think it's very
> interesting.  Let's be clear what the ramifications of this
> are first.
> 
> Using the terminology of Network Algorithmics, this is an
> instance of Principle 2, "Shift computation in time".
> 
> Instead of using get_user_pages() at AIO setup time, we map the
> thing to userspace later, when the user wants it.  Pinning pages is a
> pain because both user and kernel refer to the buffer at the same
> time.  We get more flexibility when the user has to map the thing
> explicitly.

I.e. map the skb's data to userspace? Not a good idea, especially with
its tricky lifetime and the inability of userspace to inform the
kernel when it is finished and the skb can be freed (without an
additional syscall).
I did it with the af_tlb zero-copy sniffer (but I substituted mapped
pages with physical skb->data pages), and it was not very good.

> I want us to think about how a user might want to use this.  What
> I anticipate is that users will want to organize a pool of AIO
> buffers for themselves using this DMA interface.  So the events
> they are truly interested in are of a finer granularity than you
> might expect.  They want to know when pieces of a buffer are
> available for reuse.

Ah, I see.
Well, I think preallocating some buffers and using them in the AIO
setup is a plus, since in that case the user does not need to care
about when it is possible to reuse the same buffer - when the
appropriate kevent is completed, that means the provided buffer is
no longer in use by the kernel, and the user can reuse it.

-- 
Evgeniy Polyakov


Re: async network I/O, event channels, etc

2006-07-26 Thread David Miller
From: Evgeniy Polyakov <[EMAIL PROTECTED]>
Date: Wed, 26 Jul 2006 10:28:17 +0400

> I have not created additional DMA memory allocation methods, like
> Ulrich described in his article, so I handle it inside NAIO which
> has some overhead (I posted a get_user_pages() scalability graph
> some time ago).

I've been thinking about this aspect, and I think it's very
interesting.  Let's be clear what the ramifications of this
are first.

Using the terminology of Network Algorithmics, this is an
instance of Principle 2, "Shift computation in time".

Instead of using get_user_pages() at AIO setup time, we map the
thing to userspace later, when the user wants it.  Pinning pages is a
pain because both user and kernel refer to the buffer at the same
time.  We get more flexibility when the user has to map the thing
explicitly.

I want us to think about how a user might want to use this.  What
I anticipate is that users will want to organize a pool of AIO
buffers for themselves using this DMA interface.  So the events
they are truly interested in are of a finer granularity than you
might expect.  They want to know when pieces of a buffer are
available for reuse.

And here is the core dilemma.

If you make the event granularity too coarse, a larger AIO buffer
pool is necessary.  If you make the event granularity too fine,
event processing begins to dominate and costs too much.  This is
true even for something as lightweight as kevent.


Re: async network I/O, event channels, etc

2006-07-25 Thread Evgeniy Polyakov
On Tue, Jul 25, 2006 at 03:01:22PM -0700, David Miller ([EMAIL PROTECTED]) 
wrote:
> From: Ulrich Drepper <[EMAIL PROTECTED]>
> Date: Tue, 25 Jul 2006 12:23:53 -0700
> 
> > I was very much surprised by the reactions I got after my OLS talk.
> > Lots of people declared interest and even agreed with the approach and
> > asked me to go further ahead with all this.  For those who missed it,
> > the paper and the slides are available on my home page:
> > 
> > http://people.redhat.com/drepper/
> > 
> > As for the next steps I see a number of possible ways.  The discussions
> > can be held on the usual mailing lists (i.e., lkml and netdev) but due
> > to the raw nature of the current proposal I would imagine that would be
> > mainly perceived as noise.
> 
> Since I gave a big thumbs up for Evgeniy's kevent work yesterday
> on linux-kernel, you might want to start by comparing your work
> to his.  Because his has the advantage that 1) we have code now
> and 2) he has written many test applications and performed many
> benchmarks against his code which has flushed out most of the
> major implementation issues.
> 
> I think most of the people who have encouraged your work are unaware
> of Evgeniy's kevent stuff, which is extremely unfortunate; the two
> works are more similar than they are different.
> 
> I do not think discussing all of this on netdev would be perceived
> as noise. :)

Hello David, Ulrich.

Here is a brief description of what kevent is and how it works.

The kevent subsystem incorporates several AIO/kqueue design notes and
ideas. Kevent can be used for both edge and level notifications. It
supports socket notifications (accept, send, recv), network AIO
(aio_send(), aio_recv() and aio_sendfile()), inode notifications
(create/remove), generic poll()/select() notifications, and timer
notifications.

There are several objects in the kevent system:
storage - each source of events (socket, inode, timer, AIO, anything)
has a kevent_storage structure incorporated into it, which is
basically a list of registered interests for this source of events.
user - an abstraction which holds all requested kevents. It is
similar to FreeBSD's kqueue.
kevent - a set of interests for a given source of events or storage.

When a kevent is queued into a storage, it will live there until
removed by kevent_dequeue(). When some activity is noticed in a given
storage, its kevent_storage->list is scanned for kevents which match
the activity event. If matching kevents are found and they are not
already in the kevent_user->ready_list, they are appended to it.

ioctl(WAIT) (or the appropriate syscall) will wait until either the
requested number of kevents are ready, the timeout elapses, or at
least one kevent is ready; its behaviour depends on the parameters.

It is possible to have one-shot kevents, which are automatically
removed when they become ready.

Any event can be added/removed/modified by ioctl or by a special
controlling syscall.
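
A rough userspace sketch of that flow (the ioctl numbers and the
ukevent layout below are invented for illustration and do not match
the real kevent headers):

    #include <fcntl.h>
    #include <sys/ioctl.h>

    /* Invented stand-ins for the real kevent ABI. */
    struct ukevent {
        unsigned int type;      /* socket, inode, timer, AIO, poll */
        unsigned int event;     /* e.g. recv readiness */
        unsigned int oneshot;   /* auto-removed once delivered */
        void *user_data;
    };

    #define KEVENT_CTL_ADD  _IOW('k', 1, struct ukevent)
    #define KEVENT_CTL_DEL  _IOW('k', 2, struct ukevent)
    #define KEVENT_WAIT     _IOWR('k', 3, struct ukevent)

    int main(void)
    {
        int kfd = open("/dev/kevent", O_RDWR);
        struct ukevent ev = { .oneshot = 1 };

        if (kfd < 0)
            return 1;
        ioctl(kfd, KEVENT_CTL_ADD, &ev); /* queue interest into a storage */
        ioctl(kfd, KEVENT_WAIT, &ev);    /* block until ready or timeout */
        /* one-shot: the kevent was dequeued automatically when it fired */
        return 0;
    }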

Network AIO is based on kevent and works as a usual kevent storage on
top of an inode.
When a new socket is created it is associated with that inode, and
when some activity is detected the appropriate notifications are
generated and kevent_naio_callback() is called.
When a new kevent is being registered, the network AIO ->enqueue()
callback simply marks itself as a usual socket event watcher. It also
locks the physical userspace pages in memory and stores the
appropriate pointers in a private kevent structure. I have not created
additional DMA memory allocation methods, like Ulrich described in his
article, so I handle it inside NAIO, which has some overhead (I posted
a get_user_pages() scalability graph some time ago).
The network AIO callback gets pointers to the userspace pages and
tries to copy data from the receive skb queue into them using a
protocol-specific callback. This callback is very similar to
->recvmsg(), so they could share a lot in the future (as far as I
recall it worked only with hardware capable of checksumming; I'm a
bit lazy).
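
The page-pinning part of ->enqueue() would look roughly like this (a
sketch against the 2006-era get_user_pages() signature; the kevent
structure and its priv field are simplified stand-ins):

    static int naio_enqueue_pages(struct kevent *k, void __user *buf,
                                  size_t len)
    {
        int nr = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
        struct page **pages;
        int pinned;

        pages = kmalloc(nr * sizeof(*pages), GFP_KERNEL);
        if (!pages)
            return -ENOMEM;

        down_read(&current->mm->mmap_sem);
        pinned = get_user_pages(current, current->mm,
                                (unsigned long)buf, nr,
                                1 /* write */, 0 /* force */,
                                pages, NULL);
        up_read(&current->mm->mmap_sem);

        if (pinned < nr) {
            /* a real version would release any partial pins here */
            kfree(pages);
            return -EFAULT;
        }

        k->priv = pages;  /* the receive callback later copies
                           * from the skb queue into these pages */
        return 0;
    }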

Both the network and AIO implementations work on top of hooks inside
the appropriate state machines, not as a repeated-call design (current
AIO) or a special thread (SGI AIO). The AIO work was stopped, since I
was unable to achieve the same speed as synchronous read (maximum
speeds were 2 GB/sec vs. 2.1 GB/sec for AIO and sync I/O respectively,
when reading data from the cache).
Network aio_sendfile() works lazily - it asynchronously populates
pages into the VFS cache (which can be used for various tricks with
adaptive readahead) and then uses the usual ->sendfile() callback.

I have not created an interface for userspace events (like Solaris
has), since right now I do not see its usefulness, but if there are
requirements for that, it is quite easy to do with kevents.

I'm preparing a resend of the kevent patch set (with the cleanups
mentioned in previous e-mails), which will be ready shortly.

1. kevent homepage.
http://tservice.net.ru/~s0mbre/old/?section=projects&item=kevent

2. network aio homepage.
http://tservice.net.ru/~s0mbre/

Re: async network I/O, event channels, etc

2006-07-25 Thread Nicholas Miell
On Tue, 2006-07-25 at 15:01 -0700, David Miller wrote:
> From: Ulrich Drepper <[EMAIL PROTECTED]>
> Date: Tue, 25 Jul 2006 12:23:53 -0700
> 
> > I was very much surprised by the reactions I got after my OLS talk.
> > Lots of people declared interest and even agreed with the approach and
> > asked me to go further ahead with all this.  For those who missed it,
> > the paper and the slides are available on my home page:
> > 
> > http://people.redhat.com/drepper/
> > 
> > As for the next steps I see a number of possible ways.  The discussions
> > can be held on the usual mailing lists (i.e., lkml and netdev) but due
> > to the raw nature of the current proposal I would imagine that would be
> > mainly perceived as noise.
> 
> Since I gave a big thumbs up for Evgeniy's kevent work yesterday
> on linux-kernel, you might want to start by comparing your work
> to his.  Because his has the advantage that 1) we have code now
> and 2) he has written many test applications and performed many
> benchmarks against his code which has flushed out most of the
> major implementation issues.
> 
> I think most of the people who have encouraged your work are unaware
> of Evgeniy's kevent stuff, which is extremely unfortunate; the two
> works are more similar than they are different.
> 
> I do not think discussing all of this on netdev would be perceived
> as noise. :)

While the comparing is going on, how does this compare to Solaris's
ports interface? It's documented at
http://docs.sun.com/app/docs/doc/816-5168/6mbb3hrir?a=view

Also, since we're on the subject, why a whole new interface for event
queuing instead of extending the existing io_getevents(2) and friends?

-- 
Nicholas Miell <[EMAIL PROTECTED]>



Re: async network I/O, event channels, etc

2006-07-25 Thread David Miller
From: Ulrich Drepper <[EMAIL PROTECTED]>
Date: Tue, 25 Jul 2006 12:23:53 -0700

> I was very much surprised by the reactions I got after my OLS talk.
> Lots of people declared interest and even agreed with the approach and
> asked me to go further ahead with all this.  For those who missed it,
> the paper and the slides are available on my home page:
> 
> http://people.redhat.com/drepper/
> 
> As for the next steps I see a number of possible ways.  The discussions
> can be held on the usual mailing lists (i.e., lkml and netdev) but due
> to the raw nature of the current proposal I would imagine that would be
> mainly perceived as noise.

Since I gave a big thumbs up for Evgeniy's kevent work yesterday
on linux-kernel, you might want to start by comparing your work
to his.  Because his has the advantage that 1) we have code now
and 2) he has written many test applications and performed many
benchmarks against his code which has flushed out most of the
major implementation issues.

I think most of the people who have encouraged your work are unaware
of Evgeniy's kevent stuff, which is extremely unfortunate; the two
works are more similar than they are different.

I do not think discussing all of this on netdev would be perceived
as noise. :)

