On Mon, 7 May 2007, Ulrich Drepper wrote:
> On 5/7/07, Davide Libenzi <[EMAIL PROTECTED]> wrote:
> > read(2) is a cancellation point too. So if the fine userspace code issues
> > a random pthread_cancel() to a thread handling that, data is lost together
> > with the session that thread was handling.
>
> This is absolutely not comparable. When read/write is canceled no
> data is lost. Some other thread might have to pick up the slack but
> that's it.
That's bullsh*t, Uli, and you know it.
Whatever the thread read() into its buffer is effectively lost.
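A minimal sketch of the hazard described above (illustrative code, not
taken from the thread; the pipe and message stand in for a real
session):

    /* A cancel delivered around read(2) kills the thread together
     * with whatever already landed in its private buffer. */
    #include <pthread.h>
    #include <string.h>
    #include <unistd.h>

    static int pipefd[2];

    static void *reader(void *arg)
    {
        char buf[64];
        (void)arg;
        for (;;) {
            /* read(2) is a cancellation point. */
            ssize_t n = read(pipefd[0], buf, sizeof(buf));
            if (n <= 0)
                break;
            /* A cancel landing between the read() above and the
             * write() below discards n bytes of session data. */
            write(STDOUT_FILENO, buf, (size_t)n);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        const char *msg = "bytes the session cannot afford to lose";

        pipe(pipefd);
        pthread_create(&t, NULL, reader, NULL);
        write(pipefd[1], msg, strlen(msg));
        pthread_cancel(t);          /* the "random" cancel */
        pthread_join(t, NULL);
        return 0;
    }

Whether the bytes reach stdout depends entirely on where the cancel
lands, which is the objection: the outcome is timing-dependent.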
Ulrich Drepper wrote:
> On 5/7/07, Davi Arnaut <[EMAIL PROTECTED]> wrote:
>> Anyway, we could extend epoll to be mmapable...
>
> Welcome to kevent, well, except with a lot more ballast and awkward
> interfaces.
So an mmapable epoll is equivalent to kevent.. great! Well, except
without a whole new API.
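For concreteness, one purely hypothetical shape an mmapable epoll could
take; nothing below is a real ABI, in epoll or anywhere else, and every
name is invented:

    #include <stdint.h>
    #include <sys/epoll.h>

    /* Hypothetical: the kernel produces events into an mmap'd ring,
     * userspace consumes them without a syscall per batch. */
    struct ep_ring {
        volatile uint32_t  head;    /* producer index (kernel)     */
        volatile uint32_t  tail;    /* consumer index (userspace)  */
        uint32_t           mask;    /* ring slots - 1, power of 2  */
        struct epoll_event ev[];    /* flexible array of events    */
    };

    /* Drain one event, if available.  Real code would need memory
     * barriers between the index reads and the slot read. */
    static inline int ep_ring_pop(struct ep_ring *r,
                                  struct epoll_event *out)
    {
        if (r->tail == r->head)
            return 0;               /* ring empty */
        *out = r->ev[r->tail & r->mask];
        r->tail++;
        return 1;
    }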
Ulrich Drepper wrote:
> On 5/7/07, Davi Arnaut <[EMAIL PROTECTED]> wrote:
>> See Linus's message on this same thread.
>
> No. I'm talking about the userlevel side, not kernel side.
So you probably knew the answer before asking the question.
Ulrich Drepper wrote:
On 5/7/07, Davi Arnaut <[EMAIL PROTECTED]> wrote:
> See Linus's message on this same thread.
No. I'm talking about the userlevel side, not kernel side.
If a thread is canceled *after* it returns from the syscall but before
it reports the event to the caller (i.e., while still in the syscall
wrapper), the event is lost.
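One userlevel way to shrink that window, sketched under the assumption
that the application can tolerate cancellation being deferred across
the wait (this is not glibc's actual wrapper):

    #include <pthread.h>
    #include <sys/epoll.h>

    /* Keep cancellation disabled while an event is "owned", and only
     * allow it at a point where losing the thread loses nothing. */
    int wait_one(int epfd, struct epoll_event *ev)
    {
        int oldstate, n;

        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
        n = epoll_wait(epfd, ev, 1, -1);
        /* ... hand *ev off where another thread can see it ... */
        pthread_setcancelstate(oldstate, NULL);
        pthread_testcancel();   /* safe point: nothing owned here */
        return n;
    }

The trade-off is plain: with cancellation disabled, the thread cannot
be canceled while it blocks in epoll_wait(), which is the other half
of the disagreement above.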
Ulrich Drepper wrote:
On 5/5/07, Davi Arnaut <[EMAIL PROTECTED]> wrote:
> A google search turns up a few users. It also addresses some complaints
> from Drepper.
There is a huge problem with this approach and we're back at the
inadequate interface.
select/poll/epoll are thread cancellation points. I.e., the thread
can be canceled in these calls, and with a wake-one scheme the event
that woke the canceled thread is then handled by nobody.
Davide Libenzi wrote:
On Mon, 7 May 2007, Chase Venters wrote:
> I'm working on event handling code for multiple projects right now, and my
> method of calling epoll_wait() is to do so from several threads. I've glanced
> at the epoll code but obviously haven't noticed the wake-all behavior... good
> to know. I suppose
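A small experiment along the lines Chase describes (illustrative;
thread count and timeouts are arbitrary): park several threads in
epoll_wait() on one epoll fd, feed a single event, and count how many
wake.  How many return 1 versus time out depends on the kernel's
wakeup policy, which is exactly what the patch changes.

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    #define NTHREADS 4

    static int epfd, pipefd[2];

    static void *waiter(void *arg)
    {
        struct epoll_event ev;
        int n = epoll_wait(epfd, &ev, 1, 2000);   /* 2s timeout */
        printf("thread %ld: epoll_wait returned %d\n", (long)arg, n);
        return NULL;
    }

    int main(void)
    {
        struct epoll_event ev = { .events = EPOLLIN };
        pthread_t t[NTHREADS];
        long i;

        pipe(pipefd);
        epfd = epoll_create(NTHREADS);
        ev.data.fd = pipefd[0];
        epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev);

        for (i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, waiter, (void *)i);
        sleep(1);                   /* let all threads park */
        write(pipefd[1], "x", 1);   /* one event, N waiters */
        for (i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }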
Linus Torvalds wrote:
On Sat, 5 May 2007, Eric Dumazet wrote:
>
> But... what happens if the thread that was chosen exits from the loop in
> ep_poll() with res = -EINTR (because of signal_pending(current))
Not a problem.
What happens is that an exclusive wake-up stops on the first entry in the
wait-queue that it actually wakes up.
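The behavior Linus describes matches the kernel's wait-queue walk of
that era; it reads roughly like this (a light paraphrase of
__wake_up_common(), not a verbatim quote):

    static void __wake_up_common(wait_queue_head_t *q, unsigned int mode,
                                 int nr_exclusive, int sync, void *key)
    {
        struct list_head *tmp, *next;

        list_for_each_safe(tmp, next, &q->task_list) {
            wait_queue_t *curr = list_entry(tmp, wait_queue_t, task_list);
            unsigned flags = curr->flags;

            /* curr->func() returns nonzero only if it actually woke
             * the task, so the walk stops at the first exclusive
             * waiter that was really woken, not merely visited. */
            if (curr->func(curr, mode, sync, key) &&
                (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)
                break;
        }
    }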
On Fri, 4 May 2007, Davi Arnaut wrote:
Hi,
If multiple threads are parked on epoll_wait (on a single epoll fd) and
events become available, epoll performs a wake up of all threads of the
poll wait list, causing a thundering herd of processes trying to grab
the eventpoll lock.
This patch addresses this by using exclusive waiters (wake one).
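The gist of the change, as a sketch of the direction rather than the
diff that was actually posted: make each ep_poll() waiter exclusive, so
a wakeup picks one parked thread instead of the whole herd.

    --- fs/eventpoll.c (before, inside ep_poll())
    +++ fs/eventpoll.c (after; sketch, not the posted patch)
            init_waitqueue_entry(&wait, current);
    -       add_wait_queue(&ep->wq, &wait);
    +       add_wait_queue_exclusive(&ep->wq, &wait);

add_wait_queue_exclusive() tags the entry WQ_FLAG_EXCLUSIVE and queues
it at the tail, which is what makes the exclusive wake-up walk quoted
above stop after waking a single thread.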