On Tue, Jan 6, 2015 at 10:42 AM, Sedat Dilek <sedat.di...@gmail.com> wrote:
> On Tue, Jan 6, 2015 at 10:40 AM, Peter Zijlstra <pet...@infradead.org> wrote:
>> On Tue, Jan 06, 2015 at 05:49:11AM +0100, Sedat Dilek wrote:
>>> This has been there since just before rc1. Is there a fix for this
>>> stalled in someone's git tree, maybe?
>>>
>>> [    7.952588] WARNING: CPU: 0 PID: 299 at kernel/sched/core.c:7303
>>> __might_sleep+0x8d/0xa0()
>>> [    7.952592] do not call blocking ops when !TASK_RUNNING; state=1 set at 
>>> [<ffffffff910a0f7a>] prepare_to_wait+0x2a/0x90
>>> [    7.952595] CPU: 0 PID: 299 Comm: systemd-readahe Not tainted 
>>> 3.19.0-rc3+ #100
>>
>>> [    7.952620]  [<ffffffff911a63e0>] fanotify_read+0xe0/0x5b0
>>
>>
>> http://marc.info/?l=linux-kernel&m=141874374029791
>
> Hehe, I created the same fix... it did not help here.
>

From my call-trace...

[   88.028739]  [<ffffffff8124433f>] aio_read_events+0x4f/0x2d0

...and having a quick look at read_events(), I see this comment:

"
         * But aio_read_events() can block, and if it blocks it's going to flip
         * the task state back to TASK_RUNNING.
"

[ fs/aio.c ]
...
static long read_events(struct kioctx *ctx, long min_nr, long nr,
                        struct io_event __user *event,
                        struct timespec __user *timeout)
{
        ktime_t until = { .tv64 = KTIME_MAX };
        long ret = 0;

        if (timeout) {
                struct timespec ts;

                if (unlikely(copy_from_user(&ts, timeout, sizeof(ts))))
                        return -EFAULT;

                until = timespec_to_ktime(ts);
        }

        /*
         * Note that aio_read_events() is being called as the conditional - i.e.
         * we're calling it after prepare_to_wait() has set task state to
         * TASK_INTERRUPTIBLE.
         *
         * But aio_read_events() can block, and if it blocks it's going to flip
         * the task state back to TASK_RUNNING.
         *
         * This should be ok, provided it doesn't flip the state back to
         * TASK_RUNNING and return 0 too much - that causes us to spin. That
         * will only happen if the mutex_lock() call blocks, and we then find
         * the ringbuffer empty. So in practice we should be ok, but it's
         * something to be aware of when touching this code.
         */
        if (until.tv64 == 0)
                aio_read_events(ctx, min_nr, nr, event, &ret);
        else
                wait_event_interruptible_hrtimeout(ctx->wait,
                                aio_read_events(ctx, min_nr, nr, event, &ret),
                                until);

        if (!ret && signal_pending(current))
                ret = -EINTR;

        return ret;
}
...

I don't have the right skills to look into this any deeper.

- Sedat -