On Mon, Jun 24, 2013 at 09:15:45AM +0200, Jens Axboe wrote:
> Willy, I think the general design is fine, hooking in via the bdi is the
> only way to get back to the right place from where you need to sleep.
> Some thoughts:
>
> - This should be hooked in via blk-iopoll, both of them should call into
>   the same driver hook for polling completions.
I actually started working on this, then I realised that it's a bad idea.
blk-iopoll's poll function is to poll the single I/O queue closest to
this CPU.  The iowait poll function is to poll all queues that the I/O
for this address_space might complete on.  I'm reluctant to ask drivers
to define two poll functions, but I'm even more reluctant to ask them to
define one function with two purposes.

> - It needs to be more intelligent in when you want to poll and when you
>   want regular irq driven IO.

Oh yeah, absolutely.  While the example patch didn't show it, I wouldn't
enable it for all NVMe devices; only ones with sufficiently low latency.
There's also the ability for the driver to look at the number of
outstanding I/Os and return an error (eg -EBUSY) to stop spinning
(sketched at the end of this mail).

> - With the former note, the app either needs to opt in (and hence
>   willingly sacrifice CPU cycles of its scheduling slice) or it needs to
>   be nicer in when it gives up and goes back to irq driven IO.

Yup.  I like the way you framed it.  If the task *wants* to spend its CPU
cycles on polling for I/O instead of giving up the remainder of its time
slice, then it should be able to do that.  After all, it already can; it
can submit an I/O request via AIO, and then call io_getevents in a tight
loop (also sketched at the end of this mail).

So maybe the right way to do this is with a task flag?  If we go that
route, I'd like to further develop this option to allow I/Os to be
designated as "low latency" vs "normal".  Taking a page fault would be
"low latency" for all tasks, not just ones that choose to spin for I/O.
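
For the -EBUSY idea, here's roughly what I have in mind.  This is only a
sketch: the hook name, the example_dev structure, the threshold and the
completion-reaping helper are all invented for illustration, not taken
from the posted patch.

        /*
         * Hypothetical iowait poll hook.  example_dev, the threshold and
         * example_process_completions() are made-up names.
         */
        static int example_bdi_iowait_poll(struct backing_dev_info *bdi)
        {
                struct example_dev *dev =
                        container_of(bdi, struct example_dev, bdi);

                /*
                 * Too much already in flight: spinning is unlikely to pay
                 * off, so tell the caller to fall back to irq-driven sleep.
                 */
                if (atomic_read(&dev->outstanding) > EXAMPLE_POLL_THRESHOLD)
                        return -EBUSY;

                /* Otherwise reap whatever has completed on this device */
                return example_process_completions(dev);
        }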
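
And here's roughly what the "do it yourself in userspace" version looks
like today with libaio: submit one read, then spin on io_getevents() with
a zero timeout.  Untested, minimal error handling; the device path, offset
and buffer size are arbitrary.  Build with -laio.

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <libaio.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define BUF_SIZE 4096

        int main(void)
        {
                io_context_t ctx = 0;
                struct iocb cb, *cbs[1] = { &cb };
                struct io_event ev;
                struct timespec zero = { 0, 0 };
                void *buf;
                int fd;

                if (io_setup(1, &ctx) < 0)
                        exit(1);

                fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
                if (fd < 0)
                        exit(1);

                if (posix_memalign(&buf, 4096, BUF_SIZE))
                        exit(1);

                /* Submit one read, then burn CPU until it completes */
                io_prep_pread(&cb, fd, buf, BUF_SIZE, 0);
                if (io_submit(ctx, 1, cbs) != 1)
                        exit(1);

                while (io_getevents(ctx, 0, 1, &ev, &zero) == 0)
                        ;       /* tight loop: no sleep, no irq wakeup needed */

                printf("read completed, res=%ld\n", (long)ev.res);
                io_destroy(ctx);
                return 0;
        }

The task-flag version would essentially move that busy loop into the
kernel, where it can be bounded and where page faults could use it too.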