On Thu, Aug 21, 2014 at 10:58 AM, Jens Axboe <[email protected]> wrote:
> On 2014-08-20 21:54, Ming Lei wrote:
>>>> From my investigation, context switch increases almost 50% with
>>>> workqueue compared with kthread in loop in a quad-core VM. With
>>>> kthread, requests may be handled as a batch in cases which won't
>>>> be blocked in read()/write() (like null_blk, tmpfs, ...), but that
>>>> is no longer possible with workqueue. Also block plug&unplug
>>>> should have been used with kthread to optimize the case,
>>>> especially when kernel AIO is applied, which is impossible with
>>>> workqueue too.
>>>
>>> OK, that one is actually a good point, since one need not do
>>> per-item queueing. We could handle different units, though. And we
>>> should have proper marking of the last item in a chain of stuff, so
>>> we might even be able to offload based on that instead of doing
>>> single items. It won't help the sync case, but for that, workqueue
>>> and kthread would be identical.
>>
>> We may do that by introducing a queue_rq_list callback in
>> blk_mq_ops, and I will figure out one patch today to see if it can
>> help the case.
>
> I don't think we should add to the interface, I prefer keeping it
> clean like it is right now. At least not if we can get around it. My
> point is that the driver already knows when the chain is complete,
> when REQ_LAST is set. So before that event triggers, it need not kick
> off IO, or at least it could do it in batches before that. That may
> not be fully reliable in case of queueing errors, but if REQ_LAST or
> 'error return' is used as the way to kick off pending IO, then that
> should be good enough. Haven't audited this in a while, but at least
> that is the intent of REQ_LAST.
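Keying off the last-request marker should be doable on the driver
side. Below is a rough sketch (not against any particular tree) of
what I understand you to mean: ->queue_rq() only appends to a pending
list, and the work item is kicked once per chain. The flag name is a
placeholder (REQ_END here, the REQ_LAST you mention is the same idea),
and the loop_device fields (lo_lock, lo_pending, lo_work, wq) are made
up for illustration:

#include <linux/blk-mq.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* Assumed fields for the sketch; the real struct loop_device differs. */
struct loop_device {
	spinlock_t		lo_lock;
	struct list_head	lo_pending;	/* requests not yet issued */
	struct work_struct	lo_work;
	struct workqueue_struct	*wq;
};

static int loop_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *rq)
{
	struct loop_device *lo = rq->q->queuedata;

	blk_mq_start_request(rq);

	spin_lock_irq(&lo->lo_lock);
	list_add_tail(&rq->queuelist, &lo->lo_pending);
	spin_unlock_irq(&lo->lo_lock);

	/*
	 * Defer the wakeup until the block layer marks the last request
	 * of the chain, so N requests cost one queue_work() instead of N.
	 */
	if (rq->cmd_flags & REQ_END)
		queue_work(lo->wq, &lo->lo_work);

	return BLK_MQ_RQ_QUEUE_OK;
}

The one hole, as you say, is a queueing error arriving before the last
request; the error path would have to kick the work item too, or
requests could sit on the pending list forever.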
Another point is that N calls to queue_work(), one per request, may
cost more than a single queue_work() covering N requests, since the
context may still switch back and forth between executions of the work
item. Anyway I need to run tests first to see if handling the requests
as a batch can bring back the throughput on sequential read. A rough
sketch of the matching work function is below.
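Again just a sketch, under the same assumed loop_device fields as
above; loop_handle_rq() is a made-up name standing in for the existing
per-request issue path. The work function splices off everything
queued so far and drives it as one batch under a plug:

#include <linux/blkdev.h>

/* Placeholder for the existing per-request issue path. */
static void loop_handle_rq(struct loop_device *lo, struct request *rq);

static void loop_work_fn(struct work_struct *work)
{
	struct loop_device *lo = container_of(work, struct loop_device,
					      lo_work);
	struct request *rq, *next;
	struct blk_plug plug;
	LIST_HEAD(batch);

	/* Grab everything queued so far in one go. */
	spin_lock_irq(&lo->lo_lock);
	list_splice_init(&lo->lo_pending, &batch);
	spin_unlock_irq(&lo->lo_lock);

	/*
	 * One plug around the whole batch, so the underlying fs/device
	 * sees back-to-back submissions instead of one context switch
	 * per request.
	 */
	blk_start_plug(&plug);
	list_for_each_entry_safe(rq, next, &batch, queuelist) {
		list_del_init(&rq->queuelist);
		loop_handle_rq(lo, rq);
	}
	blk_finish_plug(&plug);
}

Thanks,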

