On Thu, May 29, 2014 at 06:24:02PM -0700, Linus Torvalds wrote:
> On Thu, May 29, 2014 at 5:50 PM, Minchan Kim <minc...@kernel.org> wrote:
> >>
> >> You could also try Dave's patch, and _not_ do my mm/vmscan.c part.
> >
> > Sure. While I write this, Rusty's test crashed, so I will try Dave's
> > patch, then yours except the vmscan.c part.
>
> Looking more at Dave's patch (well, description), I don't think there
> is any way in hell we can ever apply it. If I read it right, it will
> cause all IO that overflows the max request count to go through the
> scheduler to get it flushed. Maybe I misread it, but that's definitely
> not acceptable. Maybe it's not noticeable with a slow rotational
> device, but modern ssd hardware? No way.
>
> I'd *much* rather slow down the swap side. Not "real IO". So I think
> my mm/vmscan.c patch is preferable (but yes, it might require some
> work to make kswapd do better).
>
> So you can try Dave's patch just to see what it does for stack depth,
> but other than that it looks unacceptable unless I misread things.
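For context, the behaviour in dispute here is the from_schedule flag on the
plug flush path. A condensed sketch of the blk-core.c logic of that era
(tracing, lock annotations and error handling dropped, so treat the exact
shape as approximate): the overflow flush in blk_queue_bio() calls
blk_flush_plug_list(plug, false), and queue_unplugged() only punts the
dispatch to kblockd when from_schedule is true.

        static void queue_unplugged(struct request_queue *q, unsigned int depth,
                                    bool from_schedule)
        {
                if (from_schedule)
                        blk_run_queue_async(q);  /* defer the queue run to kblockd */
                else
                        __blk_run_queue(q);      /* run the queue from this stack, now */
                spin_unlock(q->queue_lock);
        }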
Yeah, it's a hack, not intended as a potential solution.

I'm thinking, though, that plug flushing behaviour really depends on the
plugger's context, so there is no single "correct" behaviour. If we are
doing process-driven IO, then we want immediate dispatch, but for IO where
stack depth is an issue, or which is done for bulk throughput (e.g.
background writeback), async dispatch through kblockd is desirable.

If the patch I sent solves the swap stack usage issue, then perhaps we
should look towards adding "blk_plug_start_async()" to pass such hints to
the plug flushing code. I'd want to use the same behaviour in
__xfs_buf_delwri_submit() for bulk metadata writeback in XFS, and probably
also in mpage_writepages() for bulk data writeback in WB_SYNC_NONE
context....

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com
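A rough sketch of what such a hint could look like. Note that
blk_plug_start_async() is only being proposed above, so the async_flush
field and the point where the flush path would check it are illustrative
assumptions, not code from the patch:

        /*
         * Hypothetical variant of blk_start_plug(): same plug setup, but
         * record that the plugger prefers async flushing.
         */
        static inline void blk_plug_start_async(struct blk_plug *plug)
        {
                blk_start_plug(plug);           /* existing API */
                plug->async_flush = true;       /* assumed new field in struct blk_plug */
        }

        /*
         * blk_flush_plug_list() could then pass the hint down, so a hinted
         * plug is treated like a flush from the scheduler: dispatch is
         * deferred to kblockd instead of being issued from the (possibly
         * deep) submitting stack.
         */
        queue_unplugged(q, depth, from_schedule || plug->async_flush);

Callers wanting today's behaviour would keep using blk_start_plug(); the
bulk writeback paths mentioned above (__xfs_buf_delwri_submit(), and
mpage_writepages() in WB_SYNC_NONE context) would switch to the async
variant.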