On Thu, Oct 11, 2007 at 06:19:34PM +0100, richard kennedy wrote:
> When background_writeout() (mm/page-writeback.c) sees that
> writeback_inodes() skipped any pages but didn't encounter congestion,
> it exits even though it hasn't yet written enough pages.
> 
> Performing 2 (or more) concurrent copies of a large file often creates
> lots of skipped pages (1000+), making background_writeout() exit, so
> pages don't get written out until we reach dirty_ratio.
> 
> I added some instrumentation to fs/buffer.c in
> __block_write_full_page(..) and all the skipped pages come from here :-
> 
> done:
>       if (nr_underway == 0) {
>               /*
>                * The page was marked dirty, but the buffers were
>                * clean.  Someone wrote them back by hand with
>                * ll_rw_block/submit_bh.  A rare case.
>                */
>               end_page_writeback(page);
> 
>               /*
>                * The page and buffer_heads can be released at any time from
>                * here on.
>                */

>               wbc->pages_skipped++;   /* We didn't write this page */

FYI: The above line has just been removed in 2.6.23-mm1, which fixed the bug.
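
For anyone following along: the reason that increment mattered is the
exit check in background_writeout(). Roughly, the loop looks like this
(paraphrased from memory of that era's mm/page-writeback.c, not an
exact quote, so details may differ slightly):

	for (;;) {
		...
		wbc.encountered_congestion = 0;
		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
		wbc.pages_skipped = 0;
		writeback_inodes(&wbc);
		min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
			/* Wrote less than expected */
			congestion_wait(WRITE, HZ/10);
			if (!wbc.encountered_congestion)
				break;	/* exits before min_pages is reached */
		}
	}

With the increment removed, a page whose buffers were already written
back by someone else no longer bumps pages_skipped, so the loop keeps
writing until min_pages is met or it genuinely hits congestion.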

Thank you,
Fengguang

