When doing aio ring page migration, we migrate the page and then update
ctx->ring_pages[], like the following:
aio_migratepage()
 |-> migrate_page_copy(new, old)
 |   ......                     /* Need barrier here */
 |-> ctx->ring_pages[idx] = new

We need a memory barrier between these two operations. Otherwise, if the
update of ctx->ring_pages[] is reordered before the memory copy (for
example by compiler optimization), other processes may access the new
ring page before it is fully initialized. So add an smp_wmb() on the
migration side and a matching smp_read_barrier_depends() on the read
side to synchronize them. (A standalone userspace sketch of this barrier
pairing follows the patch.)

Signed-off-by: Yasuaki Ishimatsu <isimatu.yasu...@jp.fujitsu.com>
Signed-off-by: Tang Chen <tangc...@cn.fujitsu.com>
---
v2: change smp_rmb() to smp_read_barrier_depends(). Thanks Miao.
---
 fs/aio.c |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index dc70246..4133ba9 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -327,6 +327,14 @@ static int aio_migratepage(struct address_space *mapping, struct page *new,
 	pgoff_t idx;
 	spin_lock_irqsave(&ctx->completion_lock, flags);
 	migrate_page_copy(new, old);
+
+	/*
+	 * Ensure the memory copy is finished before updating
+	 * ctx->ring_pages[]. Otherwise other processes may access
+	 * new ring pages which are not fully initialized.
+	 */
+	smp_wmb();
+
 	idx = old->index;
 	if (idx < (pgoff_t)ctx->nr_pages) {
 		/* And only do the move if things haven't changed */
@@ -1069,6 +1077,12 @@ static long aio_read_events_ring(struct kioctx *ctx,
 		page = ctx->ring_pages[pos / AIO_EVENTS_PER_PAGE];
 		pos %= AIO_EVENTS_PER_PAGE;
 
+		/*
+		 * Ensure that the page's data was copied from the old
+		 * one by aio_migratepage().
+		 */
+		smp_read_barrier_depends();
+
 		ev = kmap(page);
 		copy_ret = copy_to_user(event + ret, ev + pos,
 					sizeof(*ev) * avail);
--
1.7.7
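
For reference, below is a minimal, self-contained userspace sketch of the
same publish/consume barrier pairing. It uses C11 atomics rather than the
kernel's smp_wmb()/smp_read_barrier_depends() primitives, which are not
available outside the kernel: the release store stands in for the copy +
smp_wmb() + pointer update in aio_migratepage(), and the consume load
stands in for the dependent read that smp_read_barrier_depends() orders
in aio_read_events_ring(). All names here (published_page, migrator, and
so on) are illustrative and do not come from fs/aio.c.

/*
 * Userspace analogue of the barrier pairing in this patch (illustrative
 * names, not from fs/aio.c).  Build with: cc -std=c11 -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

struct ring_page {
	char data[64];
};

static struct ring_page new_page;
/* Plays the role of ctx->ring_pages[idx]; NULL until published. */
static _Atomic(struct ring_page *) published_page;

static void *migrator(void *unused)
{
	(void)unused;

	/* Step 1: fully initialize the page (~ migrate_page_copy()). */
	strcpy(new_page.data, "copied ring page contents");

	/*
	 * Step 2: publish the pointer.  The release store guarantees the
	 * copy above is visible before the pointer update, the ordering
	 * that smp_wmb() provides in aio_migratepage().
	 */
	atomic_store_explicit(&published_page, &new_page,
			      memory_order_release);
	return NULL;
}

static void *reader(void *unused)
{
	struct ring_page *page;

	(void)unused;

	/*
	 * The consume load orders the dereference below after the pointer
	 * load via the data dependency, the property that
	 * smp_read_barrier_depends() relies on in aio_read_events_ring().
	 */
	do {
		page = atomic_load_explicit(&published_page,
					    memory_order_consume);
	} while (!page);

	printf("reader saw: %s\n", page->data);
	return NULL;
}

int main(void)
{
	pthread_t m, r;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&m, NULL, migrator, NULL);
	pthread_join(m, NULL);
	pthread_join(r, NULL);
	return 0;
}

The reason smp_read_barrier_depends() is enough here (and why v2 relaxed
smp_rmb() to it) is that the reader dereferences the very pointer it just
loaded: that data dependency already orders the two accesses on every
architecture except Alpha, where the primitive expands to a real barrier.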