On Mon, 10 Sep 2012 09:19:20 -0700
Tim Chen <tim.c.c...@linux.intel.com> wrote:

> This is the second version of the patch series. Thanks to Matthew Wilcox 
> for many valuable suggestions on improving the patches.
> 
> To do page reclamation, the shrink_page_list function takes two locks
> on a page-by-page basis.  One is the tree lock protecting the radix
> tree of the page's mapping, and the other is the mapping->i_mmap_mutex
> protecting the mapped pages.  I try to batch the operations on pages
> sharing the same lock to reduce lock contention.  The first patch
> batches the operations protected by the tree lock, while the second
> and third patches batch the operations protected by the i_mmap_mutex.
> 
> I measured a 14% throughput improvement with a workload that puts
> heavy pressure on the page cache by reading many large mmapped files
> simultaneously on an 8-socket Westmere server.

That sounds good, although more details on the performance changes
would be appreciated - after all, that's the entire point of the
patchset.

And we shouldn't only test for improvements - we should also test for
degradation.  What workloads might be harmed by this change?  I'd suggest

- a single process which opens N files and reads one page from each
  one, then repeats.  So there are no contiguous LRU pages which share
  the same ->mapping.  Get some page reclaim happening, measure the
  impact.

- The batching means that we now do multiple passes over pageframes
  where we used to do things in a single pass.  Walking all those new
  page lists will be expensive if they are lengthy enough to cause L1
  cache evictions.

  What would be a test for this?  A simple, single-threaded walk
  through a file, I guess?

Mel's review comments were useful, thanks.