Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-05 Thread Dave Hansen
On 06/05/2013 12:28 AM, Hillf Danton wrote:
>> > +	mapping = page_mapping(lru_to_page(remove_list));
>> > +	spin_lock_irq(&mapping->tree_lock);
>> > +	while (!list_empty(remove_list)) {
>> > +		page = lru_to_page(remove_list);
>> > +

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-05 Thread Minchan Kim
On Wed, Jun 05, 2013 at 03:28:27PM +0800, Hillf Danton wrote:
> On Tue, Jun 4, 2013 at 1:07 PM, Minchan Kim wrote:
> > On Tue, Jun 04, 2013 at 09:17:26AM +0800, Hillf Danton wrote:
> >> On Tue, Jun 4, 2013 at 4:02 AM, Dave Hansen wrote:
> >> > +/*
> >> > + * pages come in here (via remove_list) l

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-05 Thread Hillf Danton
On Tue, Jun 4, 2013 at 1:07 PM, Minchan Kim wrote:
> On Tue, Jun 04, 2013 at 09:17:26AM +0800, Hillf Danton wrote:
>> On Tue, Jun 4, 2013 at 4:02 AM, Dave Hansen wrote:
>> > +/*
>> > + * pages come in here (via remove_list) locked and leave unlocked
>> > + * (on either ret_pages or free_pages)
>>

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-04 Thread Minchan Kim
On Tue, Jun 04, 2013 at 08:29:18AM -0700, Dave Hansen wrote:
> On 06/03/2013 11:02 PM, Minchan Kim wrote:
> >> > Why do we need new lru list instead of using @free_pages?
> > I got your point that @free_pages could have freed page by
> > put_page_testzero of shrink_page_list and they don't have
> >

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-04 Thread Dave Hansen
On 06/03/2013 11:02 PM, Minchan Kim wrote:
>> > Why do we need new lru list instead of using @free_pages?
> I got your point that @free_pages could have freed page by
> put_page_testzero of shrink_page_list and they don't have
> valid mapping so __remove_mapping_batch's mapping_release_page
> would

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-04 Thread Dave Hansen
On 06/03/2013 10:07 PM, Minchan Kim wrote:
>>> > > +	while (!list_empty(remove_list)) {
>>> > > +		page = lru_to_page(remove_list);
>>> > > +		BUG_ON(!PageLocked(page));
>>> > > +		BUG_ON(page_mapping(page) != mapping);
>>> > > +		list_del(&

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-04 Thread Minchan Kim
On Mon, Jun 03, 2013 at 11:10:02PM -0700, Dave Hansen wrote:
> On 06/03/2013 10:01 PM, Minchan Kim wrote:
> >> > +static int __remove_mapping_batch(struct list_head *remove_list,
> >> > +				  struct list_head *ret_pages,
> >> > +				  struct

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-03 Thread Dave Hansen
On 06/03/2013 10:01 PM, Minchan Kim wrote:
>> > +static int __remove_mapping_batch(struct list_head *remove_list,
>> > +				  struct list_head *ret_pages,
>> > +				  struct list_head *free_pages)
>> > +{
>> > +	int nr_reclaimed = 0;
>> > +	struct addre
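
[The previews above only show the opening of __remove_mapping_batch(). For orientation, here is a minimal sketch of the shape being discussed, stitched together from the fragments quoted in this thread; it is not the posted patch. __remove_mapping_page_locked() is a hypothetical stand-in for the tree_lock-protected half of __remove_mapping() that the earlier patches in the series split out, mapping_release_page() is the helper name Minchan mentions above, and the intermediate list name is likewise illustrative.]

static int __remove_mapping_batch(struct list_head *remove_list,
				  struct list_head *ret_pages,
				  struct list_head *free_pages)
{
	int nr_reclaimed = 0;
	struct address_space *mapping;
	struct page *page;
	LIST_HEAD(need_free_mapping);	/* illustrative name */

	if (list_empty(remove_list))
		return 0;

	/* every page queued on remove_list shares the same mapping */
	mapping = page_mapping(lru_to_page(remove_list));
	spin_lock_irq(&mapping->tree_lock);
	while (!list_empty(remove_list)) {
		page = lru_to_page(remove_list);
		BUG_ON(!PageLocked(page));
		BUG_ON(page_mapping(page) != mapping);
		list_del(&page->lru);

		/*
		 * Hypothetical stand-in for the locked half of
		 * __remove_mapping(); it fails if someone else still
		 * holds a reference to the page.
		 */
		if (!__remove_mapping_page_locked(mapping, page)) {
			unlock_page(page);
			list_add(&page->lru, ret_pages);
			continue;
		}
		list_add(&page->lru, &need_free_mapping);
	}
	spin_unlock_irq(&mapping->tree_lock);

	/* per-page release work runs after tree_lock is dropped */
	while (!list_empty(&need_free_mapping)) {
		page = lru_to_page(&need_free_mapping);
		list_move(&page->lru, free_pages);
		mapping_release_page(mapping, page);
		unlock_page(page);
		nr_reclaimed++;
	}
	return nr_reclaimed;
}

[The separate intermediate list is the "new lru list" Minchan asks about earlier in the thread: @free_pages can also hold pages already freed via put_page_testzero() in shrink_page_list(), which no longer have a valid mapping, so the release step cannot simply walk @free_pages.]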

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-03 Thread Minchan Kim
On Tue, Jun 04, 2013 at 02:01:03PM +0900, Minchan Kim wrote:
> On Mon, Jun 03, 2013 at 01:02:08PM -0700, Dave Hansen wrote:
> >
> > From: Dave Hansen
> > changes for v2:
> >  * remove batch_has_same_mapping() helper.  A local variable makes
> >    the check cheaper and cleaner
> >  * Move batch dr

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-03 Thread Minchan Kim
On Tue, Jun 04, 2013 at 09:17:26AM +0800, Hillf Danton wrote:
> On Tue, Jun 4, 2013 at 4:02 AM, Dave Hansen wrote:
> > +/*
> > + * pages come in here (via remove_list) locked and leave unlocked
> > + * (on either ret_pages or free_pages)
> > + *
> > + * We do this batching so that we free batches

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-03 Thread Minchan Kim
On Mon, Jun 03, 2013 at 01:02:08PM -0700, Dave Hansen wrote:
>
> From: Dave Hansen
> changes for v2:
>  * remove batch_has_same_mapping() helper.  A local variable makes
>    the check cheaper and cleaner
>  * Move batch draining later to where we already know
>    page_mapping().  This probably f

Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-03 Thread Hillf Danton
On Tue, Jun 4, 2013 at 4:02 AM, Dave Hansen wrote:
> +/*
> + * pages come in here (via remove_list) locked and leave unlocked
> + * (on either ret_pages or free_pages)
> + *
> + * We do this batching so that we free batches of pages with a
> + * single mapping->tree_lock acquisition/release. This

[v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations

2013-06-03 Thread Dave Hansen
From: Dave Hansen

changes for v2:
 * remove batch_has_same_mapping() helper.  A local variable makes
   the check cheaper and cleaner
 * Move batch draining later to where we already know
   page_mapping().  This probably fixes a truncation race anyway
 * rename batch_for_mapping_removal -> batch_
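
[The changelog above only mentions the caller side in passing, so here is a rough, illustrative sketch of how the thread describes shrink_page_list() building and draining the batch. The list name follows the pre-rename changelog (batch_for_mapping_removal), the wrapper function name is invented for illustration, pages are assumed to arrive locked as in shrink_page_list(), and the real scan loop does far more than this.]

/*
 * Illustrative only: a simplified stand-in for the batching logic
 * inside shrink_page_list(), not the posted patch.
 */
static unsigned long shrink_batched(struct list_head *page_list,
				    struct list_head *ret_pages,
				    struct list_head *free_pages)
{
	LIST_HEAD(batch_for_mapping_removal);	/* renamed later per the changelog */
	struct address_space *batch_mapping = NULL;
	unsigned long nr_reclaimed = 0;

	while (!list_empty(page_list)) {
		struct page *page = lru_to_page(page_list);
		struct address_space *mapping = page_mapping(page);

		/*
		 * Drain before the batch would mix mappings.  Doing it
		 * here, where page_mapping() is already known, is the
		 * "move batch draining later" change noted for v2.
		 */
		if (batch_mapping && batch_mapping != mapping) {
			nr_reclaimed += __remove_mapping_batch(
					&batch_for_mapping_removal,
					ret_pages, free_pages);
			batch_mapping = NULL;
		}

		/* queue the page instead of taking tree_lock per page */
		list_move(&page->lru, &batch_for_mapping_removal);
		batch_mapping = mapping;
	}

	/* drain whatever is left at the end of the scan */
	nr_reclaimed += __remove_mapping_batch(&batch_for_mapping_removal,
					       ret_pages, free_pages);
	return nr_reclaimed;
}

[Tracking the current batch's mapping in a local variable, as above, is what replaces the batch_has_same_mapping() helper removed in v2.]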