On 06/05/2013 12:28 AM, Hillf Danton wrote:
>> > +	mapping = page_mapping(lru_to_page(remove_list));
>> > +	spin_lock_irq(&mapping->tree_lock);
>> > +	while (!list_empty(remove_list)) {
>> > +		page = lru_to_page(remove_list);
>> > +
On Wed, Jun 05, 2013 at 03:28:27PM +0800, Hillf Danton wrote:
> On Tue, Jun 4, 2013 at 1:07 PM, Minchan Kim wrote:
> > On Tue, Jun 04, 2013 at 09:17:26AM +0800, Hillf Danton wrote:
> >> On Tue, Jun 4, 2013 at 4:02 AM, Dave Hansen wrote:
> >> > +/*
> >> > + * pages come in here (via remove_list) locked and leave unlocked
> >> > + * (on either ret_pages or free_pages)
On Tue, Jun 4, 2013 at 1:07 PM, Minchan Kim wrote:
> On Tue, Jun 04, 2013 at 09:17:26AM +0800, Hillf Danton wrote:
>> On Tue, Jun 4, 2013 at 4:02 AM, Dave Hansen wrote:
>> > +/*
>> > + * pages come in here (via remove_list) locked and leave unlocked
>> > + * (on either ret_pages or free_pages)
>>
On Tue, Jun 04, 2013 at 08:29:18AM -0700, Dave Hansen wrote:
> On 06/03/2013 11:02 PM, Minchan Kim wrote:
> >> > Why do we need new lru list instead of using @free_pages?
> > I got your point that @free_pages could have freed page by
> > put_page_testzero of shrink_page_list and they don't have
> > valid mapping so __remove_mapping_batch's mapping_release_page
> > would
On 06/03/2013 11:02 PM, Minchan Kim wrote:
>> > Why do we need new lru list instead of using @free_pages?
> I got your point that @free_pages could have freed page by
> put_page_testzero of shrink_page_list and they don't have
> valid mapping so __remove_mapping_batch's mapping_release_page
> would
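
Restated as a minimal sketch of the invariant being argued here (the two list names are the patch's own; the compressed if/else below is an illustration of the rule, not the actual shrink_page_list() control flow):

	/*
	 * After put_page_testzero() drops the final reference, the page
	 * may be recycled at any moment, so ->mapping must never be read
	 * again; such pages can only sit on @free_pages awaiting release.
	 * Only pages that are still locked and still referenced may join
	 * @remove_list, because __remove_mapping_batch() will later call
	 * page_mapping() on every entry.
	 */
	if (put_page_testzero(page))
		list_add(&page->lru, &free_pages);	/* dead: mapping is stale */
	else
		list_add(&page->lru, &remove_list);	/* live, locked, mapping valid */
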
On 06/03/2013 10:07 PM, Minchan Kim wrote:
>>> > > +	while (!list_empty(remove_list)) {
>>> > > +		page = lru_to_page(remove_list);
>>> > > +		BUG_ON(!PageLocked(page));
>>> > > +		BUG_ON(page_mapping(page) != mapping);
>>> > > +		list_del(&page->lru);
On Mon, Jun 03, 2013 at 11:10:02PM -0700, Dave Hansen wrote:
> On 06/03/2013 10:01 PM, Minchan Kim wrote:
> >> > +static int __remove_mapping_batch(struct list_head *remove_list,
> >> > +                                  struct list_head *ret_pages,
> >> > +                                  struct list_head *free_pages)
On 06/03/2013 10:01 PM, Minchan Kim wrote:
>> > +static int __remove_mapping_batch(struct list_head *remove_list,
>> > +                                  struct list_head *ret_pages,
>> > +                                  struct list_head *free_pages)
>> > +{
>> > +	int nr_reclaimed = 0;
>> > +	struct address_space *mapping;
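
Pieced together from the fragments quoted across this thread, the helper looks roughly like the sketch below. The signature, the locked loop, and the BUG_ON lines are quoted verbatim above; everything else is an assumption about the shape. In particular, mainline __remove_mapping() takes mapping->tree_lock itself, so calling it under the already-held lock presumes the series refactored it to run with the lock held, and mapping_release_page() is named in the discussion above but its placement and signature here are guesses:

	static int __remove_mapping_batch(struct list_head *remove_list,
	                                  struct list_head *ret_pages,
	                                  struct list_head *free_pages)
	{
		int nr_reclaimed = 0;
		struct address_space *mapping;
		struct page *page;
		LIST_HEAD(need_free_mapping);

		if (list_empty(remove_list))
			return 0;

		/* caller guarantees every page on remove_list shares one mapping */
		mapping = page_mapping(lru_to_page(remove_list));
		spin_lock_irq(&mapping->tree_lock);
		while (!list_empty(remove_list)) {
			page = lru_to_page(remove_list);
			BUG_ON(!PageLocked(page));
			BUG_ON(page_mapping(page) != mapping);
			list_del(&page->lru);

			/* assumption: removal runs with tree_lock already held */
			if (!__remove_mapping(mapping, page)) {
				unlock_page(page);
				list_add(&page->lru, ret_pages);
				continue;
			}
			list_add(&page->lru, &need_free_mapping);
		}
		spin_unlock_irq(&mapping->tree_lock);

		/* guessed shape: mapping-specific release happens outside the lock */
		while (!list_empty(&need_free_mapping)) {
			page = lru_to_page(&need_free_mapping);
			list_move(&page->lru, free_pages);
			mapping_release_page(mapping, page);
			unlock_page(page);
			nr_reclaimed++;
		}
		return nr_reclaimed;
	}
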
On Tue, Jun 04, 2013 at 02:01:03PM +0900, Minchan Kim wrote:
> On Mon, Jun 03, 2013 at 01:02:08PM -0700, Dave Hansen wrote:
> >
> > From: Dave Hansen
> > changes for v2:
> > * remove batch_has_same_mapping() helper. A local variable makes
> >   the check cheaper and cleaner
> > * Move batch draining later to where we already know
> >   page_mapping(). This probably fixes a truncation race anyway
On Tue, Jun 04, 2013 at 09:17:26AM +0800, Hillf Danton wrote:
> On Tue, Jun 4, 2013 at 4:02 AM, Dave Hansen wrote:
> > +/*
> > + * pages come in here (via remove_list) locked and leave unlocked
> > + * (on either ret_pages or free_pages)
> > + *
> > + * We do this batching so that we free batches of pages with a
> > + * single mapping->tree_lock acquisition/release.
On Mon, Jun 03, 2013 at 01:02:08PM -0700, Dave Hansen wrote:
>
> From: Dave Hansen
> changes for v2:
> * remove batch_has_same_mapping() helper. A local variable makes
>   the check cheaper and cleaner
> * Move batch draining later to where we already know
>   page_mapping(). This probably fixes a truncation race anyway
On Tue, Jun 4, 2013 at 4:02 AM, Dave Hansen wrote:
> +/*
> + * pages come in here (via remove_list) locked and leave unlocked
> + * (on either ret_pages or free_pages)
> + *
> + * We do this batching so that we free batches of pages with a
> + * single mapping->tree_lock acquisition/release. This
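
The saving this comment describes is purely lock traffic: the stock __remove_mapping() takes and drops mapping->tree_lock once per page, while the batch pays one round trip per run of same-mapping pages. Schematically (an illustration of the shape, not the posted diff):

	/* unbatched: a tree_lock round trip for every single page */
	while (!list_empty(page_list)) {
		page = lru_to_page(page_list);
		list_del(&page->lru);
		__remove_mapping(page_mapping(page), page);	/* locks inside */
	}

	/* batched: one round trip amortized over the whole remove_list */
	spin_lock_irq(&mapping->tree_lock);
	while (!list_empty(remove_list)) {
		page = lru_to_page(remove_list);
		list_del(&page->lru);
		/* ... drop the page from its radix tree, lock already held ... */
	}
	spin_unlock_irq(&mapping->tree_lock);
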
From: Dave Hansen
changes for v2:
* remove batch_has_same_mapping() helper. A local variable makes
  the check cheaper and cleaner
* Move batch draining later to where we already know
  page_mapping(). This probably fixes a truncation race anyway
* rename batch_for_mapping_removal -> batch_
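
The "move batch draining later" item implies a caller shape along these lines: shrink_page_list() keeps appending to the batch and drains it exactly where it has just computed page_mapping() and sees the mapping change, which is presumably the "local variable" the first item refers to. In the sketch below, batch_mapping and the exact placement are guesses at that shape, not the posted diff:

	mapping = page_mapping(page);

	/* a page with a different mapping ends the current run: drain */
	if (!list_empty(&remove_list) && mapping != batch_mapping)
		nr_reclaimed += __remove_mapping_batch(&remove_list,
		                                       &ret_pages,
		                                       &free_pages);

	/* defer this page's removal into the (possibly new) batch */
	batch_mapping = mapping;
	list_add(&page->lru, &remove_list);

Draining only after page_mapping() is freshly known also plausibly explains the truncation-race remark: the batch is never flushed against a mapping pointer that truncation may have invalidated in the meantime.
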