On Thu, Apr 1, 2010 at 12:52 PM, Jerome Glisse <gli...@freedesktop.org> wrote:
> On Sun, Mar 28, 2010 at 09:16:01PM +0300, Pauli Nieminen wrote:
>> On AGP system we might allocate/free routinely uncached or wc memory,
>> changing page from cached (wb) to uc or wc is very expensive and involves
>> a lot of flushing. To improve performance this allocator use a pool
>> of uc,wc pages.
>>
>> Pools are protected with spinlocks to allow multiple threads to allocate
>> pages simultaneously. Expensive operations are done outside of the spinlock
>> to maximize concurrency.
>>
>> Pools are linked lists of pages that were recently freed. mm shrink callback
>> allows kernel to claim back pages when they are required for something else.
>>
>> Fixes:
>> * set_pages_array_wb handles highmem pages so we don't have to remove them
>>   from pool.
>> * Add count parameter to ttm_put_pages to avoid looping in free code.
>> * Change looping from _safe to normal in pool fill error path.
>> * Initialize sum variable and make the loop prettier in get_num_unused_pages
>>
>> Based on Jerome Glisse's and Dave Airlie's pool allocator.
>>
>> Signed-off-by: Jerome Glisse <jgli...@redhat.com>
>> Signed-off-by: Dave Airlie <airl...@redhat.com>
>> Signed-off-by: Pauli Nieminen <suok...@gmail.com>
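To make the idea in the commit message concrete, here is a minimal userspace sketch (an illustration, not the TTM code: `fake_page`, `page_pool`, `pool_put`, and `pool_get` are made-up names, and a pthread mutex stands in for the kernel spinlock). A pool is just a lock-protected list of recently freed uc/wc pages; reusing a pooled page avoids the expensive wb-to-uc/wc caching change:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct fake_page {
	struct fake_page *next;
};

struct page_pool {
	pthread_mutex_t lock;	/* protects list and npages */
	struct fake_page *list;	/* recently freed uc/wc pages */
	unsigned npages;
};

/* Return a page to the pool instead of converting its caching
 * attributes back to write-back (the expensive operation). */
static void pool_put(struct page_pool *pool, struct fake_page *p)
{
	pthread_mutex_lock(&pool->lock);
	p->next = pool->list;
	pool->list = p;
	pool->npages++;
	pthread_mutex_unlock(&pool->lock);
}

/* Take a page out of the pool; NULL means the caller must
 * allocate and convert a fresh page. */
static struct fake_page *pool_get(struct page_pool *pool)
{
	struct fake_page *p;

	pthread_mutex_lock(&pool->lock);
	p = pool->list;
	if (p) {
		pool->list = p->next;
		pool->npages--;
	}
	pthread_mutex_unlock(&pool->lock);
	return p;
}
```

The shrink callback mentioned above would, in this model, simply pop pages off `list` under the lock when the kernel needs the memory back.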
>
> I think there is only one issue left, see below; once you get that one
> fixed you have my ACK.
>
> Cheers,
> Jerome
>
>> +
>> +/**
>> + * Free pages from pool.
>> + *
>> + * To prevent hogging the ttm_swap process we only free NUM_PAGES_TO_ALLOC
>> + * number of pages in one go.
>> + *
>> + * @pool: to free the pages from
>> + * @nr_free: number of pages to free; FREE_ALL_PAGES frees all pages in pool
>> + **/
>> +static int ttm_page_pool_free(struct ttm_page_pool *pool, unsigned nr_free)
>> +{
>> +     unsigned long irq_flags;
>> +     struct page *p;
>> +     struct page **pages_to_free;
>> +     unsigned freed_pages, npages_to_free = nr_free;
>> +     if (NUM_PAGES_TO_ALLOC < nr_free)
>> +             npages_to_free = NUM_PAGES_TO_ALLOC;
>> +
>> +     pages_to_free = kmalloc(npages_to_free * sizeof(struct page *),
>> +                     GFP_KERNEL);
>> +     if (!pages_to_free) {
>> +             printk(KERN_ERR "Failed to allocate memory for pool free operation.\n");
>> +             return 0;
>> +     }
>> +
>> +restart:
>> +     spin_lock_irqsave(&pool->lock, irq_flags);
>> +
>> +     freed_pages = 0;
>> +
>> +     list_for_each_entry_reverse(p, &pool->list, lru) {
>> +             if (freed_pages >= npages_to_free)
>> +                     break;
>> +
>> +             pages_to_free[freed_pages++] = p;
>> +             /* We can only remove NUM_PAGES_TO_ALLOC at a time. */
>> +             if (freed_pages >= NUM_PAGES_TO_ALLOC) {
>> +                     /* remove range of pages from the pool */
>> +                     __list_del(p->lru.prev, &pool->list);
>> +
>> +                     ttm_pool_update_free_locked(pool, freed_pages);
>> +                     /**
>> +                      * Because changing page caching is costly
>> +                      * we unlock the pool to prevent stalling.
>> +                      */
>> +                     spin_unlock_irqrestore(&pool->lock, irq_flags);
>> +
>> +                     ttm_pages_put(pages_to_free, freed_pages);
>> +                     if (likely(nr_free != FREE_ALL_PAGES))
>> +                             nr_free -= freed_pages;
>> +
>> +                     if (NUM_PAGES_TO_ALLOC >= nr_free)
>> +                             npages_to_free = nr_free;
>> +                     else
>> +                             npages_to_free = NUM_PAGES_TO_ALLOC;
>> +
>> +                     /* free all so restart the processing */
>> +                     if (nr_free)
>> +                             goto restart;
>> +
>
> (1) (see below)
>
>> +                     goto out;
>> +
>> +             }
>> +     }
>> +
>> +
>> +     /* remove range of pages from the pool */
>> +     if (freed_pages) {
>> +             __list_del(&p->lru, &pool->list);
>> +
>> +             ttm_pool_update_free_locked(pool, freed_pages);
>> +             nr_free -= freed_pages;
>> +     }
>> +
>
> if we reach this if through (1) it will fails big time as p has
> already been removed from the list (list_del not list_del_init)
> I think all you need is set freed_page to 0 at (1).
>

This code path is impossible to reach with freed_pages holding a stale
value, because freed_pages is set to zero in the restart code path.

But I guess the code would be simpler if the zeroing were done inside
the loop, just before restarting it.

>
>> +     spin_unlock_irqrestore(&pool->lock, irq_flags);
>> +
>> +     if (freed_pages)
>> +             ttm_pages_put(pages_to_free, freed_pages);
>> +out:

We jump from (1) to here when there is no more work to do.

The reason is that at (1) we are outside the spinlock, so we can't jump
back into the locked section from there.

>> +     kfree(pages_to_free);
>> +     return nr_free;
>> +}
>> +

--
_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel