I see now that this is redundant with Robin's patch series "Optimise
64-bit IOVA allocations". I tested those patches on our platform and
find that they solve the performance problem we were having. So, I'd
like to withdraw this patch.
On 9/27/2017 10:10 AM, Joerg Roedel wrote:
Adding Robin.
Robin, can you please have a look?
On Wed, Sep 20, 2017 at 11:28:19AM -0400, David Woods wrote:
> When allocating pages with alloc_iova() where limit_pfn > dma_32bit_pfn
> __alloc_and_insert_iova_range does a linear traversal of the tree to
> find a free block. In the worst case it makes the alloc O(n) for each
> page, where n is the number of pages allocated so far.
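The quoted patch description can be illustrated with a toy model: a plain Python list stands in for the kernel's rb-tree of allocated IOVA ranges, and allocation walks the ranges top-down from limit_pfn until a gap fits. This is a simplified sketch for intuition only, not the kernel's actual code; all names here are illustrative.

```python
def alloc_iova_linear(allocated, size, limit_pfn):
    """Toy model of a top-down linear-scan IOVA allocator.

    allocated: list of (start, end) pfn ranges, sorted descending by start.
    Each call walks the existing ranges from the top until it finds a gap
    of at least `size` pfns, so allocation n costs O(n) and n allocations
    cost O(n^2) overall -- the behavior the patch description points at.
    """
    prev_start = limit_pfn + 1
    for i, (start, end) in enumerate(allocated):      # linear traversal
        if prev_start - 1 - end >= size:              # gap above this range fits
            new = (prev_start - size, prev_start - 1)
            allocated.insert(i, new)
            return new
        prev_start = start
    if prev_start >= size:                            # room below the last range
        new = (prev_start - size, prev_start - 1)
        allocated.append(new)
        return new
    return None                                       # address space exhausted
```

For example, three successive calls each rescan every previously allocated range before placing the new one just below it, descending from limit_pfn.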