On Wed, Mar 25, 2015 at 7:00 AM, Andrew Morton <a...@linux-foundation.org> wrote:
> On Thu, 19 Mar 2015 23:04:39 +0900 Roman Pen <r.peni...@gmail.com> wrote:
>
>> If suitable block can't be found, new block is allocated and put into a head
>> of a free list, so on next iteration this new block will be found first.
>>
>> ...
>>
>> Cc: sta...@vger.kernel.org
>>
>> ...
>>
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -837,7 +837,7 @@ static struct vmap_block *new_vmap_block(gfp_t gfp_mask)
>>
>>  	vbq = &get_cpu_var(vmap_block_queue);
>>  	spin_lock(&vbq->lock);
>> -	list_add_rcu(&vb->free_list, &vbq->free);
>> +	list_add_tail_rcu(&vb->free_list, &vbq->free);
>>  	spin_unlock(&vbq->lock);
>>  	put_cpu_var(vmap_block_queue);
>>
>
> I'm not sure about the cc:stable here.  There is potential for
> unexpected side-effects
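As background on the one-line change: list_add_rcu() links the new entry
right after the list head, so a walk that starts at the head (as vb_alloc()
does over vbq->free) sees the newest block first, while list_add_tail_rcu()
links it right before the head, so the newest block is seen last.  The toy
userspace model below, with invented names and no RCU or locking, only
illustrates that ordering difference; it is not kernel code.

/*
 * Toy userspace model (not kernel code): a doubly linked list with the
 * same insertion semantics as list_add()/list_add_tail(), showing which
 * "block" a head-to-tail scan finds first.  Names are invented.
 */
#include <stdio.h>

struct node { int id; struct node *prev, *next; };

static void list_init(struct node *head)
{
	head->prev = head->next = head;
}

/* Like list_add(): link right after the head (front of the list). */
static void add_head(struct node *head, struct node *n)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

/* Like list_add_tail(): link right before the head (back of the list). */
static void add_tail(struct node *head, struct node *n)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

static void scan(const char *what, struct node *head)
{
	struct node *p;

	printf("%-20s:", what);
	for (p = head->next; p != head; p = p->next)
		printf(" block%d", p->id);
	printf("\n");
}

int main(void)
{
	struct node h1, h2;
	struct node a[3] = { {1}, {2}, {3} }, b[3] = { {1}, {2}, {3} };
	int i;

	list_init(&h1);
	list_init(&h2);

	/* Blocks 1..3 are created over time; block 3 is the newest. */
	for (i = 0; i < 3; i++) {
		add_head(&h1, &a[i]);
		add_tail(&h2, &b[i]);
	}

	scan("head insertion (old)", &h1);  /* block3 block2 block1 */
	scan("tail insertion (fix)", &h2);  /* block1 block2 block3 */
	return 0;
}

With head insertion, every vb_alloc() therefore tries the freshest,
near-empty block before any of the older, partially used ones, which is
what lets the free list keep growing.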
The only potential side-effect I see is that the allocator has to iterate
over up to 6 (7 on 64-bit systems) blocks in a free list twice.  The second
patch fixes this by occupying the block right away after allocation.  But
even if the second patch is not applied, iterating over 6 (7) blocks (and
that is the worst and rare case) is not a big deal compared to the size of
the free list, which keeps growing over time if this patch is not applied.

I would compare the behaviour of an allocator that puts new blocks at the
head of the free list to a game of Tetris: sooner or later the incoming
blocks reach the top and you lose, even if you are the champion.

> and I don't *think* people are hurting from
> this issue in real life.  Or maybe I'm wrong about that?

Yes, they probably are not.  I showed one special synthetic scenario, which
works reliably and exhausts the virtual space very fast; another scenario is
a random one, which also works, but very slowly.

I think drivers tend either to preallocate only (infrequent use) or to pass
sequential sizes to vm_map_ram; in those cases everything is fine.

Also, the free list is a per-CPU variable, so good and fast reproduction
happens only if you bind the vm_map_ram calls to one CPU or use a
uniprocessor system.

Probably the combination of all of these factors hid the problem for a long
time.  But I tend to think that this is a bug, and a long-standing one.

--
Roman
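For illustration only, here is a sketch of the sort of single-CPU reproducer
described above.  It is not the synthetic test from the original report: the
module and function names, the alternating request sizes (1 and 16 pages)
and the iteration count are all invented, and vm_map_ram() is called with
the four-argument form in use at the time of this thread.

/*
 * Hedged sketch, not the original reproducer: a toy module that hammers
 * one CPU's vmap block free list by mapping buffers of alternating sizes
 * from a kthread bound to CPU 0.  All names and sizes are made up.
 */
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

#define NR_PAGES  32
#define NR_MAPS   1024

static struct task_struct *stress_task;

static int vmap_stress_fn(void *unused)
{
	struct page *pages[NR_PAGES];
	void **maps;
	int i, n = 0, nr_pages = 0;

	maps = kcalloc(NR_MAPS, sizeof(*maps), GFP_KERNEL);

	for (i = 0; maps && i < NR_PAGES; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			break;
		nr_pages++;
	}

	if (nr_pages == NR_PAGES) {
		/* Alternate request sizes so blocks are left only partly used. */
		for (n = 0; n < NR_MAPS && !kthread_should_stop(); n++) {
			unsigned int count = (n & 1) ? 1 : 16;  /* arbitrary sizes */

			maps[n] = vm_map_ram(pages, count, -1, PAGE_KERNEL);
			if (!maps[n])
				break;
		}
		pr_info("vmap_stress: created %d mappings\n", n);

		for (i = 0; i < n; i++)
			vm_unmap_ram(maps[i], (i & 1) ? 1 : 16);
	}

	for (i = 0; i < nr_pages; i++)
		__free_page(pages[i]);
	kfree(maps);

	/* Park here until module unload so kthread_stop() is always safe. */
	while (!kthread_should_stop())
		msleep(100);
	return 0;
}

static int __init vmap_stress_init(void)
{
	stress_task = kthread_create(vmap_stress_fn, NULL, "vmap_stress");
	if (IS_ERR(stress_task))
		return PTR_ERR(stress_task);
	/* Pin to one CPU: the free list is per-CPU, as noted above. */
	kthread_bind(stress_task, 0);
	wake_up_process(stress_task);
	return 0;
}

static void __exit vmap_stress_exit(void)
{
	kthread_stop(stress_task);
}

module_init(vmap_stress_init);
module_exit(vmap_stress_exit);
MODULE_LICENSE("GPL");

The intent is only to funnel every request through one per-CPU free list,
so that with head insertion each new block shadows the older, partially
used ones.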