Currently, the __vunmap() flow is:
 1) Release the VM area
 2) Free the debug objects corresponding to that vm area.

This leaves a race window open:
 1) Release the VM area
 1.5) Some other client gets the same vm area
 1.6) That client allocates new debug objects on the same
      vm area
 2) Free the debug objects corresponding to this vm area.

Here, we actually free the other client's debug objects.
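
To make the window concrete, here is a minimal userspace sketch of the
old ordering. This is NOT kernel code; every name in it is made up for
illustration, and the racy interleaving is spelled out single-threaded
so it reproduces deterministically:

#include <assert.h>

#define NAREAS	4

static int area_owner[NAREAS];	/* 0 = free, else owning client id */
static int debug_obj[NAREAS];	/* client id the debug object belongs to */

/* Grab the first free area and set up a debug object for it. */
static int get_area(int client)
{
	int i;

	for (i = 0; i < NAREAS; i++) {
		if (!area_owner[i]) {
			area_owner[i] = client;
			debug_obj[i] = client;
			return i;
		}
	}
	return -1;
}

int main(void)
{
	int a = get_area(1);
	int b;

	/* Old __vunmap()-style ordering, interleaving spelled out: */
	area_owner[a] = 0;	/* 1)   client 1 releases the area         */
	b = get_area(2);	/* 1.5) client 2 gets the same area...     */
	assert(b == a);		/* 1.6) ...and allocates its debug object  */
	debug_obj[a] = 0;	/* 2)   client 1 frees client 2's object!  */

	assert(debug_obj[b] == 2);	/* aborts: client 2's object is gone */
	return 0;
}

The final assert() fires, which is exactly the corruption the kernel
race produces.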

Fix this by freeing the debug objects first and then
releasing the VM area.
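
With that reordering, the teardown in the diff below becomes, in
simplified form (a sketch only; context and error handling omitted):

	/* Look up the area without releasing it yet. */
	area = find_vmap_area((unsigned long)addr)->vm;

	/* Tear down the debug objects while we still own the area. */
	debug_check_no_locks_freed(addr, get_vm_area_size(area));
	debug_check_no_obj_freed(addr, get_vm_area_size(area));

	/* Only now make the area available for reuse. */
	remove_vm_area(addr);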

Signed-off-by: Chintan Pandya <cpan...@codeaurora.org>
---
 mm/vmalloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6729400..12d675c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1500,7 +1500,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
                        addr))
                return;
 
-       area = remove_vm_area(addr);
+       area = find_vmap_area((unsigned long)addr)->vm;
        if (unlikely(!area)) {
                WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
                                addr);
@@ -1510,6 +1510,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
        debug_check_no_locks_freed(addr, get_vm_area_size(area));
        debug_check_no_obj_freed(addr, get_vm_area_size(area));
 
+       remove_vm_area(addr);
        if (deallocate_pages) {
                int i;
 
-- 
