From: Andrew Perepechko <pa...@cloudlinux.com>

Currently, shmem_lock() immediately and unconditionally
uncharges what it has just charged for a lock request,
because the success path falls through into the
out_nomem label.

This causes a double uncharge with something like
the following:

  shmid = shmget(12345, 8192, IPC_CREAT | 0666);
  rc = shmctl(shmid, SHM_LOCK, NULL);
  shmctl(shmid, IPC_RMID, 0);

with the following in the kernel log:

[  455.815025] Uncharging too much 2 h 0, res lockedpages ub 0

Signed-off-by: Andrew Perepechko <pa...@cloudlinux.com>
---
 mm/shmem.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index a6b3e30..d09a230 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1359,11 +1359,13 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
                mapping_set_unevictable(file->f_mapping);
        }
        if (!lock && (info->flags & VM_LOCKED) && user) {
+               ub_lockedshm_uncharge(info, inode->i_size);
                user_shm_unlock(inode->i_size, user);
                info->flags &= ~VM_LOCKED;
                mapping_clear_unevictable(file->f_mapping);
        }
-       retval = 0;
+       spin_unlock(&info->lock);
+       return 0;
 
 out_nomem:
        ub_lockedshm_uncharge(info, inode->i_size);
-- 
1.9.1

_______________________________________________
Devel mailing list
Devel@openvz.org
https://lists.openvz.org/mailman/listinfo/devel
