On 08/02/2016 05:42 PM, Nikolay Borisov wrote:
> Currently, reading /proc/locks shows every file lock on the machine.
> For containers hosted on busy servers this can make lsof very slow: I
> have observed stalls of up to 5 seconds reading 50k locks, while the
> container itself held only a handful of relevant entries. Fix this by
> filtering the listed locks, comparing the pid namespace of the reading
> process against that of the process which created each lock.
> 
> Signed-off-by: Nikolay Borisov <ker...@kyup.com>
> ---
>  fs/locks.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/fs/locks.c b/fs/locks.c
> index 6333263b7bc8..53e96df4c583 100644
> --- a/fs/locks.c
> +++ b/fs/locks.c
> @@ -2615,9 +2615,17 @@ static int locks_show(struct seq_file *f, void *v)
>  {
>       struct locks_iterator *iter = f->private;
>       struct file_lock *fl, *bfl;
> +     struct pid_namespace *pid_ns = task_active_pid_ns(current);
> +
>  
>       fl = hlist_entry(v, struct file_lock, fl_link);
>  
> +     pr_info ("Current pid_ns: %p init_pid_ns: %p, fl->fl_nspid: %p nspidof:%p\n", pid_ns, &init_pid_ns,
> +              fl->fl_nspid, ns_of_pid(fl->fl_nspid));

Obviously I don't intend to include that pr_info in the final submission.

> +     if ((pid_ns != &init_pid_ns) && fl->fl_nspid &&
> +             (pid_ns != ns_of_pid(fl->fl_nspid)))
> +                 return 0;
> +
>       lock_get_status(f, fl, iter->li_pos, "");
>  
>       list_for_each_entry(bfl, &fl->fl_block, fl_block)
> 
