On 06/13/16 22:27, Jann Horn wrote:
> On Mon, Jun 13, 2016 at 10:44:18PM +0300, Topi Miettinen wrote:
>> Track maximum number of processes per user and present it
>> in /proc/self/limits.
>>
>> Signed-off-by: Topi Miettinen <toiwo...@gmail.com>
>> ---
>>  fs/proc/base.c        | 4 ++++
>>  include/linux/sched.h | 1 +
>>  kernel/fork.c         | 5 +++++
>>  kernel/sys.c          | 5 +++++
>>  4 files changed, 15 insertions(+)
>>
>> diff --git a/fs/proc/base.c b/fs/proc/base.c
>> index 1df4fc8..02576c6 100644
>> --- a/fs/proc/base.c
>> +++ b/fs/proc/base.c
>> @@ -670,6 +670,10 @@ static int proc_pid_limits(struct seq_file *m, struct pid_namespace *ns,
>>                              seq_printf(m, "%-20lu\n", psecs);
>>                      }
>>                      break;
>> +            case RLIMIT_NPROC:
>> +                    seq_printf(m, "%-20d\n",
>> +                               atomic_read(&task->real_cred->user->max_processes));
> 
> Don't you have to take an RCU read lock before dereferencing task->real_cred?

In other comments on the series, a cmpxchg loop was suggested; would that
work here?

> And shouldn't this be done with __task_cred(task) instead of task->real_cred?

How about atomic_read(task_cred_xxx(task, user)->max_processes)?
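That, or take the RCU read lock explicitly around the access, roughly like
this (untested sketch only):

	case RLIMIT_NPROC:
		/* read the highwater mark under the RCU read lock */
		rcu_read_lock();
		seq_printf(m, "%-20d\n",
			   atomic_read(&__task_cred(task)->user->max_processes));
		rcu_read_unlock();
		break;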

> 
> 
>> +                    break;
>>              default:
>>                      seq_printf(m, "%-20lu\n",
>>                                 task->signal->rlim_curmax[i]);
>> diff --git a/include/linux/sched.h b/include/linux/sched.h
>> index 0150380..feb9bb7 100644
>> --- a/include/linux/sched.h
>> +++ b/include/linux/sched.h
>> @@ -838,6 +838,7 @@ static inline int signal_group_exit(const struct signal_struct *sig)
>>  struct user_struct {
>>      atomic_t __count;       /* reference count */
>>      atomic_t processes;     /* How many processes does this user have? */
>> +    atomic_t max_processes; /* How many processes has this user had at the same time? */
>>      atomic_t sigpending;    /* How many pending signals does this user have? */
>>  #ifdef CONFIG_INOTIFY_USER
>>      atomic_t inotify_watches; /* How many inotify watches does this user have? */
>> diff --git a/kernel/fork.c b/kernel/fork.c
>> index 5c2c355..667290f 100644
>> --- a/kernel/fork.c
>> +++ b/kernel/fork.c
>> @@ -1653,6 +1653,11 @@ static struct task_struct *copy_process(unsigned long clone_flags,
>>      trace_task_newtask(p, clone_flags);
>>      uprobe_copy_process(p, clone_flags);
>>  
>> +    if (atomic_read(&p->real_cred->user->max_processes) <
>> +        atomic_read(&p->real_cred->user->processes))
>> +            atomic_set(&p->real_cred->user->max_processes,
>> +                       atomic_read(&p->real_cred->user->processes));
>> +
>>      return p;
>>  
>>  bad_fork_cancel_cgroup:
>> diff --git a/kernel/sys.c b/kernel/sys.c
>> index 6629f6f..955cf21 100644
>> --- a/kernel/sys.c
>> +++ b/kernel/sys.c
>> @@ -439,6 +439,11 @@ static int set_user(struct cred *new)
>>      else
>>              current->flags &= ~PF_NPROC_EXCEEDED;
>>  
>> +    if (atomic_read(&new_user->max_processes) <
>> +        atomic_read(&new_user->processes))
>> +            atomic_set(&new_user->max_processes,
>> +                       atomic_read(&new_user->processes));
>> +
> 
> Is this intentionally slightly racy? If so, it might be nice to have a comment
> here that documents that.
> 

I suppose a cmpxchg loop could be used to avoid the race.
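An untested sketch of what I have in mind (the helper name is just for
illustration); it would replace both open-coded updates above:

	static void update_max_processes(struct user_struct *user)
	{
		int cur = atomic_read(&user->processes);
		int max = atomic_read(&user->max_processes);

		while (cur > max) {
			/* only install cur if nobody raised the mark meanwhile */
			int old = atomic_cmpxchg(&user->max_processes, max, cur);

			if (old == max)
				break;
			max = old;
		}
	}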

-Topi
