----- On Mar 28, 2018, at 7:19 AM, Peter Zijlstra pet...@infradead.org wrote:

> On Tue, Mar 27, 2018 at 12:05:23PM -0400, Mathieu Desnoyers wrote:
>> +#ifdef CONFIG_RSEQ
>> +    struct rseq __user *rseq;
>> +    u32 rseq_len;
>> +    u32 rseq_sig;
>> +    /*
>> +     * RmW on rseq_event_mask must be performed atomically
>> +     * with respect to preemption.
>> +     */
>> +    unsigned long rseq_event_mask;
>> +#endif
> 
>> +static inline void rseq_signal_deliver(struct pt_regs *regs)
>> +{
>> +    set_bit(RSEQ_EVENT_SIGNAL_BIT, &current->rseq_event_mask);
>> +    rseq_handle_notify_resume(regs);
>> +}
>> +
>> +static inline void rseq_preempt(struct task_struct *t)
>> +{
>> +    set_bit(RSEQ_EVENT_PREEMPT_BIT, &t->rseq_event_mask);
>> +    rseq_set_notify_resume(t);
>> +}
>> +
>> +static inline void rseq_migrate(struct task_struct *t)
>> +{
>> +    set_bit(RSEQ_EVENT_MIGRATE_BIT, &t->rseq_event_mask);
>> +    rseq_set_notify_resume(t);
>> +}
> 
> Given that comment above, do you really need the full atomic set bit?
> Isn't __set_bit() sufficient?

Before we can use __set_bit() in rseq_signal_deliver, rseq_preempt, and
rseq_migrate, we need to confirm that every caller of each of those
functions guarantees preemption is disabled.

Is that the case? If so, we should also document the preemption
requirement on each of those functions.

AFAIU, rseq_migrate is only invoked from __set_task_cpu, which I *think*
always has preemption disabled. rseq_preempt() is called by the scheduler,
so this one is fine. On x86, rseq_signal_deliver is called from setup_rt_frame,
with preemption enabled.

So one approach would be to use __set_bit in both rseq_preempt and rseq_migrate,
but keep the atomic set_bit() in rseq_signal_deliver.

Thoughts?

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
