----- On Mar 28, 2018, at 10:06 AM, Mathieu Desnoyers 
[email protected] wrote:

> ----- On Mar 28, 2018, at 2:47 AM, Boqun Feng [email protected] wrote:
> 
>> On Tue, Mar 27, 2018 at 12:05:23PM -0400, Mathieu Desnoyers wrote:
>> [...]
>>> Changes since v11:
>>> 
>>> - Replace task struct rseq_preempt, rseq_signal, and rseq_migrate
>>>   bool by u32 rseq_event_mask.
>> [...]
>>> @@ -979,6 +980,17 @@ struct task_struct {
>>>     unsigned long                   numa_pages_migrated;
>>>  #endif /* CONFIG_NUMA_BALANCING */
>>>  
>>> +#ifdef CONFIG_RSEQ
>>> +   struct rseq __user *rseq;
>>> +   u32 rseq_len;
>>> +   u32 rseq_sig;
>>> +   /*
>>> +    * RmW on rseq_event_mask must be performed atomically
>>> +    * with respect to preemption.
>>> +    */
>>> +   unsigned long rseq_event_mask;
>> 
>> s/unsigned long/u32
> 
> good point, fixed.
> 

Actually, using a u32 instead of an unsigned long here triggers warnings
like this one:

In file included from ./include/linux/bitops.h:38:0,
                 from ./include/linux/kernel.h:11,
                 from certs/system_keyring.c:13:
./arch/x86/include/asm/bitops.h:73:1: note: expected ‘volatile long unsigned 
int *’ but argument is of type ‘u32 *’
 set_bit(long nr, volatile unsigned long *addr)
 ^

I suspect that casting the u32 * to an unsigned long * is not a safe
approach: on 64-bit kernels, set_bit() performs its read-modify-write on a
full unsigned long, so the access can load/store 4 bytes beyond the u32,
possibly on unallocated memory (kasan might complain).

Thoughts?

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
