On 09/19/2017 04:54 PM, Dmitry Vyukov wrote:
> On Tue, Sep 19, 2017 at 3:52 PM, Andrey Ryabinin
> <aryabi...@virtuozzo.com> wrote:
>>
>>
>> On 09/19/2017 03:57 PM, Dmitry Vyukov wrote:
>>> On Tue, Sep 19, 2017 at 2:46 PM, Andrey Ryabinin
>>> <aryabi...@virtuozzo.com> wrote:
>>>> As the comment says, the barriers are needed for the
>>>> preempt_schedule_irq() case, where in_interrupt() returns false. But we
>>>> haven't used in_interrupt() since b274c0bb394c ("kcov: properly check
>>>> if we are in an interrupt").
>>>>
>>>> Now we use in_task(), which handles the preempt_schedule_irq() case
>>>> properly, so no barrier is required.
>>>
>>>
>>> Are you sure in_task() handles preempt_schedule_irq() correctly?
>>> They seem to differ only by SOFTIRQ_MASK vs SOFTIRQ_OFFSET, and that
>>> only differs in local_bh_disable sections. But preempt_schedule_irq()
>>> does not seem to have anything to do with softirq/local_bh_disable. It's
>>> called from real interrupts, right? So I would expect that in_task()
>>> returns true in preempt_schedule_irq().
>>
>> Indeed, you're right. I checked this only on a !PREEMPT kernel, where
>> this worked.
>>
>> Still, I think that the barrier() in __sanitizer_cov_trace_pc() is not
>> needed. AFAIU it is needed to make sure that the load of t->kcov_area
>> isn't moved before the load of t->kcov_mode, but I don't think the
>> compiler is allowed to make such a reorder. That would be a bug in the
>> compiler.
>
Ugh, it should have said READ_ONCE(area[0]) instead of t->kcov_area.

>
> Why? The C compiler is allowed to fuse/reorder loads from the same base
> object. Stores can also be reordered.
>

Ok, right. t->kcov_area can be loaded before t->kcov_mode, and that's fine.
But the dereference of kcov_area (READ_ONCE(area[0])) can't be moved before
the kcov_mode check, and this barrier is intended to prevent such a move,
right?

>
>>>> Signed-off-by: Andrey Ryabinin <aryabi...@virtuozzo.com>
>>>> ---
>>>>  kernel/kcov.c | 10 ----------
>>>>  1 file changed, 10 deletions(-)
>>>>
>>>> diff --git a/kernel/kcov.c b/kernel/kcov.c
>>>> index 14cc8c1a7cad..b7fbcbef88c1 100644
>>>> --- a/kernel/kcov.c
>>>> +++ b/kernel/kcov.c
>>>> @@ -71,14 +71,6 @@ void notrace __sanitizer_cov_trace_pc(void)
>>>>
>>>>          ip -= kaslr_offset();
>>>>
>>>> -        /*
>>>> -         * There is some code that runs in interrupts but for which
>>>> -         * in_interrupt() returns false (e.g. preempt_schedule_irq()).
>>>> -         * READ_ONCE()/barrier() effectively provides load-acquire wrt
>>>> -         * interrupts, there are paired barrier()/WRITE_ONCE() in
>>>> -         * kcov_ioctl_locked().
>>>> -         */
>>>> -        barrier();
>>>>          area = t->kcov_area;
>>>>          /* The first word is number of subsequent PCs. */
>>>>          pos = READ_ONCE(area[0]) + 1;
>>>> @@ -228,8 +220,6 @@ static int kcov_ioctl_locked(struct kcov *kcov,
>>>> unsigned int cmd,
>>>>                  /* Cache in task struct for performance. */
>>>>                  t->kcov_size = kcov->size;
>>>>                  t->kcov_area = kcov->area;
>>>> -                /* See comment in __sanitizer_cov_trace_pc(). */
>>>> -                barrier();
>>>>                  WRITE_ONCE(t->kcov_mode, kcov->mode);
>>>>                  t->kcov = kcov;
>>>>                  kcov->t = t;
>>>> --
>>>> 2.13.5
>>>>
>>