Hi Peter, Ahmed,

On Wed, Jul 15, 2020 at 01:59:01PM +0200, Ahmed S. Darwish wrote:
> sched_clock uses seqcount_t latching to switch between two storage
> places protected by the sequence counter. This allows it to have
> interruptible, NMI-safe, seqcount_t write side critical sections.
> 
> Since 7fc26327b756 ("seqlock: Introduce raw_read_seqcount_latch()"),
> raw_read_seqcount_latch() became the standardized way for seqcount_t
> latch read paths. Due to the dependent load, it also has one read
> memory barrier less than the currently used raw_read_seqcount() API.
> 
> Use raw_read_seqcount_latch() for the seqcount_t latch read path.
> 
> Link: https://lkml.kernel.org/r/20200625085745.gd117...@hirez.programming.kicks-ass.net
> Link: https://lkml.kernel.org/r/20200715092345.ga231...@debian-buster-darwi.lab.linutronix.de
> References: 1809bfa44e10 ("timers, sched/clock: Avoid deadlock during read from NMI")
> Signed-off-by: Ahmed S. Darwish <a.darw...@linutronix.de>
> ---
>  kernel/time/sched_clock.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
> index fa3f800d7d76..ea007928d681 100644
> --- a/kernel/time/sched_clock.c
> +++ b/kernel/time/sched_clock.c
> @@ -100,7 +100,7 @@ unsigned long long notrace sched_clock(void)
>       struct clock_read_data *rd;
> 
>       do {
> -             seq = raw_read_seqcount(&cd.seq);
> +             seq = raw_read_seqcount_latch(&cd.seq);

I understand this is doing the same thing as __ktime_get_fast_ns(), and
I saw that Peter acked making this change.
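
For my own understanding, the read side after this change looks roughly
like the sketch below. This is only a simplified, illustrative sketch
(the latch_example struct and latch_read() names are made up, not the
actual sched_clock code):

/* assumes <linux/seqlock.h> for seqcount_t and the latch helpers */

struct latch_example {
	seqcount_t	seq;
	u64		data[2];	/* two copies; the writer flips between them */
};

static u64 latch_read(struct latch_example *le)
{
	unsigned int seq;
	u64 val;

	do {
		/*
		 * The (seq & 1) index is a dependent load, so
		 * raw_read_seqcount_latch() can get away without the
		 * smp_rmb() that raw_read_seqcount() would issue.
		 */
		seq = raw_read_seqcount_latch(&le->seq);
		val = le->data[seq & 1];
	} while (read_seqcount_retry(&le->seq, seq));

	return val;
}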

Just want to confirm: since this patch introduces a conflict with the
patch set "arm64: perf: Proper cap_user_time* support" [1], I should
rebase that patch set on top of this patch, right?

Thanks,
Leo

[1] https://patchwork.kernel.org/cover/11664031/

>               rd = cd.read_data + (seq & 1);
> 
>               cyc = (rd->read_sched_clock() - rd->epoch_cyc) &
> --
> 2.20.1
