In __tracing_open(), when the max latency tracers have run on a cpu,
the start time of that cpu's buffer is updated, and event entries with
timestamps earlier than the buffer's start time are then skipped
(see tracing_iter_reset()).

A softlockup can occur if the kernel is non-preemptible and too many
entries are skipped in the loop that resets every cpu buffer, so add
cond_resched() to avoid it.

Signed-off-by: Zheng Yejian <zhengyej...@huaweicloud.com>
---
 kernel/trace/trace.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index ebe7ce2f5f4a..88faa95b457b 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4706,6 +4706,15 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
                for_each_tracing_cpu(cpu) {
                        ring_buffer_read_start(iter->buffer_iter[cpu]);
                        tracing_iter_reset(iter, cpu);
+                       /*
+                        * When the max latency tracers have run on this cpu, the
+                        * start time of its buffer is updated, and event entries
+                        * with timestamps earlier than the buffer's start time are
+                        * skipped (see tracing_iter_reset()). A softlockup can occur
+                        * if the kernel is non-preemptible and too many entries are
+                        * skipped in this loop, so call cond_resched() to avoid it.
+                        */
+                       cond_resched();
                }
        } else {
                cpu = iter->cpu_file;
-- 
2.25.1

