From: "Steven Rostedt (Google)"
As the ring buffer recording requires cmpxchg() to work, if the
architecture does not support cmpxchg in NMI, then do not do any recording
within an NMI.
Signed-off-by: Steven Rostedt (Google)
---
kernel/trace/ring_buffer.c | 6 ++
1 file
the implementation is.
- I rebased on top of trace/core in the:
git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
- I made the tests a bit more advanced. Still a smoke test, but it
now checks if the string written is the same as the string read.
Steven Rostedt (Google)
race-devel/20211213094825.61876-5-tz.stoya...@gmail.com
Signed-off-by: Tzvetomir Stoyanov (VMware)
Signed-off-by: Steven Rostedt (Google)
---
kernel/trace/ring_buffer.c | 80 ++
1 file changed, 73 insertions(+), 7 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/ke
From: "Steven Rostedt (Google)"
Add a self test that will write into the trace buffer with differing trace
sub buffer order sizes.
Signed-off-by: Steven Rostedt (Google)
---
.../ftrace/test.d/00basic/ringbuffer_order.tc | 95 +++
1 file changed, 95 insertions(+)
c
_read_page()
ring_buffer_read_page()
A new API is introduced:
ring_buffer_read_page_data()
Link:
https://lore.kernel.org/linux-trace-devel/20211213094825.61876-6-tz.stoya...@gmail.com
Signed-off-by: Tzvetomir Stoyanov (VMware)
Signed-off-by: Steven Rostedt (Google)
---
include/linux/rin
From: "Steven Rostedt (Google)"
Using page order for deciding what the size of the ring buffer sub buffers
are is exposing a bit too much of the implementation. Although the sub
buffers are only allocated in orders of pages, allow the user to specify
the minimum size of each sub-
From: "Steven Rostedt (Google)"
When updating the order of the sub buffers for the main buffer, make sure
that if the snapshot buffer exists, that it gets its order updated as
well.
Signed-off-by: Steven Rostedt (Google)
---
kernel/trace/tr
mir Stoyanov (VMware)
Signed-off-by: Steven Rostedt (Google)
---
include/linux/ring_buffer.h | 4 ++
kernel/trace/ring_buffer.c | 73 +
kernel/trace/trace.c| 48
3 files changed, 125 insertions(+)
diff --git a/include/linux/ring_buff
From: "Steven Rostedt (Google)"
Now that the ring buffer specifies the size of its sub buffers, they all
need to be the same size. When doing a read, a swap is done with a spare
page. Make sure they are the same size before doing the swap, otherwise
the read will fail.
Signed-off-
From: "Steven Rostedt (Google)"
Because the main buffer and the snapshot buffer need to be the same for
some tracers (otherwise they will fail and disable all tracing), the tracers
need to be stopped while updating the sub buffer sizes so that the tracers
see the main and snapsh
From: "Steven Rostedt (Google)"
The ring_buffer_subbuf_order_set() was creating ring_buffer_per_cpu
cpu_buffers with the new subbuffers with the updated order, and if they
were all successfully created, then the ring_buffer's per_cpu buffers
would be freed and replaced by them.
T
From: "Steven Rostedt (Google)"
The function ring_buffer_subbuf_order_set() just updated the sub-buffers
to the new size, but this also changes the size of the buffer in doing so,
as the size is determined by nr_pages * subbuf_size. If the subbuf_size is
increased without decreasing th
fer.
Link:
https://lore.kernel.org/linux-trace-devel/20211213094825.61876-3-tz.stoya...@gmail.com
Signed-off-by: Tzvetomir Stoyanov (VMware)
Signed-off-by: Steven Rostedt (Google)
---
include/linux/ring_buffer.h | 2 +-
kernel/trace/ring_buffer.c | 68 +--
From: "Steven Rostedt (Google)"
Add to the documentation how to use the buffer_subbuf_order file to change
the size and how it affects what events can be added to the ring buffer.
Signed-off-by: Steven Rostedt (Google)
---
Documentation/trace/ftrace.rst | 27
From: "Steven Rostedt (Google)"
As all the subbuffer orders (subbuffer sizes) must be the same throughout
the ring buffer, check the order of the buffers that are doing a CPU
buffer swap in ring_buffer_swap_cpu() to make sure they are the same.
If they are not the same, then fail to d
From: "Steven Rostedt (Google)"
On failure to allocate ring buffer pages, the pointer to the CPU buffer
pages is freed, but the pages that were allocated previously were not.
Make sure they are freed too.
Fixes: TBD ("tracing: Set new size of the ring buffer sub page")
S
inux-trace-devel/20211213094825.61876-2-tz.stoya...@gmail.com
Signed-off-by: Tzvetomir Stoyanov (VMware)
Signed-off-by: Steven Rostedt (Google)
---
kernel/trace/ring_buffer.c | 60 +++---
1 file changed, 30 insertions(+), 30 deletions(-)
diff --git a/ke
From: "Steven Rostedt (Google)"
There's no reason to give an arbitrary limit to the size of a raw trace
marker. Just let it be as big as the size that is allowed by the ring
buffer itself.
And there's also no reason to artificially break up the write to
TRACE_BUF_SIZE, as that's not
From: "Steven Rostedt (Google)"
A trace instance may only need to enable specific events. As the eventfs
directory of an instance currently creates all events which adds overhead,
allow internal instances to be created with just the events in systems
that they care about. This curr
On Wed, 13 Dec 2023 09:51:38 +0800
Zheng Yejian wrote:
> ---
> kernel/trace/trace_events_hist.c | 18 ++
> 1 file changed, 14 insertions(+), 4 deletions(-)
>
> Steve, thanks for your review!
>
> v2:
> - Introduce tracing_single_release_file_tr() to add the missing call for
>
From: "Steven Rostedt (Google)"
If for some reason the trace_marker write does not have a nul byte for the
string, it will overflow the print:
trace_seq_printf(s, ": %s", field->buf);
The field->buf could be missing the nul byte. To prevent overflow, add the
On Tue, 12 Dec 2023 08:44:44 -0500
Steven Rostedt wrote:
> From: "Steven Rostedt (Google)"
>
> If for some reason the trace_marker write does not have a nul byte for the
> string, it will overflow the print:
>
> trace_seq_printf(s, ": %s", field->b
On Wed, 13 Dec 2023 09:19:33 +0900
Masami Hiramatsu (Google) wrote:
> On Tue, 12 Dec 2023 19:04:22 -0500
> Steven Rostedt wrote:
>
> > From: "Steven Rostedt (Google)"
> >
> > If a trace_marker write is bigger than what trace_seq can hold, then it
From: "Steven Rostedt (Google)"
If a trace_marker write is bigger than what trace_seq can hold, then it
will print "LINE TOO BIG" message and not what was written.
Instead, check if the write is bigger than the trace_seq and break it
up by that size.
Ideally, we coul
From: "Steven Rostedt (Google)"
Allow a trace write to be as big as the ring buffer tracing data will
allow. Currently, it only allows writes of 1KB in size, but there's no
reason that it cannot allow what the ring buffer can hold.
Signed-off-by: Steven Rostedt (Google)
---
Change
On Tue, 12 Dec 2023 11:49:20 -0500
Mathieu Desnoyers wrote:
> >> So the old "bottom" value is returned, which is wrong.
> >
> > Ah, OK that makes more sense. Yeah, if I had the three words from the
> > beginning, I would have tested to make sure they all match and not just the
> > two :-p
>
From: "Steven Rostedt (Google)"
Mathieu Desnoyers pointed out an issue in the rb_time_cmpxchg() for 32 bit
architectures. That is:
static bool rb_time_cmpxchg(rb_time_t *t, u64 expect, u64 set)
{
unsigned long cnt, top, bottom, msb;
unsigned long cnt2, top2, bot
On Tue, 12 Dec 2023 11:03:32 -0500
Steven Rostedt wrote:
> @@ -7300,9 +7301,25 @@ tracing_mark_write(struct file *filp, const char
> __user *ubuf,
> buffer = tr->array_buffer.buffer;
> event = __trace_buffer_lock_reserve(buffer, TR
On Tue, 12 Dec 2023 19:33:17 +0800
Zheng Yejian wrote:
> diff --git a/kernel/trace/trace_events_hist.c
> b/kernel/trace/trace_events_hist.c
> index 1abc07fba1b9..00447ea7dabd 100644
> --- a/kernel/trace/trace_events_hist.c
> +++ b/kernel/trace/trace_events_hist.c
> @@ -5623,10 +5623,12 @@
From: "Steven Rostedt (Google)"
The maximum ring buffer data size is the maximum size of data that can be
recorded on the ring buffer. Events must be smaller than the sub buffer
data size minus any meta data. This size is checked before trying to
allocate from the ring buff
From: "Steven Rostedt (Google)"
Allow a trace write to be as big as the ring buffer tracing data will
allow. Currently, it only allows writes of 1KB in size, but there's no
reason that it cannot allow what the ring buffer can hold.
Signed-off-by: Steven Rostedt (Google)
---
Change
On Tue, 12 Dec 2023 09:33:11 -0500
Mathieu Desnoyers wrote:
> On 2023-12-12 09:00, Steven Rostedt wrote:
> [...]
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -7272,6 +7272,7 @@ tracing_mark_write(struct file *filp, const char
> &
On Tue, 12 Dec 2023 23:20:08 +0900
Masami Hiramatsu (Google) wrote:
> On Tue, 12 Dec 2023 07:18:37 -0500
> Steven Rostedt wrote:
>
> > From: "Steven Rostedt (Google)"
> >
> > On 32bit machines, the 64 bit timestamps are broken up into 32 bit words
&
On Tue, 12 Dec 2023 09:23:54 -0500
Mathieu Desnoyers wrote:
> On 2023-12-12 08:44, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > If for some reason the trace_marker write does not have a nul byte for the
> > string, it will overfl
From: "Steven Rostedt (Google)"
Allow a trace write to be as big as the ring buffer tracing data will
allow. Currently, it only allows writes of 1KB in size, but there's no
reason that it cannot allow what the ring buffer can hold.
Cc: Masami Hiramatsu
Cc: Mark Rutland
Cc: Mathieu
From: "Steven Rostedt (Google)"
If for some reason the trace_marker write does not have a nul byte for the
string, it will overflow the print:
trace_seq_printf(s, ": %s", field->buf);
The field->buf could be missing the nul byte. To prevent overflow, add the
From: "Steven Rostedt (Google)"
For the ring buffer iterator (non-consuming read), the event needs to be
copied into the iterator buffer to make sure that a writer does not
overwrite it while the user is reading it. If a write happens during the
copy, the buffer is simply
From: "Steven Rostedt (Google)"
On 32bit machines, the 64 bit timestamps are broken up into 32 bit words
to keep from using local64_cmpxchg(), as that is very expensive on 32 bit
architectures.
On 32 bit architectures, reading these timestamps can happen in the middle
of an update. In
From: "Steven Rostedt (Google)"
The maximum ring buffer data size is the maximum size of data that can be
recorded on the ring buffer. Events must be smaller than the sub buffer
data size minus any meta data. This size is checked before trying to
allocate from the ring buff
On Mon, 11 Dec 2023 20:40:33 +0900
Masami Hiramatsu (Google) wrote:
> On Sat, 9 Dec 2023 17:09:25 -0500
> Steven Rostedt wrote:
>
> > On Sat, 9 Dec 2023 17:01:39 -0500
> > Steven Rostedt wrote:
> >
> > > From: "Steven Rostedt (Google)"
On Mon, 11 Dec 2023 22:51:04 -0500
Mathieu Desnoyers wrote:
> On 2023-12-11 17:59, Steven Rostedt wrote:
> > On Mon, 11 Dec 2023 15:13:24 -0500
> > Mathieu Desnoyers wrote:
> >
> >> Going through a review of the ring buffer rb_time functions for 32-bit
On Tue, 12 Dec 2023 09:31:31 +0900
Masami Hiramatsu (Google) wrote:
> On Mon, 11 Dec 2023 11:59:49 -0500
> Steven Rostedt wrote:
>
> > From: "Steven Rostedt (Google)"
> >
> > On 32bit machines, the 64 bit timestamps are broken up into 32 bit words
&
On Mon, 11 Dec 2023 17:59:04 -0500
Steven Rostedt wrote:
> >
> > - A cmpxchg interrupted by 4 writes or cmpxchg overflows the counter
> > and produces corrupted time stamps. This is _not_ fixed by this patch.
>
> Except that it's not 4 bits that is compared, b
that it was interrupted between the succeeding part and the failing part.
And the interruption would have written to the value making it valid again.
>
> Signed-off-by: Mathieu Desnoyers
> Cc: Steven Rostedt
> Cc: Masami Hiramatsu
> Cc: linux-trace-ker...@vger.kernel.org
> ---
On Mon, 11 Dec 2023 21:46:27 +0900
Masami Hiramatsu (Google) wrote:
> >
> > By increasing the trace_seq buffer to almost two pages, it can now print
> > out the first line.
> >
> > This also subtracts the rest of the trace_seq fields from the buffer, so
> > that the entire trace_seq is now
From: "Steven Rostedt (Google)"
On bugs where the ring buffer timestamp gets out of sync, the config
CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS checks for it, and if a problem
is detected it causes a dump of the bad sub buffer.
It shows each event and their timestamp as well as
On Mon, 11 Dec 2023 13:06:14 -0500
Steven Rostedt wrote:
>
> case RINGBUF_TYPE_DATA:
> ts += event->time_delta;
> - pr_warn(" [%lld] delta:%d\n", ts, event->time_delta);
> + pr_
From: "Steven Rostedt (Google)"
On bugs where the ring buffer timestamp gets out of sync, the config
CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS checks for it, and if a problem
is detected it causes a dump of the bad sub buffer.
It shows each event and their timestamp as well as
On Mon, 11 Dec 2023 21:31:34 +0900
Masami Hiramatsu (Google) wrote:
> On Sun, 10 Dec 2023 22:54:47 -0500
> Steven Rostedt wrote:
>
> > From: "Steven Rostedt (Google)"
> >
> > The snapshot buffer is to mimic the main buffer so that when a snapshot is
>
From: "Steven Rostedt (Google)"
On 32bit machines, the 64 bit timestamps are broken up into 32 bit words
to keep from using local64_cmpxchg(), as that is very expensive on 32 bit
architectures.
On 32 bit architectures, reading these timestamps can happen in the middle
of an update. In
From: "Steven Rostedt (Google)"
The ring buffer timestamps are synchronized by two timestamp placeholders.
One is the "before_stamp" and the other is the "write_stamp" (sometimes
referred to as the "after stamp", but only in the comments). These two
stamps
From: "Steven Rostedt (Google)"
The snapshot buffer is to mimic the main buffer so that when a snapshot is
needed, the snapshot and main buffer are swapped. When the snapshot buffer
is allocated, it is set to the minimal size that the ring buffer may be at
and still functi
From: "Steven Rostedt (Google)"
Reading the ring buffer does a swap of a sub-buffer within the ring buffer
with an empty sub-buffer. This allows the reader to have full access to the
content of the sub-buffer that was swapped out without having to worry
about contention with
On Sun, 10 Dec 2023 12:28:32 -0500
Mathieu Desnoyers wrote:
> > Again, it's not a requirement, it's just an enhancement.
>
> How does this have anything to do with dispensing from testing the
> new behavior ? If the new behavior has a bug that causes it to
> silently truncate the trace marker
On Sun, 10 Dec 2023 11:07:22 -0500
Mathieu Desnoyers wrote:
> > It just allows more to be written in one go.
> >
> > I don't see why the tests need to cover this or detect this change.
>
> If the purpose of this change is to ensure that the entire
> trace marker payload is shown within a
On Sun, 10 Dec 2023 09:26:13 -0500
Mathieu Desnoyers wrote:
> This test has no clue if the record was truncated or not.
>
> It basically repeats the string
>
> "1234567890" until it fills the subbuffer size, padding
> as needed as trace marker payload, but the grep looks for the
>
On Sun, 10 Dec 2023 09:17:44 -0500
Mathieu Desnoyers wrote:
> On 2023-12-09 22:54, Steven Rostedt wrote:
> [...]
> >
> > Basically, events to the tracing subsystem are limited to just under a
> > PAGE_SIZE, as the ring buffer is split into "sub buffers" of o
On Sun, 10 Dec 2023 09:11:40 -0500
Mathieu Desnoyers wrote:
> On 2023-12-09 17:10, Steven Rostedt wrote:
> [...]
> > <...>-852 [001] . 121.550551:
> > tracing_mark_write[LINE TOO BIG]
> > <...>-852 [001] ..
On Sun, 10 Dec 2023 09:09:06 -0500
Mathieu Desnoyers wrote:
> On 2023-12-09 17:50, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > Allow a trace write to be as big as the ring buffer tracing data will
> > allow. Currently, it only allo
From: "Steven Rostedt (Google)"
Because the main buffer and the snapshot buffer need to be the same for
some tracers (otherwise they will fail and disable all tracing), the tracers
need to be stopped while updating the sub buffer sizes so that the tracers
see the main and snapsh
_read_page()
ring_buffer_read_page()
A new API is introduced:
ring_buffer_read_page_data()
Link:
https://lore.kernel.org/linux-trace-devel/20211213094825.61876-6-tz.stoya...@gmail.com
Signed-off-by: Tzvetomir Stoyanov (VMware)
Signed-off-by: Steven Rostedt (Google)
---
include/linux/rin
From: "Steven Rostedt (Google)"
When updating the order of the sub buffers for the main buffer, make sure
that if the snapshot buffer exists, that it gets its order updated as
well.
Signed-off-by: Steven Rostedt (Google)
---
kernel/trace/tr
mir Stoyanov (VMware)
Signed-off-by: Steven Rostedt (Google)
---
include/linux/ring_buffer.h | 4 ++
kernel/trace/ring_buffer.c | 73 +
kernel/trace/trace.c| 48
3 files changed, 125 insertions(+)
diff --git a/include/linux/ring_buff
From: "Steven Rostedt (Google)"
The ring_buffer_subbuf_order_set() was creating ring_buffer_per_cpu
cpu_buffers with the new subbuffers with the updated order, and if they
were all successfully created, then the ring_buffer's per_cpu buffers
would be freed and replaced by them.
T
From: "Steven Rostedt (Google)"
Now that the ring buffer specifies the size of its sub buffers, they all
need to be the same size. When doing a read, a swap is done with a spare
page. Make sure they are the same size before doing the swap, otherwise
the read will fail.
Signed-off-
fer.
Link:
https://lore.kernel.org/linux-trace-devel/20211213094825.61876-3-tz.stoya...@gmail.com
Signed-off-by: Tzvetomir Stoyanov (VMware)
Signed-off-by: Steven Rostedt (Google)
---
include/linux/ring_buffer.h | 2 +-
kernel/trace/ring_buffer.c | 65 +++
From: "Steven Rostedt (Google)"
Add a self test that will write into the trace buffer with differing trace
sub buffer order sizes.
Signed-off-by: Steven Rostedt (Google)
---
.../ftrace/test.d/00basic/ringbuffer_order.tc | 46 +++
1 file changed, 46 insertions(+)
c
ch sub buffer a size of 8 pages, allowing events to be almost
as big as 8 pages in size (sub buffers do have meta data on them as
well, keeping an event from reaching the same size as a sub buffer).
Steven Rostedt (Google) (9):
ring-buffer: Clear pages on error in ring_buffer_subbuf_order_s
From: "Steven Rostedt (Google)"
As all the subbuffer orders (subbuffer sizes) must be the same throughout
the ring buffer, check the order of the buffers that are doing a CPU
buffer swap in ring_buffer_swap_cpu() to make sure they are the same.
If they are not the same, then fail to d
inux-trace-devel/20211213094825.61876-2-tz.stoya...@gmail.com
Signed-off-by: Tzvetomir Stoyanov (VMware)
Signed-off-by: Steven Rostedt (Google)
---
kernel/trace/ring_buffer.c | 60 +++---
1 file changed, 30 insertions(+), 30 deletions(-)
diff --git a/ke
From: "Steven Rostedt (Google)"
The function ring_buffer_subbuf_order_set() just updated the sub-buffers
to the new size, but this also changes the size of the buffer in doing so,
as the size is determined by nr_pages * subbuf_size. If the subbuf_size is
increased without decreasing th
From: "Steven Rostedt (Google)"
Add to the documentation how to use the buffer_subbuf_order file to change
the size and how it affects what events can be added to the ring buffer.
Signed-off-by: Steven Rostedt (Google)
---
Documentation/trace/ftrace.rst | 27
From: "Steven Rostedt (Google)"
On failure to allocate ring buffer pages, the pointer to the CPU buffer
pages is freed, but the pages that were allocated previously were not.
Make sure they are freed too.
Fixes: TBD ("tracing: Set new size of the ring buffer sub page")
S
race-devel/20211213094825.61876-5-tz.stoya...@gmail.com
Signed-off-by: Tzvetomir Stoyanov (VMware)
Signed-off-by: Steven Rostedt (Google)
---
kernel/trace/ring_buffer.c | 80 ++
1 file changed, 73 insertions(+), 7 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/ke
From: "Steven Rostedt (Google)"
There's no reason to give an arbitrary limit to the size of a raw trace
marker. Just let it be as big as the size that is allowed by the ring
buffer itself.
And there's also no reason to artificially break up the write to
TRACE_BUF_SIZE, as that's not
From: "Steven Rostedt (Google)"
Now that trace_marker can hold more than a 1KB string, and can write as much
as the ring buffer can hold, the trace_seq is not big enough to hold
writes:
~# a="1234567890"
~# cnt=4080
~# s=""
~# while [ $cnt -gt 10 ]; do
~# s
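The loop body above is truncated; a self-contained reconstruction (assumed, not the original commands) of what such a demo likely does is:

```shell
# Reconstruction (assumed): build a long string out of 10-byte chunks,
# mirroring the truncated demo above. Not the original commands.
a="1234567890"
cnt=4080
s=""
while [ "$cnt" -gt 10 ]; do
    s="${s}${a}"
    cnt=$((cnt - 10))
done
# The result is a string just under 4080 bytes, bigger than trace_seq
# could previously hold.
echo "${#s}"
```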
From: "Steven Rostedt (Google)"
Allow a trace write to be as big as the ring buffer tracing data will
allow. Currently, it only allows writes of 1KB in size, but there's no
reason that it cannot allow what the ring buffer can hold.
Signed-off-by: Steven Rostedt (Google)
---
[
From: "Steven Rostedt (Google)"
If an event added to the ring buffer is larger than what the
trace_seq can handle, it just drops the output:
~# cat /sys/kernel/tracing/trace
# tracer: nop
#
# entries-in-buffer/entries-written:
On Sat, 9 Dec 2023 17:01:39 -0500
Steven Rostedt wrote:
> From: "Steven Rostedt (Google)"
>
> The maximum ring buffer data size is the maximum size of data that can be
> recorded on the ring buffer. Events must be smaller than the sub buffer
> data size minus
From: "Steven Rostedt (Google)"
The maximum ring buffer data size is the maximum size of data that can be
recorded on the ring buffer. Events must be smaller than the sub buffer
data size minus any meta data. This size is checked before trying to
allocate from the ring buff
On Fri, 8 Dec 2023 18:36:01 +
Beau Belgrave wrote:
> While developing some unrelated features I happened to create a
> trace_event that was more than NAME_MAX (255) characters. When this
> happened the creation worked, but tracefs would hang any task that tried
> to list the directory of the
On Fri, 8 Dec 2023 15:16:10 +0100
Alexander Potapenko wrote:
> On Tue, Nov 21, 2023 at 11:02 PM Ilya Leoshkevich wrote:
> >
> > Architectures use assembly code to initialize ftrace_regs and call
> > ftrace_ops_list_func(). Therefore, from the KMSAN's point of view,
> > ftrace_regs is poisoned
From: "Steven Rostedt (Google)"
On bugs where the ring buffer timestamp gets out of sync, the config
CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS checks for it, and if a problem
is detected it causes a dump of the bad sub buffer.
It shows each event and their timestamp as well as
On Thu, 7 Dec 2023 17:19:24 -0500
Mathieu Desnoyers wrote:
> On 2023-12-07 17:16, Steven Rostedt wrote:
>
> [...]
>
> > diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> > index 8d2a4f00eca9..b10deb8a5647 100644
> > --- a/kernel/trace/ring_b
From: "Steven Rostedt (Google)"
On bugs where the ring buffer timestamp gets out of sync, the config
CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS checks for it, and if a problem
is detected it causes a dump of the bad sub buffer.
It shows each event and their timestamp as well as
On Wed, 6 Dec 2023 21:12:57 +0530
Krishna chaitanya chundru wrote:
> diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
> index f78aefd2d7a3..6acb85f4c5f8 100644
> --- a/drivers/bus/mhi/host/init.c
> +++ b/drivers/bus/mhi/host/init.c
> @@ -20,6 +20,9 @@
> #include
>
From: "Steven Rostedt (Google)"
There's a race where if an event is discarded from the ring buffer and an
interrupt were to happen at that time and insert an event, the time stamp
is still used from the discarded event as an offset. This can screw up the
timings.
If the even
From: "Steven Rostedt (Google)"
Since 64 bit cmpxchg() is very expensive on 32bit architectures, the
timestamp used by the ring buffer does some interesting tricks to be able
to still have an atomic 64 bit number. It originally just used 60 bits and
broke it up into two 32 bit w
From: "Steven Rostedt (Google)"
A trace instance may only need to enable specific events. As the eventfs
directory of an instance currently creates all events which adds overhead,
allow internal instances to be created with just the events in systems
that they care about. This curr
From: "Steven Rostedt (Google)"
When the ring buffer is being resized, it can cause side effects to the
running tracer. For instance, there's a race with irqsoff tracer that
swaps individual per cpu buffers between the main buffer and the snapshot
buffer. The resize operation modifie
From: "Steven Rostedt (Google)"
It used to be that only the top level instance had a snapshot buffer (for
latency tracers like wakeup and irqsoff). The update of the ring buffer
size would check if the instance was the top level and if so, it would
also update the snapshot buffer a
From: "Steven Rostedt (Google)"
It used to be that only the top level instance had a snapshot buffer (for
latency tracers like wakeup and irqsoff). Stopping a tracer in an
instance would not disable the snapshot buffer. This could have some
unintended consequences if the irqs
with the change log of patch 1.
That patch just needs to be ignored.
Steven Rostedt (Google) (3):
tracing: Always update snapshot buffer size
tracing: Stop current tracer when resizing buffer
tracing: Disable snapshot buffer when stopping instance tracers
kernel/trace
On Tue, 5 Dec 2023 20:39:28 +0100
Eric Dumazet wrote:
> > So, we do not want to add some tracepoint to do some unknow debug.
> > We have a clear goal. debugging is just an incidental capability.
> >
>
> We have powerful mechanisms in the stack already that ordinary (no
> privilege requested)
On Tue, 5 Dec 2023 19:13:09 +0100
Dmytro Maluka wrote:
> On Tue, Nov 28, 2023 at 12:21:17PM -0500, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > A trace instance may only need to enable specific events. As the eventfs
> > directory
On Tue, 5 Dec 2023 11:52:23 -0500
Steven Rostedt wrote:
> From: "Steven Rostedt (Google)"
>
> It used to be that only the top level instance had a snapshot buffer (for
> latency tracers like wakeup and irqsoff). The update of the ring buffer
> size would check if the i
From: "Steven Rostedt (Google)"
It used to be that only the top level instance had a snapshot buffer (for
latency tracers like wakeup and irqsoff). The update of the ring buffer
size would check if the instance was the top level and if so, it would
also update the snapshot buffer a
On Sun, 3 Dec 2023 10:33:32 +0900
Dominique Martinet wrote:
> > TP_printk("clnt %lu %s(tag = %d)\n%.3x: %16ph\n%.3x: %16ph\n",
> > (unsigned long)__entry->clnt,
> > show_9p_op(__entry->type),
> > __entry->tag, 0,
On Sat, 02 Dec 2023 14:05:24 +0100
Christian Schoenebeck wrote:
> > > --- a/include/trace/events/9p.h
> > > +++ b/include/trace/events/9p.h
> > > @@ -185,7 +185,8 @@ TRACE_EVENT(9p_protocol_dump,
> > > __entry->clnt = clnt;
> > > __entry->type = pdu->id;
> > >
On Fri, 1 Dec 2023 09:25:59 -0800
Justin Chen wrote:
> > It appears the sub instruction at 0x6dd0 correctly accounts for the
> > extra 8 bytes, so the frame pointer is valid. So it is our assumption
> > that there are no gaps between the stack frames is invalid.
>
> Thanks for the assistance.