On Thu, 14 Mar 2024 09:57:57 -0700
Alison Schofield wrote:
> On Fri, Feb 23, 2024 at 12:56:34PM -0500, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > [
> >This is a treewide change. I will likely re-create this patch again in
>
On Thu, 14 Mar 2024 15:39:28 +0100
Paolo Abeni wrote:
> On Wed, 2024-03-13 at 09:34 -0400, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > [
> >Note, I need to take this patch through my tree, so I'm looking for
> > acks.
>
On Wed, 13 Mar 2024 13:45:50 -0400
Steven Rostedt wrote:
> Let me test to make sure that when src is a string "like this" that it does
> the strcmp(). Otherwise, we may have to always do the strcmp(), which I
> really would like to avoid.
I added the below patch and e
On Wed, 13 Mar 2024 09:59:03 -0700
Nathan Chancellor wrote:
> > Reported-by: kernel test robot
> > Closes:
> > https://lore.kernel.org/oe-kbuild-all/202402292111.kidexylu-...@intel.com/
> > Fixes: 433e1d88a3be ("tracing: Add warning if string in __assign_str() does
> > not match __string()")
From: "Steven Rostedt (Google)"
[
Note, I need to take this patch through my tree, so I'm looking for acks.
This causes the build to fail when I add the __assign_str() check, which
I was about to push to Linus, but it breaks allmodconfig due to this error.
]
Th
From: "Steven Rostedt (Google)"
While testing libtracefs on the mmapped ring buffer, the test that checks
if missed events are accounted for failed when using the mapped buffer.
This is because the mapped page does not update the missed events that
were dropped because the writer
From: "Steven Rostedt (Google)"
The rb_watermark_hit() checks if the amount of data in the ring buffer is
above the percentage level passed in by the "full" variable. If it is, it
returns true.
But it also sets the "shortest_full" field of the cpu_buffer that
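The percentage test described above can be modeled in a few lines. This is a userspace sketch, not the kernel's rb_watermark_hit(): the function name, the page-count parameters, and the integer-percentage comparison are illustrative assumptions (the real function also records "shortest_full" and participates in waking waiters).

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of a watermark test: return true once the buffer
 * holds at least "full" percent of its capacity.  full == 0 means the
 * waiter is satisfied by any data at all (the !full case). */
static bool watermark_hit(size_t used_pages, size_t total_pages, int full)
{
	if (full == 0)
		return used_pages > 0;
	/* integer arithmetic only, the way kernel code avoids floating point */
	return used_pages * 100 >= (size_t)full * total_pages;
}
```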
On Wed, 13 Mar 2024 00:38:42 +0900
Masami Hiramatsu (Google) wrote:
> On Tue, 12 Mar 2024 09:19:21 -0400
> Steven Rostedt wrote:
>
> > From: "Steven Rostedt (Google)"
> >
> > The check for knowing if the poll should wait or not is basically the
>
On Wed, 13 Mar 2024 00:22:10 +0900
Masami Hiramatsu (Google) wrote:
> On Tue, 12 Mar 2024 09:19:20 -0400
> Steven Rostedt wrote:
>
> > From: "Steven Rostedt (Google)"
> >
> > If a reader of the ring buffer is doing a poll, and waiting for the ring
>
From: "Steven Rostedt (Google)"
The WARN_ON() check in __assign_str() to catch where the source variable
to the macro doesn't match the source variable to __string() gives an
error in clang:
>> include/trace/events/sunrpc.h:703:4: warning: result of comparison against a
>
From: "Steven Rostedt (Google)"
If a reader of the ring buffer is doing a poll, and waiting for the ring
buffer to hit a specific watermark, there could be a case where it gets
into an infinite ping-pong loop.
The poll code has:
rbwork->full_waiters_pending = true;
if
nd !full wakeups. But since poll uses the same logic for
full wakeups it can just call that function with full set.
Changes since v1:
https://lore.kernel.org/all/20240312115455.666920...@goodmis.org/
- Removed unused 'flags' in ring_buffer_poll_wait() as the spin_lock
is now in rb_watermark_hit().
Steve
From: "Steven Rostedt (Google)"
The check for knowing if the poll should wait or not is basically the
exact same logic as rb_watermark_hit(). The only difference is that
rb_watermark_hit() also handles the !full case. But for the full case, the
logic is the same. Just call th
From: "Steven Rostedt (Google)"
When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.
CPU 0
https://lore.kernel.org/lkml/20240308183816.676883...@goodmis.org/
- My tests triggered a warning about calling a mutex_lock() after a
prepare_to_wait() that changed the task's state. Convert the affected
mutex over to a spinlock.
Steven Rostedt (Google) (2):
ring-buffer: Use wait_even
From: "Steven Rostedt (Google)"
Convert ring_buffer_wait() over to wait_event_interruptible(). The default
condition is to execute the wait loop inside __wait_event() just once.
This does not change the ring_buffer_wait() prototype yet, but
restructures the code so that it can ta
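As a rough illustration of the wait_event_interruptible() pattern the patch converts to, here is a userspace sketch. Everything in it is a stand-in: fake_schedule() plays the role of schedule() plus the writer's wakeup, and the real kernel macro also does the prepare_to_wait()/finish_wait() bookkeeping this model omits.

```c
#include <assert.h>
#include <stdbool.h>

static int wakeups;

/* stands in for schedule(): here the "writer" satisfies the condition
 * and "wakes" us; returns false where the kernel would see a signal */
static bool fake_schedule(bool *cond)
{
	wakeups++;
	*cond = true;
	return true;
}

/* model of the wait_event_interruptible() loop: test the condition
 * before every sleep, sleep only while it is false */
static int wait_event_model(bool *cond)
{
	while (!*cond) {
		if (!fake_schedule(cond))
			return -1;	/* would be -ERESTARTSYS in the kernel */
	}
	return 0;
}
```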
From: "Steven Rostedt (Google)"
When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.
CPU 0
mutex_lock() after a
prepare_to_wait() that changed the task's state. Convert the affected
mutex over to a spinlock.
Steven Rostedt (Google) (2):
ring-buffer: Use wait_event_interruptible() in ring_buffer_wait()
tracing/ring-buffer: Fix wait_on_pipe() race
include/linux/
From: "Steven Rostedt (Google)"
Convert ring_buffer_wait() over to wait_event_interruptible(). The default
condition is to execute the wait loop inside __wait_event() just once.
This does not change the ring_buffer_wait() prototype yet, but
restructures the code so that it can ta
From: "Steven Rostedt (Google)"
The check for knowing if the poll should wait or not is basically the
exact same logic as rb_watermark_hit(). The only difference is that
rb_watermark_hit() also handles the !full case. But for the full case, the
logic is the same. Just call th
From: "Steven Rostedt (Google)"
If a reader of the ring buffer is doing a poll, and waiting for the ring
buffer to hit a specific watermark, there could be a case where it gets
into an infinite ping-pong loop.
The poll code has:
rbwork->full_waiters_pending = true;
if
nd !full wakeups. But since poll uses the same logic for
full wakeups it can just call that function with full set.
Steven Rostedt (Google) (2):
ring-buffer: Fix full_waiters_pending in poll
ring-buffer: Reuse rb_watermark_hit() for the poll logic
kernel/trace/ring_buffer.c | 30 +++---
1 file changed, 19 insertions(+), 11 deletions(-)
On Fri, 8 Mar 2024 13:41:59 -0800
Linus Torvalds wrote:
> On Fri, 8 Mar 2024 at 13:39, Linus Torvalds
> wrote:
> >
> > So the above "complexity" is *literally* just changing the
> >
> > (new = atomic_read_acquire(>seq)) != old
> >
> > condition to
> >
> >
On Sat, 9 Mar 2024 10:27:47 -0800
Kees Cook wrote:
> On Tue, Mar 05, 2024 at 08:59:10PM -0500, Steven Rostedt wrote:
> > This is a way to map a ring buffer instance across reboots.
>
> As mentioned on Fedi, check out the persistent storage subsystem
> (pstore)[1]. It alread
On Fri, 8 Mar 2024 12:39:10 -0800
Linus Torvalds wrote:
> On Fri, 8 Mar 2024 at 10:38, Steven Rostedt wrote:
> >
> > A patch was sent to "fix" the wait_index variable that is used to help with
> > waking of waiters on the ring buffer. The patch was reje
From: "Steven Rostedt (Google)"
When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.
CPU 0
From: "Steven Rostedt (Google)"
The ring_buffer_wait() needs to be broken into three functions for proper
synchronization from the context of the callers:
ring_buffer_prepare_to_wait()
ring_buffer_wait()
ring_buffer_finish_wait()
To simplify the process, pull out the logic f
From: "Steven Rostedt (Google)"
When the tracing_pipe_raw file is closed, if there are readers still
blocked on it, they need to be woken up. Currently a wait_index is used.
When the readers need to be woken, the index is updated and they are all
woken up.
But there is a race where a
From: "Steven Rostedt (Google)"
The .release() function does not get called until all readers of a file
descriptor are finished.
If a thread is blocked on reading a file descriptor in ring_buffer_wait(),
and another thread closes the file descriptor, it will not wake up the
ot
From: "Steven Rostedt (Google)"
The "shortest_full" variable is used to keep track of the waiter that is
waiting for the smallest amount on the ring buffer before being woken up.
When a task waits on the ring buffer, it passes in a "full" value that is
a percentag
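The bookkeeping being described can be sketched in userspace C. The names and the reset-on-wake behavior here are illustrative assumptions about the mechanism, not the kernel code (the resetting is what the "Fix resetting of shortest_full" patch in this series concerns itself with).

```c
#include <assert.h>
#include <stdbool.h>

/* 0 means no waiter has registered a "full" watermark */
static int shortest_full;

/* reader side: remember the smallest watermark any waiter asked for */
static void register_full_waiter(int full)
{
	if (!shortest_full || full < shortest_full)
		shortest_full = full;
}

/* writer side: wake everyone once the buffer passes the smallest
 * watermark, and reset it so the next waiter starts fresh */
static bool writer_should_wake(int percent_filled)
{
	if (shortest_full && percent_filled >= shortest_full) {
		shortest_full = 0;
		return true;
	}
	return false;
}
```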
From: "Steven Rostedt (Google)"
A task can wait on a ring buffer for when it fills up to a specific
watermark. The writer will check the minimum watermark that waiters are
waiting for and if the ring buffer is past that, it will wake up all the
waiters.
The waiters are in a
after a
prepare_to_wait() that changed the task's state. Convert the affected
mutex over to a spinlock.
Steven Rostedt (Google) (6):
ring-buffer: Fix waking up ring buffer readers
ring-buffer: Fix resetting of shortest_full
tracing: Use .flush() call to wake up readers
tra
On Fri, 08 Mar 2024 13:38:20 -0500
Steven Rostedt wrote:
> +static DEFINE_MUTEX(wait_mutex);
> +
> +static bool wait_woken_prepare(struct trace_iterator *iter, int *wait_index)
> +{
> + bool woken = false;
> +
> + mutex_lock(&wait_mutex);
> + if (iter->waking)
>
From: "Steven Rostedt (Google)"
The ring_buffer_wait() needs to be broken into three functions for proper
synchronization from the context of the callers:
ring_buffer_prepare_to_wait()
ring_buffer_wait()
ring_buffer_finish_wait()
To simplify the process, pull out the logic f
From: "Steven Rostedt (Google)"
When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.
CPU 0
From: "Steven Rostedt (Google)"
The .release() function does not get called until all readers of a file
descriptor are finished.
If a thread is blocked on reading a file descriptor in ring_buffer_wait(),
and another thread closes the file descriptor, it will not wake up the
ot
From: "Steven Rostedt (Google)"
The "shortest_full" variable is used to keep track of the waiter that is
waiting for the smallest amount on the ring buffer before being woken up.
When a task waits on the ring buffer, it passes in a "full" value that is
a percentag
From: "Steven Rostedt (Google)"
When the tracing_pipe_raw file is closed, if there are readers still
blocked on it, they need to be woken up. Currently a wait_index is used.
When the readers need to be woken, the index is updated and they are all
woken up.
But there is a race where a
From: "Steven Rostedt (Google)"
A task can wait on a ring buffer for when it fills up to a specific
watermark. The writer will check the minimum watermark that waiters are
waiting for and if the ring buffer is past that, it will wake up all the
waiters.
The waiters are in a
if its own condition has been set (in this case: iter->waking)
and then sleep. Follows the same semantics as any other wait logic.
Steven Rostedt (Google) (6):
ring-buffer: Fix waking up ring buffer readers
ring-buffer: Fix resetting of shortest_full
tracing: Use .flush()
> Signed-off-by: Kassey Li
> ---
> Changelog:
> v1:
> https://lore.kernel.org/all/20240308010929.1955339-1-quic_yinga...@quicinc.com/
> v1->v2:
> - do not follow checkpatch in TRACE_EVENT() macros
> - add sample "workqueue_activate_work: work struct ff80413a78b
On Fri, 8 Mar 2024 09:09:29 +0800
Kassey Li wrote:
> The trace event "workqueue_activate_work" only print work struct.
> However, function is the region of interest in a full sequence of work.
> Current workqueue_activate_work trace event output:
>
> workqueue_activate_work: work struct
On Wed, 6 Mar 2024 10:55:34 +0800
linke li wrote:
> Mark data races to work->wait_index as benign using READ_ONCE and WRITE_ONCE.
> These accesses are expected to be racy.
Are we now to the point that every single access of a variable (long size
or less) needs a READ_ONCE/WRITE_ONCE even with
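For context, READ_ONCE()/WRITE_ONCE() boil down to volatile accesses that keep the compiler from tearing, refetching, or eliding a load or store. A simplified userspace approximation follows; the real kernel macros (rwonce.h) carry extra handling that this sketch drops.

```c
#include <assert.h>

/* Simplified userspace take on the kernel macros: a single volatile
 * access.  No memory ordering is implied; it only makes the access one
 * compiler-visible access for machine-word-sized types. */
#define READ_ONCE(x)      (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)  (*(volatile __typeof__(x) *)&(x) = (v))
```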
I forgot to add [POC] to the topic.
All these patches are a proof of concept.
-- Steve
From: "Steven Rostedt (Google)"
Make sure all the events in each of the sub-buffers that were mapped in a
memory region are valid. This moves the code that walks the buffers for
time-stamp validation out of the CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS
ifdef block and is used t
From: "Steven Rostedt (Google)"
Add a test against the ring buffer memory range to see if it has valid
data. The ring_buffer_meta structure is given a new field called
"first_buffer" which holds the address of the first sub-buffer. This is
used to both determine if the ot
From: "Steven Rostedt (Google)"
Populate the ring_buffer_meta array. It holds the pointer to the
head_buffer (next to read), the commit_buffer (next to write), the size of
the sub-buffers, number of sub-buffers and an array that keeps track of
the order of the sub-buffers.
This i
From: "Steven Rostedt (Google)"
Add a buffer_meta per-cpu file for the trace instance that is mapped to
boot memory. This shows the current meta-data and can be used by user
space tools to record off the current mappings to help reconstruct the
ring buffer after a reboot.
It does not
From: "Steven Rostedt (Google)"
Add two global variables trace_buffer_start and trace_buffer_size. If they
are both set, then a "boot_mapped" instance will be created using the
memory specified by these variables as its ring buffer.
The instance will exist in:
/sys/kern
From: "Steven Rostedt (Google)"
Do not submit!
This is for testing purposes only. It hard codes an address that I was
using to store the ring buffer range. How the memory actually gets mapped
will be another project.
Signed-off-by: Steven Rostedt (Google)
---
arch/x86/kernel/se
From: "Steven Rostedt (Google)"
In preparation to allowing the trace ring buffer to be allocated in a
range of memory that is persistent across reboots, add
ring_buffer_alloc_range(). It takes a contiguous range of memory and will
split it up evenly for the per CPU ring buffers.
trace
and it will have the trace.
I'm sure there's still some gotchas here, which is why this is currently
still just a POC.
Enjoy...
Steven Rostedt (Google) (8):
ring-buffer: Allow mapped field to be set without mapping
ring-buffer: Add ring_buffer_alloc_range()
traci
From: "Steven Rostedt (Google)"
In preparation for having the ring buffer mapped to a dedicated location,
which will have the same restrictions as user space memory mapped buffers,
allow it to use the "mapped" field of the ring_buffer_per_cpu structure
without having the
From: "Steven Rostedt (Google)"
Limit the max print event of trace_marker to just 4K string size. This must
also be less than the amount that can be held by a trace_seq along with
the text that is before the output (like the task name, PID, CPU, state,
etc). As trace_seq is made to ha
On Mon, 4 Mar 2024 21:48:44 -0500
Mathieu Desnoyers wrote:
> On 2024-03-04 21:37, Steven Rostedt wrote:
> > On Mon, 4 Mar 2024 21:35:38 -0500
> > Steven Rostedt wrote:
> >
> >>> And it's not for debugging, it's for validation of assumptions
> >>
On Mon, 4 Mar 2024 21:35:38 -0500
Steven Rostedt wrote:
> > And it's not for debugging, it's for validation of assumptions
> > made about an upper bound limit defined for a compile-time
> > check, so as the code evolves issues are caught early.
>
> validating is debug
On Mon, 4 Mar 2024 21:18:13 -0500
Mathieu Desnoyers wrote:
> On 2024-03-04 20:59, Steven Rostedt wrote:
> > On Mon, 4 Mar 2024 20:42:39 -0500
> > Mathieu Desnoyers wrote:
> >
> >> #define TRACE_OUTPUT_META_DATA_MAX_LEN 80
> >>
> >
On Mon, 4 Mar 2024 20:42:39 -0500
Mathieu Desnoyers wrote:
> #define TRACE_OUTPUT_META_DATA_MAX_LEN 80
>
> and a runtime check in the code generating this header.
>
> This would avoid adding an unchecked upper limit.
That would be a lot of complex code that is for debugging
On Mon, 4 Mar 2024 20:36:28 -0500
Mathieu Desnoyers wrote:
> > <...>-999 [001] . 2296.140373: tracing_mark_write:
> > hello
> > ^^^
> > This is the meta data that is added to trace_seq
>
> If this
On Mon, 4 Mar 2024 20:35:16 -0500
Steven Rostedt wrote:
> > BUILD_BUG_ON(TRACING_MARK_MAX_SIZE + sizeof(meta data stuff...) >
> > TRACE_SEQ_SIZE);
>
> That's not the meta size I'm worried about. The sizeof(meta data) is the
> raw event binary data, which is
On Mon, 4 Mar 2024 20:15:57 -0500
Mathieu Desnoyers wrote:
> On 2024-03-04 19:27, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > Since the size of trace_seq's buffer is the max an event can output, have
> > the trace_marker be half of t
On Mon, 4 Mar 2024 16:43:46 -0800
Randy Dunlap wrote:
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index 8198bfc54b58..d68544aef65f 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -7320,6 +7320,17 @@ tracing_mark_write(struct file *filp, const char
From: "Steven Rostedt (Google)"
Since the size of trace_seq's buffer is the max an event can output, have
the trace_marker be half of the entire TRACE_SEQ_SIZE, which is 4K. That
will keep writes that have meta data written from being dropped (but
reported), because the total output of
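The arithmetic is simple enough to pin down with a model. The 4K page size here is an assumption, and the real TRACE_SEQ_BUFFER_SIZE also subtracts the seq_buf bookkeeping fields, as quoted later in this thread.

```c
#include <assert.h>

/* Model of the sizing: trace_seq holds two pages, and trace_marker
 * writes are capped at half of that, leaving the other half for the
 * metadata printed before the payload (task name, PID, CPU, flags). */
#define PAGE_SIZE_MODEL  4096
#define TRACE_SEQ_SIZE   (PAGE_SIZE_MODEL * 2)
#define MARKER_MAX       (TRACE_SEQ_SIZE / 2)
```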
On Mon, 4 Mar 2024 18:55:00 -0500
Steven Rostedt wrote:
> On Mon, 4 Mar 2024 18:23:41 -0500
> Mathieu Desnoyers wrote:
>
> > It appears to currently be limited by
> >
> > #define TRACE_SEQ_BUFFER_SIZE (PAGE_SIZE * 2 - \
> > (sizeof(struct seq_b
From: "Steven Rostedt (Google)"
The trace_seq buffer is used to print out entire events. It's typically
set to PAGE_SIZE * 2 as there's some events that can be quite large.
As a side effect, writes to trace_marker are limited by both the size of the
trace_seq buffer as well as the rin
On Mon, 4 Mar 2024 18:23:41 -0500
Mathieu Desnoyers wrote:
> It appears to currently be limited by
>
> #define TRACE_SEQ_BUFFER_SIZE (PAGE_SIZE * 2 - \
> (sizeof(struct seq_buf) + sizeof(size_t) + sizeof(int)))
>
> checked within tracing_mark_write().
Yeah, I can hard code this to
From: "Steven Rostedt (Google)"
This reverts 60be76eeabb3d ("tracing: Add size check when printing
trace_marker output"). The only reason the precision check was added
was because of a bug that miscalculated the write size of the string into
the ring buffer and it t
On Fri, 1 Mar 2024 12:25:10 -0800
"Paul E. McKenney" wrote:
> > That would work for me. If there are no objections, I will make this
> > change.
>
> But I did check the latency of synchronize_rcu_tasks_rude() (about 100ms)
> and synchronize_rcu() (about 20ms). This is on a
On Fri, 1 Mar 2024 11:37:54 -0500
Mathieu Desnoyers wrote:
> On 2024-03-01 10:49, Steven Rostedt wrote:
> > On Fri, 1 Mar 2024 13:37:18 +0800
> > linke wrote:
> >
> >>> So basically you are worried about read-tearing?
> >>>
> >>>
On Fri, 1 Mar 2024 13:37:18 +0800
linke wrote:
> > So basically you are worried about read-tearing?
> >
> > That wasn't mentioned in the change log.
>
> Yes. Sorry for making this confused, I am not very familiar with this and
> still learning.
No problem. We all have to learn this anyway.
On Wed, 31 Jan 2024 14:47:31 +0000
David Howells wrote:
> Hi Steven,
Hi David,
Sorry, I just noticed this email as it was buried in other unread emails :-p
>
> I have a tracepoint in AF_RXRPC that displays information about a timeout I'm
> going to set. I have the timeout in a ktime_t as an
On Thu, 29 Feb 2024 20:32:26 +0800
linke wrote:
> Hi Steven, sorry for the late reply.
>
> >
> > Now the reason for the above READ_ONCE() is because the variables *are*
> > going to be used again. We do *not* want the compiler to play any games
> > with that.
> >
>
> I don't think it is
just might
> happen within a trampoline.
>
> Therefore, update ftrace_shutdown() to invoke synchronize_rcu_tasks()
> based on CONFIG_TASKS_RCU instead of CONFIG_PREEMPTION.
>
> Only build tested.
>
> Signed-off-by: Paul E. McKenney
> Cc: Steven Rostedt
> Cc: Ma
From: "Steven Rostedt (Google)"
There are two WARN_ON*() warnings in tracepoint.h that deal with RCU
usage. But when they trigger, especially from using a TRACE_EVENT()
macro, the information is not very helpful and is confusing:
[ cut here ]
WARNING: CPU
On Wed, 28 Feb 2024 10:52:52 -0500
Steven Rostedt wrote:
> The prototype of fchownat() is:
>
> int fchmodat(int dirfd, const char *pathname, mode_t mode, int flags);
>
> Where pathname is the third parameter, not the first, and mode is the third.
I meant pathname is the s
On Wed, 28 Feb 2024 17:25:40 +0300
Максим Морсков wrote:
> Dear colleagues,
> One last question — is it bug or feature that tprobe event tracing can not
> correctly dereference string pointers from pt_regs?
> For example:
> echo 't:tmy_chmod sys_enter id=$arg2 filename=+8($arg1):string
>
> + unsigned long end,
> + unsigned long nr_migrated,
> + unsigned long nr_reclaimed,
> + unsigned long nr_mapped,
> + int migratetype),
Well, you didn't need to change the order of the parameters.
Anyway, from a tracing point of view:
From: "Steven Rostedt (Google)"
The trace_marker write goes into the ring buffer. A test was added to
write a string as big as the sub-buffer of the ring buffer to see if it
would work. A sub-buffer is typically PAGE_SIZE in length.
On PowerPC architecture, the ftrace selftest for tr
On Tue, 27 Feb 2024 10:50:36 +0800 (CST)
wrote:
> include/trace/events/icmp.h | 57
> +
> net/ipv4/icmp.c | 4
> 2 files changed, 61 insertions(+)
> create mode 100644 include/trace/events/icmp.h
>
> diff --git
On Sun, 25 Feb 2024 15:03:02 -0500
Steven Rostedt wrote:
> *But* looking at this deeper, the commit_page may need a READ_ONCE()
> but not for the reason you suggested.
>
> commit_page = cpu_buffer->commit_page;
> commit_ts = commit_page->page->time_sta
On Sat, 24 Feb 2024 13:52:06 +
chengming.z...@linux.dev wrote:
> From: Chengming Zhou
>
> The SLAB_MEM_SPREAD flag is already a no-op as of 6.8-rc1, remove
> its usage so we can delete it from slab. No functional change.
>
> Signed-off-by: Chengming Zhou
Queued.
Thanks!
-- Steve
> ---
On Mon, 26 Feb 2024 23:41:56 +0900
Masami Hiramatsu (Google) wrote:
> Hi,
> (Cc: linux-kernel-trace ML for sharing this knowledge)
>
> On Mon, 26 Feb 2024 16:36:29 +0300
> Максим Морсков wrote:
>
> >
> > Hello, dear Masami.
> > I am researching Linux event tracing subsystem in part of
On Mon, 26 Feb 2024 12:06:29 -0500
Steven Rostedt wrote:
> On Mon, 26 Feb 2024 10:00:15 +
> Richard Chang wrote:
>
> > alloc_contig_migrate_range has all the information needed to
> > understand big contiguous allocation latency. For example, how many
> > p
On Mon, 26 Feb 2024 09:33:28 +0900
Masami Hiramatsu (Google) wrote:
> On Fri, 23 Feb 2024 16:13:56 -0500
> Steven Rostedt wrote:
>
> > From: "Steven Rostedt (Google)"
> >
> > In preparation to remove the second parameter of __assign_str(), make sure
>
On Mon, 26 Feb 2024 10:00:15 +
Richard Chang wrote:
> alloc_contig_migrate_range has all the information needed to
> understand big contiguous allocation latency. For example, how many
> pages are migrated, how many times they were needed to unmap from
> page tables.
>
> This patch adds
On Sun, 25 Feb 2024 11:05:06 +0800
linke li wrote:
> In function ring_buffer_iter_empty(), cpu_buffer->commit_page and
> curr_commit_page->page->time_stamp is read using READ_ONCE() in
> line 4354, 4355
>
> 4354curr_commit_page = READ_ONCE(cpu_buffer->commit_page);
> 4355curr_commit_ts
From: "Steven Rostedt (Google)"
The second parameter of __assign_rel_str() is no longer used. It can be removed.
Note, the only real user of rel_string is user events. This code is just
in the sample code for testing purposes.
This makes __assign_rel_str() different than __
From: "Steven Rostedt (Google)"
In preparation to remove the second parameter of __assign_str(), make sure
it is really a duplicate of __string() by adding a WARN_ON_ONCE().
Signed-off-by: Steven Rostedt (Google)
---
Changes since v1:
https://lore.kernel.org/linux-tr
From: "Steven Rostedt (Google)"
In preparation to remove the second parameter of __assign_str(), make sure
it is really a duplicate of __string() by adding a WARN_ON_ONCE().
Signed-off-by: Steven Rostedt (Google)
---
include/trace/stages/stage6_event_callback.h | 1 +
1 file
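The check being added can be modeled as below. This is a hedged sketch, not the macro itself: the function name and the warned flag are inventions, and the pointer-compare-then-strcmp() fallback reflects the string-literal concern discussed upthread (a literal passed to __string() and __assign_str() need not share an address).

```c
#include <assert.h>
#include <string.h>

static int warned;

/* Model of the sanity check: complain once if the source handed to
 * __assign_str() is not the string that __string() captured.  Compare
 * pointers first; fall back to strcmp() for string literals. */
static void check_assign_str(const char *saved, const char *src)
{
	if (saved != src && strcmp(saved, src) != 0)
		warned = 1;	/* stands in for WARN_ON_ONCE() */
}
```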
On Fri, 23 Feb 2024 13:46:53 -0500
Steven Rostedt wrote:
> Now one thing I could do is to not remove the parameter, but just add:
>
> WARN_ON_ONCE((src) != __data_offsets->item##_ptr_);
>
> in the __assign_str() macro to make sure that it's still the same that is
>
From: "Steven Rostedt (Google)"
There's no example code that uses __string_len(), and since the sample
code is used for testing the event logic, add a use case.
Signed-off-by: Steven Rostedt (Google)
---
samples/trace_events/trace-events-sample.h | 7 +--
1 file changed, 5
From: "Steven Rostedt (Google)"
Now that __assign_str() gets the length from the __string() (and
__string_len()) macros, there's no reason to have a separate
__assign_str_len() macro as __assign_str() can get the length of the
string needed.
Also remove __assign_rel_str() altho
On Fri, 23 Feb 2024 14:50:49 -0500
Kent Overstreet wrote:
> Tangentially related though, what would make me really happy is if we
> could create the string with in the TP__fast_assign() section. I have to
> have a bunch of annoying wrappers right now because the string length
> has to be known
From: "Steven Rostedt (Google)"
Now that __assign_str() gets the length from the __string() (and
__string_len()) macros, there's no reason to have a separate
__assign_str_len() macro as __assign_str() can get the length of the
string needed.
Signed-off-by: Steven Rostedt (Google)
--
On Fri, 23 Feb 2024 10:30:45 -0800
Jeff Johnson wrote:
> On 2/23/2024 9:56 AM, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > [
> >This is a treewide change. I will likely re-create this patch again in
> >the second
On Fri, 23 Feb 2024 12:56:34 -0500
Steven Rostedt wrote:
> Note, the same updates will need to be done for:
>
> __assign_str_len()
> __assign_rel_str()
> __assign_rel_str_len()
Correction: The below macros do not pass in their source to the entry
macros, so the
From: "Steven Rostedt (Google)"
Running the ftrace selftests caused the ring buffer mapping test to fail.
Investigating, I found that the snapshot counter would be incremented
every time a tracer that uses the snapshot is enabled even if the snapshot
was used by the previ
The ring buffer mapping test failed after running the ftrace tests.
This was due to mismatched snapshot accounting that left the
snapshot counter elevated when no snapshot was in use, which prevents the ring buffer
from being mapped.
Steven Rostedt (Google) (2):
tracing: Fix snapshot counter
From: "Steven Rostedt (Google)"
Running the ftrace selftests caused the ring buffer mapping test to fail.
Investigating, I found that the snapshot counter would be incremented
every time a snapshot trigger was added, even if that snapshot trigger
failed.
# cd /sys/kernel/traci
On Thu, 22 Feb 2024 00:18:05 +
Beau Belgrave wrote:
> Currently user_events supports 1 event with the same name and must have
> the exact same format when referenced by multiple programs. This opens
> an opportunity for malicous or poorly thought through programs to
malicious? ;-)
--