unsigned long end,
> + unsigned long nr_migrated,
> + unsigned long nr_reclaimed,
> + unsigned long nr_mapped,
> + int migratetype),
Well, you didn't need to change the order of the parameters.
Anyway, from a tracing point of view:
From: "Steven Rostedt (Google)"
The trace_marker write goes into the ring buffer. A test was added to
write a string as big as the sub-buffer of the ring buffer to see if it
would work. A sub-buffer is typically PAGE_SIZE in length.
On PowerPC architecture, the ftrace selftest for tr
On Tue, 27 Feb 2024 10:50:36 +0800 (CST)
wrote:
> include/trace/events/icmp.h | 57 +
> net/ipv4/icmp.c | 4
> 2 files changed, 61 insertions(+)
> create mode 100644 include/trace/events/icmp.h
>
> diff --git
On Sun, 25 Feb 2024 15:03:02 -0500
Steven Rostedt wrote:
> *But* looking at this deeper, the commit_page may need a READ_ONCE()
> but not for the reason you suggested.
>
> commit_page = cpu_buffer->commit_page;
> commit_ts = commit_page->page->time_sta
On Sat, 24 Feb 2024 13:52:06 +
chengming.z...@linux.dev wrote:
> From: Chengming Zhou
>
> The SLAB_MEM_SPREAD flag is already a no-op as of 6.8-rc1, remove
> its usage so we can delete it from slab. No functional change.
>
> Signed-off-by: Chengming Zhou
Queued.
Thanks!
-- Steve
> ---
On Mon, 26 Feb 2024 23:41:56 +0900
Masami Hiramatsu (Google) wrote:
> Hi,
> (Cc: linux-kernel-trace ML for sharing this knowledge)
>
> On Mon, 26 Feb 2024 16:36:29 +0300
> Максим Морсков wrote:
>
> >
> > Hello, dear Masami.
> > I am researching Linux event tracing subsystem in part of
On Mon, 26 Feb 2024 12:06:29 -0500
Steven Rostedt wrote:
> On Mon, 26 Feb 2024 10:00:15 +
> Richard Chang wrote:
>
> > alloc_contig_migrate_range has all the information needed to
> > understand big contiguous allocation latency. For example, how many
> > p
On Mon, 26 Feb 2024 09:33:28 +0900
Masami Hiramatsu (Google) wrote:
> On Fri, 23 Feb 2024 16:13:56 -0500
> Steven Rostedt wrote:
>
> > From: "Steven Rostedt (Google)"
> >
> > In preparation to remove the second parameter of __assign_str(), make sure
>
On Mon, 26 Feb 2024 10:00:15 +
Richard Chang wrote:
> alloc_contig_migrate_range has all the information needed to
> understand big contiguous allocation latency. For example, how many
> pages are migrated, how many times they were needed to unmap from
> page tables.
>
> This patch adds
On Sun, 25 Feb 2024 11:05:06 +0800
linke li wrote:
> In function ring_buffer_iter_empty(), cpu_buffer->commit_page and
> curr_commit_page->page->time_stamp is read using READ_ONCE() in
> line 4354, 4355
>
> 4354	curr_commit_page = READ_ONCE(cpu_buffer->commit_page);
> 4355	curr_commit_ts
From: "Steven Rostedt (Google)"
The second parameter of __assign_rel_str() is no longer used. It can be removed.
Note, the only real user of rel_string is user events. This code is just
in the sample code for testing purposes.
This makes __assign_rel_str() different than __
From: "Steven Rostedt (Google)"
In preparation to remove the second parameter of __assign_str(), make sure
it is really a duplicate of __string() by adding a WARN_ON_ONCE().
Signed-off-by: Steven Rostedt (Google)
---
Changes since v1:
https://lore.kernel.org/linux-tr
From: "Steven Rostedt (Google)"
In preparation to remove the second parameter of __assign_str(), make sure
it is really a duplicate of __string() by adding a WARN_ON_ONCE().
Signed-off-by: Steven Rostedt (Google)
---
include/trace/stages/stage6_event_callback.h | 1 +
1 file
On Fri, 23 Feb 2024 13:46:53 -0500
Steven Rostedt wrote:
> Now one thing I could do is to not remove the parameter, but just add:
>
> WARN_ON_ONCE((src) != __data_offsets->item##_ptr_);
>
> in the __assign_str() macro to make sure that it's still the same that is
From: "Steven Rostedt (Google)"
There's no example code that uses __string_len(), and since the sample
code is used for testing the event logic, add a use case.
Signed-off-by: Steven Rostedt (Google)
---
samples/trace_events/trace-events-sample.h | 7 +--
1 file changed, 5
From: "Steven Rostedt (Google)"
Now that __assign_str() gets the length from the __string() (and
__string_len()) macros, there's no reason to have a separate
__assign_str_len() macro as __assign_str() can get the length of the
string needed.
Also remove __assign_rel_str() altho
On Fri, 23 Feb 2024 14:50:49 -0500
Kent Overstreet wrote:
> Tangentially related though, what would make me really happy is if we
> could create the string with in the TP__fast_assign() section. I have to
> have a bunch of annoying wrappers right now because the string length
> has to be known
From: "Steven Rostedt (Google)"
Now that __assign_str() gets the length from the __string() (and
__string_len()) macros, there's no reason to have a separate
__assign_str_len() macro as __assign_str() can get the length of the
string needed.
Signed-off-by: Steven Rostedt (Google)
--
On Fri, 23 Feb 2024 10:30:45 -0800
Jeff Johnson wrote:
> On 2/23/2024 9:56 AM, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > [
> >This is a treewide change. I will likely re-create this patch again in
> >the second
On Fri, 23 Feb 2024 12:56:34 -0500
Steven Rostedt wrote:
> Note, the same updates will need to be done for:
>
> __assign_str_len()
> __assign_rel_str()
> __assign_rel_str_len()
Correction: The below macros do not pass in their source to the entry
macros, so the
From: "Steven Rostedt (Google)"
Running the ftrace selftests caused the ring buffer mapping test to fail.
Investigating, I found that the snapshot counter would be incremented
every time a tracer that uses the snapshot is enabled even if the snapshot
was used by the previ
The ring buffer mapping test failed after running the ftrace tests.
This was due to mismatched snapshot accounting that left the snapshot
counter elevated when the snapshot was not in use, which prevents the
ring buffer from being mapped.
Steven Rostedt (Google) (2):
tracing: Fix snapshot counter
From: "Steven Rostedt (Google)"
Running the ftrace selftests caused the ring buffer mapping test to fail.
Investigating, I found that the snapshot counter would be incremented
every time a snapshot trigger was added, even if that snapshot trigger
failed.
# cd /sys/kernel/traci
On Thu, 22 Feb 2024 00:18:05 +
Beau Belgrave wrote:
> Currently user_events supports 1 event with the same name and must have
> the exact same format when referenced by multiple programs. This opens
> an opportunity for malicous or poorly thought through programs to
malicious? ;-)
--
On Thu, 22 Feb 2024 00:18:04 +
Beau Belgrave wrote:
> The current code for finding and deleting events assumes that there will
> never be cases when user_events are registered with the same name, but
> different formats. Scenarios exist where programs want to use the same
> name but have
From: "Steven Rostedt (Google)"
The TRACE_EVENT macros have some dependency if a __string() field is NULL,
where it will save "(null)" as the string. This string is also used by
__assign_str(). It's better to create a single macro instead of having
something tha
From: "Steven Rostedt (Google)"
Instead of having:
#define __assign_str(dst, src)\
memcpy(__get_str(dst), __data_offsets.dst##_ptr_ ? \
__data_offsets.dst##_ptr
From: "Steven Rostedt (Google)"
The TRACE_EVENT() macro handles dynamic strings by having:
TP_PROTO(struct some_struct *s),
TP_ARGS(s),
TP_STRUCT__entry(
__string(my_string, s->string)
),
TP_fast_assign(
__assign_str(my_string, s->string);
)
TP_printk
t be consistent between __string() and __assign_str().
Steven Rostedt (Google) (4):
tracing: Rework __assign_str() and __string() to not duplicate getting
the string
tracing: Do not calculate strlen() twice for __string() fields
tracing: Use ? : shortcut in trace macros
tracing:
h of the string fields. Instead of
finding the string twice, just save it off in another field in that helper
structure, and have __assign_str() use that instead.
Steven Rostedt (Google) (2):
tracing: Rework __assign_str() and __string() to not duplicate getting
the string
tracing: D
On Thu, 22 Feb 2024 13:25:34 -0500
Chuck Lever wrote:
> Do you want me to take this through the nfsd tree, or would you like
> an Ack from me so you can handle it as part of your clean up? Just
> in case:
>
> Acked-by: Chuck Lever
>
As my patches depend on this, I can take it with your ack.
From: "Steven Rostedt (Google)"
I'm working on restructuring the __string* macros so that it doesn't need
to recalculate the string twice. That is, it will save it off when
processing __string() and the __assign_str() will not need to do the work
again as it currently does.
On Wed, 21 Feb 2024 09:57:03 -0800
Vilas Bhat wrote:
> > You could do what everyone else does:
> >
> > #define RPM_STATUS_STRINGS \
> > EM( RPM_INVALID, "RPM_INVALID" )\
> > EM( RPM_ACTIVE, "RPM_ACTIVE" ) \
> > EM( RPM_RESUMING,
On Wed, 21 Feb 2024 16:41:10 +
Vilas Bhat wrote:
> diff --git a/include/trace/events/rpm.h b/include/trace/events/rpm.h
> index 3c716214dab1..f1dc4e95dbce 100644
> --- a/include/trace/events/rpm.h
> +++ b/include/trace/events/rpm.h
> @@ -101,6 +101,42 @@ TRACE_EVENT(rpm_return_int,
>
On Wed, 14 Feb 2024 17:50:44 +
Beau Belgrave wrote:
> Currently user_events supports 1 event with the same name and must have
> the exact same format when referenced by multiple programs. This opens
> an opportunity for malicious or poorly thought through programs to
> create events that
On Wed, 14 Feb 2024 17:50:43 +
Beau Belgrave wrote:
So the patches look good, but since I gave you some updates, I'm now going
to go through "nits". Like grammar and such ;-)
> The current code for finding and deleting events assumes that there will
> never be cases when user_events are
On Wed, 14 Feb 2024 17:50:44 +
Beau Belgrave wrote:
> +static char *user_event_group_system_multi_name(void)
> +{
> + char *system_name;
> + int len = sizeof(USER_EVENTS_MULTI_SYSTEM) + 1;
FYI, the sizeof() will include the "\0" so no need for "+ 1", but I don't
think this matters
On Fri, 2 Feb 2024 08:33:38 +
Metin Kaya wrote:
> Add sched_[start, finish]_task_selection trace events to measure the
> latency of PE patches in task selection.
>
> Moreover, introduce trace events for interesting events in PE:
> 1. sched_pe_enqueue_sleeping_task: a task gets enqueued on
So add trace point strings for the user space tools to map strings
> > properly.
> >
> > Signed-off-by: Krishna chaitanya chundru
>
> Reported-by: Steven Rostedt
Suggested-by: may be more accurate?
-- Steve
> Reviewed-by: Manivannan Sadhasivam
> > kswapd0 super_cache_scan.cfi_jt 0 2247 8524 1024
> > 7 kswapd0 super_cache_scan.cfi_jt 23670 0 0
> >
> > For this, add the new tracer to shrink_active_list/shrink_ina
On Tue, 20 Feb 2024 10:40:23 -0500
Steven Rostedt wrote:
> > Try resetting the info->add_timestamp flags to add_ts_default on goto again
> > within __rb_reserve_next().
> >
>
> I was looking at that too, but I don't know how it will make a difference.
>
> N
On Tue, 20 Feb 2024 09:50:13 -0500
Mathieu Desnoyers wrote:
> On 2024-02-20 09:19, Steven Rostedt wrote:
> > On Mon, 19 Feb 2024 18:20:32 -0500
> > Steven Rostedt wrote:
> >
> >> Instead of using local_add_return() to reserve the ring buffer data,
> >
From: "Steven Rostedt (Google)"
The data on the subbuffer is measured by a write variable that also
contains status flags. The counter is just 20 bits in length. If the
subbuffer is bigger than the counter can represent, it will fail.
Make sure that the subbuffer can not be set to greater than t
On Mon, 19 Feb 2024 18:20:32 -0500
Steven Rostedt wrote:
> Instead of using local_add_return() to reserve the ring buffer data,
> Mathieu Desnoyers suggested using local_cmpxchg(). This would simplify the
> reservation with the time keeping code.
>
> Although, it does not get ri
From: "Steven Rostedt (Google)"
The code that handles saved_cmdlines is split between the trace.c file and
the trace_sched_switch.c. There's some history to this. The
trace_sched_switch.c was originally created to handle the sched_switch
tracer that was deprecated due to sched_switch t
From: "Steven Rostedt (Google)"
In preparation of moving the saved_cmdlines logic out of trace.c and into
trace_sched_switch.c, replace the open coded manipulation of tgid_map in
set_tracer_flag() into a helper function trace_alloc_tgid_map() so that it
can be ea
From: "Steven Rostedt (Google)"
The saved_cmdlines have three arrays for mapping PIDs to COMMs:
- map_pid_to_cmdline[]
- map_cmdline_to_pid[]
- saved_cmdlines
The map_pid_to_cmdline[] is PID_MAX_DEFAULT in size and holds the index
into the other arrays. The map_cmdline_to_pid[] is
/20240216210047.584712...@goodmis.org/
- The map_cmdline_to_pid field was moved into the pages allocated of the
structure and that replaced the kmalloc. But that field still had
kfree() called on it in the freeing of the structure which caused
a memory corruption.
Steven Rostedt (Google) (3
From: "Steven Rostedt (Google)"
Instead of using local_add_return() to reserve the ring buffer data,
Mathieu Desnoyers suggested using local_cmpxchg(). This would simplify the
reservation with the time keeping code.
Although, it does not get rid of the double time stamps (be
On Mon, 19 Feb 2024 17:30:03 -0500
Steven Rostedt wrote:
> - /*C*/ write = local_add_return(info->length, &tail_page->write);
> + /*C*/ if (!local_try_cmpxchg(&tail_page->write, &w, w +
> info->length)) {
> + if (info.add_timestamp & (RB_ADD_STAMP_FO
From: "Steven Rostedt (Google)"
Instead of using local_add_return() to reserve the ring buffer data,
Mathieu Desnoyers suggested using local_cmpxchg(). This would simplify the
reservation with the time keeping code.
Although, it does not get rid of the double time stamps (be
On Mon, 19 Feb 2024 13:17:54 -0500
Steven Rostedt wrote:
> On Tue, 13 Feb 2024 11:49:42 +
> Vincent Donnefort wrote:
>
> > @@ -9678,7 +9739,9 @@ trace_array_create_systems(const char *name, const
> > char *systems)
> > raw_spin_lock_init(&tr->start_loc
On Tue, 13 Feb 2024 11:49:42 +
Vincent Donnefort wrote:
> @@ -9678,7 +9739,9 @@ trace_array_create_systems(const char *name, const char
> *systems)
> raw_spin_lock_init(&tr->start_lock);
>
> tr->max_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
> -
> +#ifdef
On Wed, 7 Feb 2024 00:11:34 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Masami Hiramatsu (Google)
>
> Add a new entry handler to fgraph_ops as 'entryregfunc' which takes
> parent_ip and ftrace_regs. Note that the 'entryfunc' and 'entryregfunc'
> are mutually exclusive. You can set only
saved_cmdlines update to consolidate memory.
The second patch removes some open coded saved_cmdlines logic in trace.c
into a helper function to make it a cleaner move.
The last patch simply moves the code from trace.c into trace_sched_switch.c
Steven Rostedt (Google) (3):
tracing: Have saved
On Fri, 16 Feb 2024 22:09:02 +0900
Masami Hiramatsu (Google) wrote:
> On Thu, 15 Feb 2024 11:11:34 -0500
> Steven Rostedt wrote:
>
> > On Wed, 7 Feb 2024 00:12:40 +0900
> > "Masami Hiramatsu (Google)" wrote:
> >
> > > From: Masami Hiramatsu
On Wed, 7 Feb 2024 00:12:40 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Masami Hiramatsu (Google)
>
> Add ftrace_partial_regs() which converts the ftrace_regs to pt_regs.
> If the architecture defines its own ftrace_regs, this copies partial
> registers to pt_regs and returns it. If not,
On Wed, 7 Feb 2024 00:12:06 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Masami Hiramatsu (Google)
>
> Enable CONFIG_HAVE_FUNCTION_GRAPH_FREGS on arm64. Note that this
> depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS which is enabled if the
> compiler supports "-fpatchable-function-entry=2". If
On Wed, 7 Feb 2024 00:11:56 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Masami Hiramatsu (Google)
>
> Support HAVE_FUNCTION_GRAPH_FREGS on x86-64, which saves ftrace_regs
> on the stack in ftrace_graph return trampoline so that the callbacks
> can access registers via ftrace_regs APIs.
>
On Wed, 7 Feb 2024 00:11:44 +0900
"Masami Hiramatsu (Google)" wrote:
> diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
> index c88bf47f46da..a061f8832b20 100644
> --- a/arch/x86/include/asm/ftrace.h
> +++ b/arch/x86/include/asm/ftrace.h
> @@ -72,6 +72,8 @@
On Wed, 7 Feb 2024 00:11:44 +0900
"Masami Hiramatsu (Google)" wrote:
> diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
> index 61c541c36596..308b3bec01b1 100644
> --- a/kernel/trace/Kconfig
> +++ b/kernel/trace/Kconfig
> @@ -34,6 +34,9 @@ config HAVE_FUNCTION_GRAPH_TRACER
> config
On Wed, 7 Feb 2024 00:11:44 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Masami Hiramatsu (Google)
>
> Add a new return handler to fgraph_ops as 'retregfunc' which takes
> parent_ip and ftrace_regs instead of ftrace_graph_ret. This handler
> is available only if the arch support
On Wed, 7 Feb 2024 00:11:34 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Masami Hiramatsu (Google)
>
> Add a new entry handler to fgraph_ops as 'entryregfunc' which takes
> parent_ip and ftrace_regs. Note that the 'entryfunc' and 'entryregfunc'
> are mutually exclusive. You can set only
On Wed, 7 Feb 2024 00:11:22 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Steven Rostedt (VMware)
>
> Add boot up selftest that passes variables from a function entry to a
> function exit, and make sure that they do get passed around.
>
> Signed-off-by: Steve
On Wed, 7 Feb 2024 00:11:12 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Masami Hiramatsu (Google)
>
> Improve the push and data reserve operations on the shadow stack for
> several sequential interrupts.
>
> To push a ret_stack or data entry on the shadow stack, we need to
> prepare an index
On Thu, 15 Feb 2024 08:45:52 +0900
Masami Hiramatsu (Google) wrote:
> > Hmm, the above is a fast path. I wonder if we should add a patch to make
> > that into:
> >
> > if (unlikely(size_bytes & (sizeof(long) - 1)))
> > data_size = DIV_ROUND_UP(size_bytes, sizeof(long));
> >
On Wed, 14 Feb 2024 14:19:19 -0800
Ira Weiny wrote:
> > > Jonathan Cameron wrote:
> > >
> > > > So I'm thinking this is a won't fix - wait for the printk rework to
> > > > land and
> > > > assume this will be resolved as well?
> > >
> > > That pretty much sums up what I was about to
On Wed, 7 Feb 2024 00:11:01 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Ste
> +/**
> + * fgraph_reserve_data - Reserve storage on the task's ret_stack
> + * @idx: The index of fgraph_array
> + * @size_bytes: The size in bytes to reserve
> + *
> + * Reserves space of up to
On Wed, 7 Feb 2024 00:10:04 +0900
"Masami Hiramatsu (Google)" wrote:
> diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
> index ae42de909845..323a74623543 100644
> --- a/kernel/trace/fgraph.c
> +++ b/kernel/trace/fgraph.c
> @@ -99,10 +99,44 @@ enum {
>
On Wed, 14 Feb 2024 17:30:40 +0200
Kalle Valo wrote:
> Although the patch didn't apply for me as in my tree the functions are
> in kernel/trace/trace.c. I don't know what happened so as a quick hack I
> just manually added the three lines to my version of trace.c. Let me
> know if there's a git
From: "Steven Rostedt (Google)"
The allocation of the struct saved_cmdlines_buffer structure changed from:
s = kmalloc(sizeof(*s), GFP_KERNEL);
s->saved_cmdlines = kmalloc_array(TASK_COMM_LEN, val, GFP_KERNEL);
to:
orig_size = sizeof(*s) + val *
On Wed, 14 Feb 2024 12:11:53 +
Jonathan Cameron wrote:
> So I'm thinking this is a won't fix - wait for the printk rework to land and
> assume this will be resolved as well?
That pretty much sums up what I was about to say ;-)
tp_printk is more of a hack and should be used sparingly. With
On Wed, 14 Feb 2024 14:50:56 +0200
Kalle Valo wrote:
> Hi Steven,
>
> I upgraded our ath11k test setup to v6.8-rc4 and noticed a new kmemleak
> warning in the log:
Thanks for the report.
>
> unreferenced object 0x8881010c8000 (size 32760):
> comm "swapper", pid 0, jiffies 4294667296
>
On Wed, 7 Feb 2024 00:09:21 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Steven Rostedt (VMware)
>
> Pass the fgraph_ops structure to the function graph callbacks. This will
> allow callbacks to add a descriptor to a fgraph_ops private field that will
> be ad
On Tue, 13 Feb 2024 15:53:09 -0500
Steven Rostedt wrote:
> On Tue, 13 Feb 2024 11:49:41 +
> Vincent Donnefort wrote:
>
> Did you test with lockdep?
>
> > +static int __rb_inc_dec_mapped(struct trace_buffer *buffer,
> > + struct ring
On Tue, 13 Feb 2024 11:49:41 +
Vincent Donnefort wrote:
Did you test with lockdep?
> +static int __rb_inc_dec_mapped(struct trace_buffer *buffer,
> +struct ring_buffer_per_cpu *cpu_buffer,
> +bool inc)
> +{
> + unsigned long flags;
From: "Steven Rostedt (Google)"
The saved_cmdlines have three arrays for mapping PIDs to COMMs:
- map_pid_to_cmdline[]
- map_cmdline_to_pid[]
- saved_cmdlines
The map_pid_to_cmdline[] is PID_MAX_DEFAULT in size and holds the index
into the other arrays. The map_cmdline_to_pid[] is
On Mon, 12 Feb 2024 15:39:03 -0800
Tim Chen wrote:
> > diff --git a/kernel/trace/trace_sched_switch.c
> > b/kernel/trace/trace_sched_switch.c
> > index e4fbcc3bede5..210c74dcd016 100644
> > --- a/kernel/trace/trace_sched_switch.c
> > +++ b/kernel/trace/trace_sched_switch.c
> > @@ -201,7 +201,7
On Mon, 12 Feb 2024 23:54:00 +0100
Mete Durlu wrote:
> On 2/12/24 19:53, Steven Rostedt wrote:
> >
> > Right, it will definitely force the race window to go away.
> >
> > Can you still trigger this issue with just Sven's patch and not this
> > change?
>
On Mon, 12 Feb 2024 14:08:29 -0800
Tim Chen wrote:
> > Now, instead of saving only 128 comms by default, by using this wasted
> > space at the end of the structure it can save over 8000 comms and even
> > saves space by removing the need for allocating the other array.
>
> The change looks
On Mon, 12 Feb 2024 10:44:26 +
Vincent Donnefort wrote:
> > > static void
> > > rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
> > > {
> > > @@ -5204,6 +5227,9 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
> > > cpu_buffer->lost_events = 0;
> > >
On Thu, 8 Feb 2024 11:25:50 +0100
Mete Durlu wrote:
> I have been only able to reliably reproduce this issue when the system
> is under load from stressors. But I am not sure if it can be considered
> as *really stressed*.
>
> system : 8 cpus (4 physical cores)
> load : stress-ng --fanotify 1
On Tue, 6 Feb 2024 10:02:05 +0530
Krishna chaitanya chundru wrote:
> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> index abb561db9ae1..2d38f6005da6 100644
> --- a/drivers/bus/mhi/host/main.c
> +++ b/drivers/bus/mhi/host/main.c
> @@ -15,6 +15,7 @@
> #include
>
On Tue, 6 Feb 2024 15:01:10 +0530
Manivannan Sadhasivam wrote:
> > Bot will check sparse warnings/errors mostly. But these checkpatch issues
> > can be
> > fixed easily. If you don't do it now, then someone will send a patch for it
> > later.
> >
>
> Hmm, seems like we should ignore these
On Tue, 13 Feb 2024 00:40:38 +0900
Masami Hiramatsu (Google) wrote:
> > Now, instead of saving only 128 comms by default, by using this wasted
> > space at the end of the structure it can save over 8000 comms and even
> > saves space by removing the need for allocating the other array.
>
>
On Fri, 9 Feb 2024 16:34:47 +
Vincent Donnefort wrote:
> It is now possible to mmap() a ring-buffer to stream its content. Add
> some documentation and a code example.
>
> Signed-off-by: Vincent Donnefort
>
> diff --git a/Documentation/trace/index.rst b/Documentation/trace/index.rst
>
On Fri, 9 Feb 2024 16:34:46 +
Vincent Donnefort wrote:
> +static void tracing_buffers_mmap_close(struct vm_area_struct *vma)
> +{
> + struct ftrace_buffer_info *info = vma->vm_file->private_data;
> + struct trace_iterator *iter = &info->iter;
> + struct trace_array __maybe_unused *tr
On Fri, 9 Feb 2024 16:34:44 +
Vincent Donnefort wrote:
I have some comment updates, but I also notice a need to change the
code slightly. Nothing major, but enough to perhaps have a v17.
>
> diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
> index
On Fri, 9 Feb 2024 10:21:58 +
Vincent Donnefort wrote:
> +static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
> +{
> + struct trace_buffer_meta *meta = cpu_buffer->meta_page;
> +
> + WRITE_ONCE(meta->reader.read, cpu_buffer->reader_page->read);
> +
From: "Steven Rostedt (Google)"
While looking at improving the saved_cmdlines cache I found a huge amount
of wasted memory that should be used for the cmdlines.
The tracing data saves pids during the trace. At sched switch, if a trace
occurred, it will save the comm of the tas
the tracepoint.
>
> The second patch adds the microcode field (Microcode Revision) to the
> tracepoint.
From a tracing POV only:
Reviewed-by: Steven Rostedt (Google)
-- Steve