Re: [RFC/PATCH] ftrace: Reduce size of function graph entries
On Sat, Jun 25, 2016 at 2:29 AM, Steven Rostedt wrote:
> On Sat, 25 Jun 2016 01:15:34 +0900 Namhyung Kim wrote:
>
>> On Fri, Jun 24, 2016 at 12:04:40PM -0400, Steven Rostedt wrote:
>> > On Fri, 24 Jun 2016 15:35:44 +0900 Namhyung Kim wrote:
>> >
>> > > > > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
>> > > > > index dea12a6e413b..35c523ba5c59 100644
>> > > > > --- a/include/linux/ftrace.h
>> > > > > +++ b/include/linux/ftrace.h
>> > > > > @@ -751,25 +751,33 @@ extern void ftrace_init(void);
>> > > > >  static inline void ftrace_init(void) { }
>> > > > >  #endif
>> > > > >
>> > > > > +#ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
>> > > > > +# define FTRACE_ALIGNMENT	4
>> > > > > +#else
>> > > > > +# define FTRACE_ALIGNMENT	8
>> > > > > +#endif
>>
>> As far as I can see, the ring buffer has the following code in ring_buffer.c:
>>
>> #define RB_ALIGNMENT		4U
>> #define RB_MAX_SMALL_DATA	(RB_ALIGNMENT * RINGBUF_TYPE_DATA_TYPE_LEN_MAX)
>> #define RB_EVNT_MIN_SIZE	8U	/* two 32bit words */
>>
>> #ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
>> # define RB_FORCE_8BYTE_ALIGNMENT	0
>> # define RB_ARCH_ALIGNMENT		RB_ALIGNMENT
>> #else
>> # define RB_FORCE_8BYTE_ALIGNMENT	1
>> # define RB_ARCH_ALIGNMENT		8U
>> #endif
>>
>> #define RB_ALIGN_DATA	__aligned(RB_ARCH_ALIGNMENT)
>
> Right, what I meant was that we should just define FTRACE_ALIGNMENT
> unconditionally to 4. If CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set,
> it will add the buffered space regardless.
>
> You already moved "overrun", I don't see anything that would be out of
> alignment if the structure itself is aligned.

In that case, if CONFIG_HAVE_64BIT_ALIGNED_ACCESS is set, the ring buffer
is 8-byte aligned but the struct is only 4-byte aligned, right?  Note that
the function graph tracer saves the data in a local variable (of the
struct type) first and copies it to the ring buffer later.  Wouldn't that
be a problem?

Thanks,
Namhyung
Re: [RFC/PATCH] ftrace: Reduce size of function graph entries
On Sat, 25 Jun 2016 01:15:34 +0900 Namhyung Kim wrote:

> On Fri, Jun 24, 2016 at 12:04:40PM -0400, Steven Rostedt wrote:
> > On Fri, 24 Jun 2016 15:35:44 +0900 Namhyung Kim wrote:
> >
> > > > > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> > > > > index dea12a6e413b..35c523ba5c59 100644
> > > > > --- a/include/linux/ftrace.h
> > > > > +++ b/include/linux/ftrace.h
> > > > > @@ -751,25 +751,33 @@ extern void ftrace_init(void);
> > > > >  static inline void ftrace_init(void) { }
> > > > >  #endif
> > > > >
> > > > > +#ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
> > > > > +# define FTRACE_ALIGNMENT	4
> > > > > +#else
> > > > > +# define FTRACE_ALIGNMENT	8
> > > > > +#endif
>
> As far as I can see, the ring buffer has the following code in ring_buffer.c:
>
> #define RB_ALIGNMENT		4U
> #define RB_MAX_SMALL_DATA	(RB_ALIGNMENT * RINGBUF_TYPE_DATA_TYPE_LEN_MAX)
> #define RB_EVNT_MIN_SIZE	8U	/* two 32bit words */
>
> #ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
> # define RB_FORCE_8BYTE_ALIGNMENT	0
> # define RB_ARCH_ALIGNMENT		RB_ALIGNMENT
> #else
> # define RB_FORCE_8BYTE_ALIGNMENT	1
> # define RB_ARCH_ALIGNMENT		8U
> #endif
>
> #define RB_ALIGN_DATA	__aligned(RB_ARCH_ALIGNMENT)

Right, what I meant was that we should just define FTRACE_ALIGNMENT
unconditionally to 4.  If CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set,
it will add the buffered space regardless.

You already moved "overrun"; I don't see anything that would be out of
alignment if the structure itself is aligned.

-- Steve
Re: [RFC/PATCH] ftrace: Reduce size of function graph entries
On Fri, Jun 24, 2016 at 12:04:40PM -0400, Steven Rostedt wrote:
> On Fri, 24 Jun 2016 15:35:44 +0900 Namhyung Kim wrote:
>
> > > > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> > > > index dea12a6e413b..35c523ba5c59 100644
> > > > --- a/include/linux/ftrace.h
> > > > +++ b/include/linux/ftrace.h
> > > > @@ -751,25 +751,33 @@ extern void ftrace_init(void);
> > > >  static inline void ftrace_init(void) { }
> > > >  #endif
> > > >
> > > > +#ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
> > > > +# define FTRACE_ALIGNMENT	4
> > > > +#else
> > > > +# define FTRACE_ALIGNMENT	8
> > > > +#endif
> > >
> > > Swap the above.  Having the #ifndef is more confusing to understand
> > > than to have a #ifdef.
> >
> > Will do.
> >
> > > > +
> > > > +#define FTRACE_ALIGN_DATA	__attribute__((packed, aligned(FTRACE_ALIGNMENT)))
> > >
> > > Do we really need to pack it?  I mean, just get rid of the hole (like
> > > you did with the movement of the overrun) and shouldn't the array be
> > > aligned normally without holes, if the arch can support it?  Doesn't
> > > gcc take care of that?
> >
> > I'm not sure I understood you correctly.  AFAIK the size of a struct is
> > a multiple of its alignment unit, and the gcc manual says the alignment
> > attribute can only be increased unless 'packed' is used as well.
>
> Ah, I see you are trying to get the recorded size in the array down to
> a 4 byte alignment (due to the "int depth"), instead of adding the 4
> bytes to the buffer.
>
> Hmm, I'm wondering if we need the ifdef above, as the ring buffer itself
> will force the 8 byte alignment of structures added to the buffer.

As far as I can see, the ring buffer has the following code in ring_buffer.c:

#define RB_ALIGNMENT		4U
#define RB_MAX_SMALL_DATA	(RB_ALIGNMENT * RINGBUF_TYPE_DATA_TYPE_LEN_MAX)
#define RB_EVNT_MIN_SIZE	8U	/* two 32bit words */

#ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
# define RB_FORCE_8BYTE_ALIGNMENT	0
# define RB_ARCH_ALIGNMENT		RB_ALIGNMENT
#else
# define RB_FORCE_8BYTE_ALIGNMENT	1
# define RB_ARCH_ALIGNMENT		8U
#endif

#define RB_ALIGN_DATA	__aligned(RB_ARCH_ALIGNMENT)

Thanks,
Namhyung
Re: [RFC/PATCH] ftrace: Reduce size of function graph entries
On Fri, 24 Jun 2016 15:35:44 +0900 Namhyung Kim wrote:

> > > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> > > index dea12a6e413b..35c523ba5c59 100644
> > > --- a/include/linux/ftrace.h
> > > +++ b/include/linux/ftrace.h
> > > @@ -751,25 +751,33 @@ extern void ftrace_init(void);
> > >  static inline void ftrace_init(void) { }
> > >  #endif
> > >
> > > +#ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
> > > +# define FTRACE_ALIGNMENT	4
> > > +#else
> > > +# define FTRACE_ALIGNMENT	8
> > > +#endif
> >
> > Swap the above.  Having the #ifndef is more confusing to understand
> > than to have a #ifdef.
>
> Will do.
>
> > > +
> > > +#define FTRACE_ALIGN_DATA	__attribute__((packed, aligned(FTRACE_ALIGNMENT)))
> >
> > Do we really need to pack it?  I mean, just get rid of the hole (like
> > you did with the movement of the overrun) and shouldn't the array be
> > aligned normally without holes, if the arch can support it?  Doesn't
> > gcc take care of that?
>
> I'm not sure I understood you correctly.  AFAIK the size of a struct is
> a multiple of its alignment unit, and the gcc manual says the alignment
> attribute can only be increased unless 'packed' is used as well.

Ah, I see you are trying to get the recorded size in the array down to
a 4 byte alignment (due to the "int depth"), instead of adding the 4
bytes to the buffer.

Hmm, I'm wondering if we need the ifdef above, as the ring buffer itself
will force the 8 byte alignment of structures added to the buffer.

-- Steve
Re: [RFC/PATCH] ftrace: Reduce size of function graph entries
Hi Steve,

On Thu, Jun 23, 2016 at 09:37:40AM -0400, Steven Rostedt wrote:
> On Mon, 23 May 2016 00:26:15 +0900 Namhyung Kim wrote:
>
> > Currently ftrace_graph_ent{,_entry} and ftrace_graph_ret{,_entry} struct
> > can have padding bytes at the end due to alignment in 64-bit data type.
> > As these data are recorded so frequently, those paddings waste
> > non-negligible space.  As some archs can have efficient unaligned
> > accesses, reducing the alignment can save ~10% of data size:
> >
> >   ftrace_graph_ent_entry:  24 -> 20
> >   ftrace_graph_ret_entry:  48 -> 44
> >
> > Also I moved the 'overrun' field in struct ftrace_graph_ret to minimize
> > the padding.  Tested on x86_64 only.
>
> I'd like to see this tested on other archs too.
>
> [ Added linux-arch so maybe other arch maintainers may know about this ]

Thanks, it'd be great if anyone could try this.  I think it doesn't
affect most (64-bit) archs since only x86_64, arm64 and powerpc define
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS (and it turns off
CONFIG_HAVE_64BIT_ALIGNED_ACCESS).  So other archs still have the (same)
8-byte alignment requirement.

Do 32-bit archs really require 64-bit alignment for unsigned long long?
IOW, is it an alignment violation to put it on a 32-bit boundary?

> > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> > index dea12a6e413b..35c523ba5c59 100644
> > --- a/include/linux/ftrace.h
> > +++ b/include/linux/ftrace.h
> > @@ -751,25 +751,33 @@ extern void ftrace_init(void);
> >  static inline void ftrace_init(void) { }
> >  #endif
> >
> > +#ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
> > +# define FTRACE_ALIGNMENT	4
> > +#else
> > +# define FTRACE_ALIGNMENT	8
> > +#endif
>
> Swap the above.  Having the #ifndef is more confusing to understand
> than to have a #ifdef.

Will do.

> > +
> > +#define FTRACE_ALIGN_DATA	__attribute__((packed, aligned(FTRACE_ALIGNMENT)))
>
> Do we really need to pack it?  I mean, just get rid of the hole (like
> you did with the movement of the overrun) and shouldn't the array be
> aligned normally without holes, if the arch can support it?  Doesn't
> gcc take care of that?

I'm not sure I understood you correctly.  AFAIK the size of a struct is
a multiple of its alignment unit, and the gcc manual says the alignment
attribute can only be increased unless 'packed' is used as well.

Thanks,
Namhyung
Re: [RFC/PATCH] ftrace: Reduce size of function graph entries
On Mon, 23 May 2016 00:26:15 +0900 Namhyung Kim wrote:

> Currently ftrace_graph_ent{,_entry} and ftrace_graph_ret{,_entry} struct
> can have padding bytes at the end due to alignment in 64-bit data type.
> As these data are recorded so frequently, those paddings waste
> non-negligible space.  As some archs can have efficient unaligned
> accesses, reducing the alignment can save ~10% of data size:
>
>   ftrace_graph_ent_entry:  24 -> 20
>   ftrace_graph_ret_entry:  48 -> 44
>
> Also I moved the 'overrun' field in struct ftrace_graph_ret to minimize
> the padding.  Tested on x86_64 only.

I'd like to see this tested on other archs too.

[ Added linux-arch so maybe other arch maintainers may know about this ]

> Signed-off-by: Namhyung Kim
> ---
>  include/linux/ftrace.h       | 16
>  kernel/trace/trace.h         | 11 +++
>  kernel/trace/trace_entries.h |  4 ++--
>  3 files changed, 25 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index dea12a6e413b..35c523ba5c59 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -751,25 +751,33 @@ extern void ftrace_init(void);
>  static inline void ftrace_init(void) { }
>  #endif
>
> +#ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
> +# define FTRACE_ALIGNMENT	4
> +#else
> +# define FTRACE_ALIGNMENT	8
> +#endif

Swap the above.  Having the #ifndef is more confusing to understand than
to have a #ifdef.

> +
> +#define FTRACE_ALIGN_DATA	__attribute__((packed, aligned(FTRACE_ALIGNMENT)))

Do we really need to pack it?  I mean, just get rid of the hole (like
you did with the movement of the overrun) and shouldn't the array be
aligned normally without holes, if the arch can support it?  Doesn't gcc
take care of that?

-- Steve
Re: [RFC/PATCH] ftrace: Reduce size of function graph entries
On Wed, 22 Jun 2016 22:58:43 +0900 Namhyung Kim wrote:

> Ping!

Ug, this got missed twice (still marked unread in my inbox).  I'll take
a look at this today.

Thanks!

-- Steve
Re: [RFC/PATCH] ftrace: Reduce size of function graph entries
Ping!

On Tue, Jun 7, 2016 at 10:49 PM, Namhyung Kim wrote:
> Hi Steve,
>
> Could you please take a look at this?
>
> Thanks,
> Namhyung
>
> On Mon, May 23, 2016 at 12:26 AM, Namhyung Kim wrote:
>> Currently ftrace_graph_ent{,_entry} and ftrace_graph_ret{,_entry} struct
>> can have padding bytes at the end due to alignment in 64-bit data type.
>> As these data are recorded so frequently, those paddings waste
>> non-negligible space.  As some archs can have efficient unaligned
>> accesses, reducing the alignment can save ~10% of data size:
>>
>>   ftrace_graph_ent_entry:  24 -> 20
>>   ftrace_graph_ret_entry:  48 -> 44
>>
>> Also I moved the 'overrun' field in struct ftrace_graph_ret to minimize
>> the padding.  Tested on x86_64 only.
>>
>> Signed-off-by: Namhyung Kim

--
Thanks,
Namhyung
Re: [RFC/PATCH] ftrace: Reduce size of function graph entries
Hi Steve,

Could you please take a look at this?

Thanks,
Namhyung

On Mon, May 23, 2016 at 12:26 AM, Namhyung Kim wrote:
> Currently ftrace_graph_ent{,_entry} and ftrace_graph_ret{,_entry} struct
> can have padding bytes at the end due to alignment in 64-bit data type.
> As these data are recorded so frequently, those paddings waste
> non-negligible space.  As some archs can have efficient unaligned
> accesses, reducing the alignment can save ~10% of data size:
>
>   ftrace_graph_ent_entry:  24 -> 20
>   ftrace_graph_ret_entry:  48 -> 44
>
> Also I moved the 'overrun' field in struct ftrace_graph_ret to minimize
> the padding.  Tested on x86_64 only.
>
> Signed-off-by: Namhyung Kim

--
Thanks,
Namhyung
[RFC/PATCH] ftrace: Reduce size of function graph entries
Currently ftrace_graph_ent{,_entry} and ftrace_graph_ret{,_entry} struct
can have padding bytes at the end due to alignment in 64-bit data type.
As these data are recorded so frequently, those paddings waste
non-negligible space.  As some archs can have efficient unaligned
accesses, reducing the alignment can save ~10% of data size:

  ftrace_graph_ent_entry:  24 -> 20
  ftrace_graph_ret_entry:  48 -> 44

Also I moved the 'overrun' field in struct ftrace_graph_ret to minimize
the padding.  Tested on x86_64 only.

Signed-off-by: Namhyung Kim
---
 include/linux/ftrace.h       | 16
 kernel/trace/trace.h         | 11 +++
 kernel/trace/trace_entries.h |  4 ++--
 3 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index dea12a6e413b..35c523ba5c59 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -751,25 +751,33 @@ extern void ftrace_init(void);
 static inline void ftrace_init(void) { }
 #endif
 
+#ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
+# define FTRACE_ALIGNMENT	4
+#else
+# define FTRACE_ALIGNMENT	8
+#endif
+
+#define FTRACE_ALIGN_DATA	__attribute__((packed, aligned(FTRACE_ALIGNMENT)))
+
 /*
  * Structure that defines an entry function trace.
  */
 struct ftrace_graph_ent {
 	unsigned long func; /* Current function */
 	int depth;
-};
+} FTRACE_ALIGN_DATA;
 
 /*
  * Structure that defines a return function trace.
  */
 struct ftrace_graph_ret {
 	unsigned long func; /* Current function */
-	unsigned long long calltime;
-	unsigned long long rettime;
 	/* Number of functions that overran the depth limit for current task */
 	unsigned long overrun;
+	unsigned long long calltime;
+	unsigned long long rettime;
 	int depth;
-};
+} FTRACE_ALIGN_DATA;
 
 /* Type of the callback handlers for tracing function graph*/
 typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *); /* return */
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 5167c366d6b7..d2dd49ca55ee 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -80,6 +80,12 @@ enum trace_type {
 	FTRACE_ENTRY(name, struct_name, id, PARAMS(tstruct), PARAMS(print), \
 		     filter)
 
+#undef FTRACE_ENTRY_PACKED
+#define FTRACE_ENTRY_PACKED(name, struct_name, id, tstruct, print,	\
+			    filter)					\
+	FTRACE_ENTRY(name, struct_name, id, PARAMS(tstruct), PARAMS(print), \
+		     filter) FTRACE_ALIGN_DATA
+
 #include "trace_entries.h"
 
 /*
@@ -1600,6 +1606,11 @@ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled);
 #define FTRACE_ENTRY_DUP(call, struct_name, id, tstruct, print, filter)	\
 	FTRACE_ENTRY(call, struct_name, id, PARAMS(tstruct), PARAMS(print), \
 		     filter)
+#undef FTRACE_ENTRY_PACKED
+#define FTRACE_ENTRY_PACKED(call, struct_name, id, tstruct, print, filter) \
+	FTRACE_ENTRY(call, struct_name, id, PARAMS(tstruct), PARAMS(print), \
+		     filter)
+
 #include "trace_entries.h"
 
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_FUNCTION_TRACER)
diff --git a/kernel/trace/trace_entries.h b/kernel/trace/trace_entries.h
index ee7b94a4810a..5c30efcda5e6 100644
--- a/kernel/trace/trace_entries.h
+++ b/kernel/trace/trace_entries.h
@@ -72,7 +72,7 @@ FTRACE_ENTRY_REG(function, ftrace_entry,
 );
 
 /* Function call entry */
-FTRACE_ENTRY(funcgraph_entry, ftrace_graph_ent_entry,
+FTRACE_ENTRY_PACKED(funcgraph_entry, ftrace_graph_ent_entry,
 
 	TRACE_GRAPH_ENT,
 
@@ -88,7 +88,7 @@ FTRACE_ENTRY(funcgraph_entry, ftrace_graph_ent_entry,
 );
 
 /* Function return entry */
-FTRACE_ENTRY(funcgraph_exit, ftrace_graph_ret_entry,
+FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
 
 	TRACE_GRAPH_RET,
 
-- 
2.8.0