On Wed, 2016-04-20 at 15:13 -0400, Paul Gortmaker wrote:
> The recently added Kconfig controlling compilation of this code is:
>
> lib/Kconfig:config SG_POOL
> lib/Kconfig:def_bool n
>
> ...meaning that it currently is not being built as a module by anyone.
>
> Let's remove the modular code
Linux 4.6-rc1 (2016-03-26 16:03:24 -0700)
are available in the git repository at:
https://github.com/anholt/linux tags/bcm2835-dt-next-2016-04-20
for you to fetch changes up to 896ad420db8d5ec4cc4727b786d15e28eb59b366:
dt/bindings: bcm2835: correct description for DMA-int (2016-04-19
Linux 4.6-rc1 (2016-03-26 16:03:24 -0700)
are available in the git repository at:
https://github.com/anholt/linux tags/bcm2835-defconfig-next-2016-04-20
for you to fetch changes up to 3652bb35abf6ee11333cbec1d2855c1c0f9f6b27:
ARM: bcm2835: Enable NFS root support. (2016-04-04 11:03:30
This patchset aims to start a thread on cross-chip operations in DSA; no need
to spend time on reviewing the details of the code (especially for mv88e6xxx).
So when several switch chips are interconnected, we need to configure them all
to ensure correct hardware switching. We can think about
Instead of allowing any external frame to egress any internal port,
configure the Cross-chip Port VLAN Table (PVT) to forbid that.
When an external source port joins or leaves a bridge crossing this
switch, mask it in the PVT to allow or forbid frames to egress.
Add support for the cross-chip
Expand the Cross-chip Port Based VLAN Table initialization code, and make
sure the "5 Bit Port" bit is cleared.
This commit doesn't make any functional change to the current code.
Signed-off-by: Vivien Didelot
---
drivers/net/dsa/mv88e6xxx.c | 48
When multiple switch chips are chained together, one needs to know about
the bridge membership of others. For instance, switches like Marvell
6352 have a cross-chip port-based VLAN table to allow or forbid cross-chip
frames to egress.
Add a cross_chip_bridge DSA driver function, used to notify a
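The PVT behavior described above can be modeled roughly as follows. This is a hypothetical Python sketch of the idea (per-source egress bitmasks, masked on bridge join/leave), not the mv88e6xxx driver code; the `Pvt` class and its method names are invented for illustration:

```python
# Toy model of a Cross-chip Port VLAN Table (PVT): for each
# (external_device, external_port) source, keep a bitmask of the
# local ports that frames from that source may egress.
class Pvt:
    def __init__(self):
        self.table = {}  # (src_dev, src_port) -> egress-port bitmask

    def init_closed(self, src_dev, src_port):
        # Default policy: forbid all egress for this external source.
        self.table[(src_dev, src_port)] = 0

    def bridge_join(self, src_dev, src_port, local_port):
        # An external port joined a bridge crossing this switch:
        # allow its frames to egress the bridged local port.
        mask = self.table.get((src_dev, src_port), 0)
        self.table[(src_dev, src_port)] = mask | (1 << local_port)

    def bridge_leave(self, src_dev, src_port, local_port):
        # The external port left the bridge: forbid egress again.
        mask = self.table.get((src_dev, src_port), 0)
        self.table[(src_dev, src_port)] = mask & ~(1 << local_port)

    def may_egress(self, src_dev, src_port, local_port):
        mask = self.table.get((src_dev, src_port), 0)
        return bool(mask & (1 << local_port))
```

In the hardware the mask lives in PVT registers rather than a dict, but the join/leave masking logic is the same shape.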
On Wed, Apr 20, 2016 at 8:28 PM, Srinivas Pandruvada
wrote:
> On Wed, 2016-04-20 at 15:59 +0200, Peter Zijlstra wrote:
>> On Sun, Apr 17, 2016 at 03:02:59PM -0700, Srinivas Pandruvada wrote:
>> >
>> > Skylake processor supports a new set of RAPL registers for
>> > controlling
>> > entire SoC
On Sat, Apr 16, 2016 at 03:01:57AM +0300, Giedrius Statkevičius wrote:
> Properly return rv back to the caller in the case of an error in
> parse_arg. In the process remove an unused variable 'out'.
The initial problem if I recall was value being uninitialized. Is that correct?
On 04/20/2016 12:17 AM, Davidlohr Bueso wrote:
locking/pvqspinlock: Robustify init_qspinlock_stat()
Specifically around the debugfs file creation calls,
I have no idea if they could ever possibly fail, but
this is core code (debug aside) so let's at least
check the return value and inform
This patch applies on top of:
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/rfc
---
For qspinlocks on ARM64, we would like to use WFE instead
of purely spinning. Qspinlocks internally have lock
contenders spin on an MCS lock.
Update arch_mcs_spin_lock_contended() such
Hi all,
Updates for stable-security kernels have been released:
- v3.12.58-security
- v3.14.67-security
- v3.18.31-security
- v4.1.22-security
- v4.4.8-security
- v4.5.2-security
They are available at:
From: Michal Hocko
compaction code is doing weird dances between
COMPACT_FOO -> int -> unsigned long
but there doesn't seem to be any reason for that. All functions which
return/use one of those constants are not expecting any other value
so it really makes sense to define an enum for them and
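The "int vs. unsigned long dances" argument can be illustrated with a small sketch. This is a Python stand-in for the proposed C enum (member names follow the COMPACT_FOO constants mentioned in the series; the `try_compact` helper is invented for illustration):

```python
# Sketch: give compaction results a proper enum type instead of
# juggling bare ints, so callers only ever see named constants.
from enum import Enum, auto

class CompactResult(Enum):
    DEFERRED = auto()
    SKIPPED = auto()
    CONTINUE = auto()
    PARTIAL = auto()
    COMPLETE = auto()

def try_compact(zone_busy):
    # Callers receive a CompactResult, never a raw int cast
    # back and forth through unsigned long.
    return CompactResult.DEFERRED if zone_busy else CompactResult.COMPLETE
```

In C the win is the same: the compiler can type-check the constants and warn about unhandled values in a switch.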
From: Michal Hocko
__alloc_pages_direct_compact communicates potential back off by two
variables:
- deferred_compaction tells that the compaction returned
COMPACT_DEFERRED
- contended_compaction is set when there is a contention on
zone->lock resp.
From: Michal Hocko
try_to_compact_pages can currently return COMPACT_SKIPPED even when the
compaction is deferred for some zone just because zone DMA is skipped
in 99% of cases due to watermark checks. This makes COMPACT_DEFERRED
basically unusable for the page allocator as a feedback mechanism.
From: Michal Hocko
"mm: consider compaction feedback also for costly allocation" has
removed the upper bound for the reclaim/compaction retries based on the
number of reclaimed pages for costly orders. While this is desirable
the patch did miss an interaction between reclaim, compaction and
From: Michal Hocko
__alloc_pages_slowpath has traditionally relied on the direct reclaim
and did_some_progress as an indicator that it makes sense to retry
allocation rather than declaring OOM. shrink_zones had to rely on
zone_reclaimable if shrink_zone didn't make any progress to prevent
from a
From: Michal Hocko
should_reclaim_retry will give up retries for higher order allocations
if none of the eligible zones has any requested or higher order pages
available even if we pass the watermark check for order-0. This is done
because there is no guarantee that the reclaimable and currently
From: Michal Hocko
PAGE_ALLOC_COSTLY_ORDER retry logic is mostly handled inside
should_reclaim_retry currently where we decide to not retry after at
least order worth of pages were reclaimed or the watermark check for at
least one zone would succeed after reclaiming all pages if the reclaim
From: Michal Hocko
THP requests skip the direct reclaim if the compaction is either
deferred or contended to reduce stalls which wouldn't help the
allocation success anyway. These checks are ignoring other potential
feedback modes which we have available now.
It clearly doesn't make much sense
From: Michal Hocko
wait_iff_congested has been used to throttle allocator before it retried
another round of direct reclaim to allow the writeback to make some
progress and prevent reclaim from looping over dirty/writeback pages
without making any progress. We used to do congestion_wait before
From: Michal Hocko
while playing with the oom detection rework [1] I have noticed
that my heavy order-9 (hugetlb) load close to OOM ended up in an
endless loop where the reclaim hasn't made any progress but
did_some_progress didn't reflect that and compaction_suitable
was backing off because no
From: Michal Hocko
compaction_result will be used as the primary feedback channel for
compaction users. At the same time try_to_compact_pages (and potentially
others) assume a certain ordering where a more specific feedback takes
precedence. This gets a bit awkward when we have conflicting
From: Michal Hocko
the compiler is complaining after "mm, compaction: change COMPACT_
constants into enum"
mm/compaction.c: In function ‘compact_zone’:
mm/compaction.c:1350:2: warning: enumeration value ‘COMPACT_DEFERRED’ not
handled in switch [-Wswitch]
switch (ret) {
^
Hi,
This is v6 of the series. The previous version was posted [1]. The
code hasn't changed much since then. I have found one long-standing
bug (patch 1) which just got much more severe and visible with this
series. Other than that I have reorganized the series and put the
compaction feedback
From: Michal Hocko
Compaction can provide a wild variation of feedback to the caller. Many
of them are implementation specific and the caller of the compaction
(especially the page allocator) shouldn't be bound to specifics of the
current implementation.
This patch abstracts the feedback into
From: Michal Hocko
COMPACT_COMPLETE now means that compaction and free scanner met. This is
not very useful information if somebody just wants to use this feedback
and make any decisions based on that. The current caller might be a poor
guy who just happened to scan tiny portion of the zone and
On 04/11/2016 02:40 AM, Geert Uytterhoeven wrote:
> According to full-history-linux commit d3794f4fa7c3edc3 ("[PATCH] M68k
> update (part 25)"), port operations are allowed on m68k if CONFIG_ISA is
> defined.
>
> However, commit 153dcc54df826d2f ("[PATCH] mem driver: fix conditional
> on isa i/o
On Sun, Apr 17, 2016 at 03:04:31PM -0500, serge.hal...@ubuntu.com wrote:
> From: Serge Hallyn
>
> We've calculated @len to be the bytes we need for '/..' entries from
> @kn_from to the common ancestor, and calculated @nlen to be the extra
> bytes we need to get from the common ancestor to
Hi Fabio,
On 19-04-2016 08:34, Fabio Estevam wrote:
On Mon, Apr 11, 2016 at 9:25 PM, Sergio Prado
wrote:
+ {
+ pinctrl-names = "default";
+ pinctrl-0 = <_enet>;
+ phy-mode = "rgmii";
+ phy-reset-gpios = < 31 GPIO_ACTIVE_HIGH>;
Are you sure this is really active
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
for-next
Head SHA1: d50c744ecde7ee3ba4d7ffb0e1c55e7a2f6bbc8e
Masami Hiramatsu (3):
kselftests/ftrace: Add event trigger testcases
kselftests/ftrace: Add hist trigger testcases
kselftests/ftrace: Add a
From: Steven Rostedt
Add the infrastructure needed to have the PIDs in set_event_pid to
automatically add PIDs of the children of the tasks that have their PIDs in
set_event_pid. This will also remove PIDs from set_event_pid when a task
exits
This is implemented by adding hooks into the fork
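The fork/exit tracking described above can be modeled in a few lines. This is an illustrative Python sketch, not the kernel implementation; the hook names are invented (the kernel attaches to the sched_process_fork/exit tracepoints):

```python
# Rough model of set_event_pid child tracking: a fork hook adds the
# child's pid when the parent is already traced, and an exit hook
# removes the pid of a task that exits.
traced_pids = set()

def on_fork(parent_pid, child_pid):
    # Children of traced tasks become traced automatically.
    if parent_pid in traced_pids:
        traced_pids.add(child_pid)

def on_exit(pid):
    # A dead task's pid must not linger, or a recycled pid
    # would be traced by accident.
    traced_pids.discard(pid)
```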
On Wed, 2016-04-20 at 12:30 +0200, Peter Zijlstra wrote:
> On Thu, Apr 14, 2016 at 12:13:38AM -0700, Jason Low wrote:
> > Use WFE to avoid most spinning with MCS spinlocks. This is implemented
> > with the new cmpwait() mechanism for comparing and waiting for the MCS
> > locked value to change
From: Tom Zanussi
Allow users to specify multiple trace event fields to use in keys by
allowing multiple fields in the 'keys=' keyword. With this addition,
any unique combination of any of the fields named in the 'keys'
keyword will result in a new entry being added to the hash table.
Link:
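The compound-key behavior can be sketched compactly. This is an illustrative Python model of the idea (field names in the usage are invented), not the tracing_map implementation:

```python
# Model of a compound-key hist trigger: each unique combination of
# the named key fields becomes one table entry whose hitcount is
# bumped once per matching event.
from collections import Counter

def hist(events, keys):
    table = Counter()
    for ev in events:
        # The tuple of key-field values is the compound key.
        table[tuple(ev[k] for k in keys)] += 1
    return table
```

Two events agreeing on every key field land in the same entry; differing in any one field creates a new entry.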
From: Steven Rostedt
The name "check_ignore_pid" is confusing in trying to figure out if the pid
should be ignored or not. Rename it to "ignore_this_task" which is pretty
straight forward, as a task (not a pid) is passed in, and should if true
should be ignored.
Signed-off-by: Steven Rostedt
From: "Steven Rostedt (Red Hat)"
Add documentation to the ftrace.txt file in Documentation to describe the
event-fork option. Also add the missing "display-graph" option now that it
shows up in the trace_options file (from a previous commit).
Signed-off-by: Steven Rostedt
---
From: Tom Zanussi
Allow users to specify trace event fields to use in aggregated sums
via a new 'vals=' keyword. Before this addition, the only aggregated
sum supported was the implied value 'hitcount'. With this addition,
'hitcount' is also supported as an explicit value field, as is any
From: Steven Rostedt
In order to add the ability to let tasks that are filtered by the events
have their children also be traced on fork (and then not traced on exit),
convert the array into a pid bitmask. Most of the time the number of pids is
only 32768 pids or a 4k bitmask, which is the same
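The array-to-bitmask conversion works out as follows: with the default pid limit of 32768, one bit per pid gives a fixed 4 KiB mask. A minimal Python model of the bit arithmetic (the helper names are invented; the kernel uses its own bitmap helpers):

```python
# Model of the pid bitmask: membership is one bit per pid, so the
# default 32768-pid space fits in a fixed 4096-byte mask instead of
# a growable array of pid values.
PID_MAX = 32768
pid_mask = bytearray(PID_MAX // 8)  # 4096 bytes

def pid_set(pid):
    pid_mask[pid >> 3] |= 1 << (pid & 7)

def pid_clear(pid):
    pid_mask[pid >> 3] &= ~(1 << (pid & 7))

def pid_test(pid):
    return bool(pid_mask[pid >> 3] & (1 << (pid & 7)))
```

Set, clear, and test are all O(1), which is what makes the fork/exit hooks cheap.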
From: "Steven Rostedt (Red Hat)"
The config option for TRACING_MAP has "default n", which is not needed
because the default of configs is 'n'.
Also, since the TRACING_MAP has no config prompt, there's no reason to
include "If in doubt, say N" in the help text.
Fixed a typo in the comments of
From: Tom Zanussi
It's often useful to be able to use a stacktrace as a hash key, for
keeping a count of the number of times a particular call path resulted
in a trace event, for instance. Add a special key named 'stacktrace'
which can be used as key in a 'keys=' param for this purpose:
#
From: Tom Zanussi
If we assume the maximum size for a string field, we don't have to
worry about its position. Since we only allow two keys in a compound
key and having more than one string key in a given compound key
doesn't make much sense anyway, trading a bit of extra space instead
of
From: Tom Zanussi
Named triggers are sets of triggers that share a common set of trigger
data. An example of functionality that could benefit from this type
of capability would be a set of inlined probes that would each
contribute event counts, for example, to a shared counter data
structure.
From: Masami Hiramatsu
Add a test for log2 modifier of hist trigger in hist_mod.tc.
Here is the test result.
# ./ftracetest test.d/trigger/trigger-hist-mod.tc
=== Ftrace unit tests ===
[1] event trigger - test histogram modifiers [PASS]
# of passed: 1
# of failed: 0
# of
From: Namhyung Kim
The string in a trace event is usually recorded as a dynamic array, which
is variable length. But the current hist code only supports fixed-length
arrays, so it cannot support most strings.
This patch fixes it by checking the filter_type of the field and getting
the proper pointer with it. With
From: Masami Hiramatsu
This adds simple event trigger testcases for ftracetest,
which covers the following triggers.
- traceon-traceoff trigger
- enable/disable_event trigger
- snapshot trigger
- stacktrace trigger
- trigger filters
Here is the test result.
# ./ftracetest
From: Tom Zanussi
Allow users to have common_pid field values displayed as program names
in the output by appending '.execname' to a common_pid field name:
# echo hist:keys=common_pid.execname ... \
[ if filter] > event/trigger
Link:
From: Masami Hiramatsu
Add the hist trigger testcases for ftracetest.
This checks the basic histogram trigger behaviors, such as:
- Histogram trigger itself
- Histogram with string key
- Histogram with compound keys
- Histogram with sort key
- Histogram trigger modifiers (execname, hex,
Em Wed, Apr 20, 2016 at 06:01:47PM +, Wang Nan escreveu:
> In bpf_program__load(), load ubpf program according to its engine type.
>
> API is improved to hold 'struct ubpf_vm *'.
>
> Signed-off-by: Wang Nan
> Cc: Arnaldo Carvalho de Melo
> Cc: Alexei Starovoitov
> Cc: Brendan Gregg
From: Tom Zanussi
Allow users to define any number of hist triggers per trace event.
Any number of hist triggers may be added for a given event, which may
differ by key, value, or filter.
Reading the event's 'hist' file will display the output of all the
hist triggers defined on an event
I messed up with the subject prefix, but this is v11, adds typecheck()
to patch 2.
2016-04-20 Gustavo Padovan :
> From: Gustavo Padovan
>
> struct sync_merge_data already has documentation on top of the
> struct definition. No need to duplicate it.
>
> Signed-off-by: Gustavo Padovan
>