This patch makes sure blink hardware is disabled for the selected GPIO. Blink
hardware is controlled by the GPO_BLINK register and is available for GPIOs 0
to 31.
Signed-off-by: Vincent Donnefort vdonnef...@gmail.com
diff --git a/drivers/gpio/gpio-ich.c b/drivers/gpio/gpio-ich.c
index de3c317
This patch makes sure blink hardware is disabled for the selected GPIO. Blink
hardware is controlled by the GPO_BLINK register and is available for GPIOs 0
to 31.
Signed-off-by: Vincent Donnefort vdonnef...@gmail.com
---
Changes for v2:
- Rebased on for-next branch of linux-gpio git tree
From: Vincent Donnefort vdonnef...@gmail.com
Fix the following compilation warning:
drivers/irqchip/irq-armada-370-xp.c:55:23: warning: 'irq_controller_lock'
defined but not used [-Wunused-variable]
Signed-off-by: Vincent Donnefort vdonnef...@gmail.com
diff --git a/drivers/irqchip/irq-armada
This is almost certainly caused by the uninitialized regs ptr
in the ich6_desc struct (i3100_desc struct has the same problem)
introduced in this commit:
commit bb62a35bd5d96e506af0ea8dd145480b9172a2a6
Author: Vincent Donnefort vdonnef...@gmail.com
Date: Fri Feb 14 15:01:56
b667cf488aa9476b0ab64acd91f2a96f188cfd21 is the first bad commit
commit b667cf488aa9476b0ab64acd91f2a96f188cfd21
Author: Vincent Donnefort vdonnef...@gmail.com
Date: Fri Feb 7 14:21:05 2014 +0100
gpio: ich: Add support for multiple register addresses
This patch introduces regs and reglen pointers which
From: Vincent Donnefort vdonnef...@gmail.com
This patch fixes a kernel NULL pointer BUG introduced by the following commit:
b667cf488aa9476b0ab64acd91f2a96f188cfd21
gpio: ich: Add support for multiple register addresses.
Signed-off-by: Vincent Donnefort vdonnef...@gmail.com
diff --git a/drivers
On Sat, Aug 23, 2014 at 01:24:56PM -0400, Tejun Heo wrote:
Hello,
On Fri, Aug 22, 2014 at 05:21:30PM -0700, Bryan Wu wrote:
On Tue, Aug 19, 2014 at 6:51 PM, Hugh Dickins hu...@google.com wrote:
On Tue, 19 Aug 2014, Vincent Donnefort wrote:
This patch introduces a work which takes care of resetting the blink workqueue
and avoids calling cancel_delayed_work_sync(), which may sleep, from an IRQ
context.
Signed-off-by: Vincent Donnefort vdonnef...@gmail.com
diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
index
Hugh,
Here's a patch which should fix your problem. It allows calling led_blink_set()
from an IRQ handler by adding a work to take care of the sleeping function
cancel_delayed_work_sync().
Regards,
Vincent.
Vincent Donnefort (1):
leds: make led_blink_set IRQ safe
drivers/leds/led-class.c
From: Vincent Donnefort
device_release() is freeing the resources before calling the device
specific release callback which is, in the case of devfreq, stopping
the governor.
It is a problem as some governors are using the device resources. e.g.
simpleondemand which is using the devfreq
On Wed, Jan 20, 2021 at 01:58:35PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 11, 2021 at 05:10:45PM +, vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
> >
> > The atomic states (between CPUHP_AP_IDLE_DEAD and CPUHP_AP_ONLINE) are
> > triggered by
On Wed, Jan 20, 2021 at 06:53:33PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 20, 2021 at 06:45:16PM +0100, Peter Zijlstra wrote:
> > On Mon, Jan 11, 2021 at 05:10:46PM +, vincent.donnef...@arm.com wrote:
> > > @@ -475,6 +478,11 @@ cpuhp_set_state(struct cpuhp_cpu_state *st, enum
> > >
On Thu, Jan 21, 2021 at 03:57:03PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 11, 2021 at 05:10:47PM +, vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
> >
> > After the AP brought itself down to CPUHP_TEARDOWN_CPU, the BP will finish
> > the job. The
On Mon, Sep 21, 2020 at 06:36:02PM +0200, Peter Zijlstra wrote:
[...]
> +
> + [CPUHP_AP_SCHED_WAIT_EMPTY] = {
> + .name = "sched:waitempty",
> + .startup.single = NULL,
> + .teardown.single = sched_cpu_wait_empty,
> +
From: Vincent Donnefort
rq->cpu_capacity is a key element in several scheduler parts, such as EAS
task placement and load balancing. Tracking this value enables testing
and/or debugging by a toolkit.
Signed-off-by: Vincent Donnefort
diff --git a/include/linux/sched.h b/include/linux/sche
Hi Valentin,
On Thu, Dec 10, 2020 at 04:38:30PM +, Valentin Schneider wrote:
> Per-CPU kworkers forcefully migrated away by hotplug via
> workqueue_offline_cpu() can end up spawning more kworkers via
>
> manage_workers() -> maybe_create_worker()
>
> Workers created at this point will be
On Fri, Dec 11, 2020 at 01:13:35PM +, Valentin Schneider wrote:
> On 11/12/20 12:51, Valentin Schneider wrote:
> >> In that case maybe we should check for the cpu_active_mask here too ?
> >
> > Looking at it again, I think we might need to.
> >
> > IIUC you can end up with pools bound to a
Hi Peter,
[...]
> > >
> > > How about something like:
> > >
> > > #ifdef CONFIG_64BIT
> > >
> > > #define DEFINE_U64_U32(name)	u64 name
> > > #define u64_u32_load(name)	name
> > > #define u64_u32_store(name, val)	name = val
> > >
> > > #else
> > >
> > > #define
From: Vincent Donnefort
The util_est signals are key elements for EAS task placement and
frequency selection. Having tracepoints to track these signals enables
load-tracking and schedutil testing and/or debugging by a toolkit.
Signed-off-by: Vincent Donnefort
diff --git a/include/trace/events
From: Vincent Donnefort
Introducing two macro helpers u64_32read() and u64_32read_set_copy() to
factorize the u64 min_vruntime and last_update_time reads on a 32-bit
architecture. Those new helpers encapsulate smp_rmb() and smp_wmb()
synchronization and therefore have a small penalty
Hi,
On Mon, Jul 27, 2020 at 01:24:54PM +0200, Ingo Molnar wrote:
>
> * vincent.donnef...@arm.com wrote:
>
> > From: Vincent Donnefort
> >
> > Introducing two macro helpers u64_32read() and u64_32read_set_copy() to
> > factorize the u64 min_vruntime and l
On Mon, Jul 27, 2020 at 02:38:01PM +0200, pet...@infradead.org wrote:
> On Mon, Jul 27, 2020 at 11:59:24AM +0100, vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
> >
> > Introducing two macro helpers u64_32read() and u64_32read_set_copy() to
> > fa
Hi,
On Tue, Jul 28, 2020 at 02:00:27PM +0200, pet...@infradead.org wrote:
> On Tue, Jul 28, 2020 at 01:13:02PM +0200, pet...@infradead.org wrote:
> > On Mon, Jul 27, 2020 at 04:23:03PM +0100, Vincent Donnefort wrote:
> >
> > > For 32-bit architectures, both min_vrunt
> bisect and found:
> >
> > $ git bisect good
> > b667cf488aa9476b0ab64acd91f2a96f188cfd21 is the first bad commit
> > commit b667cf488aa9476b0ab64acd91f2a96f188cfd21
> > Author: Vincent Donnefort
> > Date: Fri Feb 7 14:21:05 2014 +0100
> >
> >
From: Vincent Donnefort
This patch-set intends mainly to fix HP rollback, which is currently broken,
due to an inconsistent "state" usage and an issue with CPUHP_AP_ONLINE_IDLE.
It also improves the "fail" interface, which can now be reset and will reject
CPUHP_BRINGUP_CP
From: Vincent Donnefort
Currently, the only way of resetting this file is to actually try to run
a hotplug, hotunplug or both. This is quite annoying for testing and, as
the default value for this file is -1, it seems quite natural to let a
user write it.
Signed-off-by: Vincent Donnefort
From: Vincent Donnefort
The atomic states (between CPUHP_AP_IDLE_DEAD and CPUHP_AP_ONLINE) are
triggered by the CPUHP_BRINGUP_CPU step. If the latter doesn't run, none
of the atomic can. Hence, rollback is not possible after a hotunplug
CPUHP_BRINGUP_CPU step failure and the "fail"
From: Vincent Donnefort
After the AP brought itself down to CPUHP_TEARDOWN_CPU, the BP will finish
the job. The steps left are as follows:
+--------------------+
| CPUHP_TEARDOWN_CPU | -> If it fails, state is CPUHP_TEARDOWN_CPU
+--------------------+
| ATOMIC STATES      | ->
From: Vincent Donnefort
Factorizing and unifying cpuhp callback range invocations, especially for
the hotunplug path, where two different ways of decrementing were used. The
first one decrements before the callback is called:
cpuhp_thread_fun()
state = st->state;
st->
On Thu, Feb 25, 2021 at 04:26:50PM +0100, Vincent Guittot wrote:
> On Mon, 22 Feb 2021 at 10:24, Vincent Donnefort
> wrote:
> >
> > On Fri, Feb 19, 2021 at 11:48:28AM +0100, Vincent Guittot wrote:
> > > On Tue, 16 Feb 2021 at 17:39, wrote:
/hikey960.
Signed-off-by: Vincent Donnefort
Reviewed-by: Dietmar Eggemann
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9e4104ae39ae..214e02862994 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3966,24 +3966,27 @@ static inline void util_est_dequeue(struct cfs_rq *cfs_rq
On Thu, Feb 25, 2021 at 12:45:06PM +0100, Dietmar Eggemann wrote:
> On 25/02/2021 09:36, vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
>
> [...]
>
> > cpu_util_next() estimates the CPU utilization that would happen if the
> > task was placed on dst_
On Mon, Mar 01, 2021 at 06:21:23PM +0100, Peter Zijlstra wrote:
> On Mon, Mar 01, 2021 at 05:34:09PM +0100, Dietmar Eggemann wrote:
> > On 26/02/2021 09:41, Peter Zijlstra wrote:
> > > On Thu, Feb 25, 2021 at 04:58:20PM +, Vincent Donnefort wrote:
> > >
From: Vincent Donnefort
The sub_positive local version is saving an explicit load-store and is
enough for the cpu_util_next() usage.
Signed-off-by: Vincent Donnefort
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 146ac9fec4b6..1364f8b95214 100644
--- a/kernel/sched/fair.c
+++ b
From: Vincent Donnefort
find_energy_efficient_cpu() (feec()) computes for each perf_domain (pd) an
energy delta as follows:
feec(task)
    for_each_pd
        base_energy = compute_energy(task, -1, pd)
            -> for_each_cpu(pd)
                -> cpu_util_next(cpu, task, -1)
        energy
From: Vincent Donnefort
Changelog since v1:
- Fix the issue in compute_energy(), as a change in cpu_util_next() would
break the OPP selection estimation.
- Separate patch for lsub_positive usage in cpu_util_next()
Vincent Donnefort (2):
sched/fair: Fix task utilization accountability
Hi Quentin,
On Mon, Feb 22, 2021 at 10:11:03AM +, Quentin Perret wrote:
> Hey Vincent,
>
> On Monday 22 Feb 2021 at 09:54:01 (+), vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
> >
> > Currently, cpu_util_next() estimates the CPU utilization
is hence changed from O(n) to O(1). This also
speeds-up em_cpu_energy() even if no inefficient OPPs have been found.
Signed-off-by: Vincent Donnefort
diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 757fc60..90b9cb0 100644
--- a/include/linux/energy_model.h
+++ b
o,
despite not appearing in the statistics (the idle driver used here doesn't
report it), we can speculate that we also improve the cluster idle time.
[1] WFI: Wait for interrupt.
Vincent Donnefort (1):
PM / EM: Inefficient OPPs detection
include/linux/energy_model.h
On Thu, Apr 15, 2021 at 03:04:34PM +, Quentin Perret wrote:
> On Thursday 15 Apr 2021 at 15:12:08 (+0100), Vincent Donnefort wrote:
> > On Thu, Apr 15, 2021 at 01:12:05PM +, Quentin Perret wrote:
> > > Hi Vincent,
> > >
> > > On Thursday 08 Apr 2021 at
On Thu, Apr 15, 2021 at 02:59:54PM +, Quentin Perret wrote:
> On Thursday 15 Apr 2021 at 15:34:53 (+0100), Vincent Donnefort wrote:
> > On Thu, Apr 15, 2021 at 01:16:35PM +, Quentin Perret wrote:
> > > On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vince
On Thu, Apr 15, 2021 at 01:12:05PM +, Quentin Perret wrote:
> Hi Vincent,
>
> On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wrote:
> > Some SoCs, such as the sd855 have OPPs within the same performance domain,
> > whose cost is higher than others with a h
On Thu, Apr 15, 2021 at 01:16:35PM +, Quentin Perret wrote:
> On Thursday 08 Apr 2021 at 18:10:29 (+0100), Vincent Donnefort wrote:
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -10,6 +10,7 @@
> >
> > #incl
On Thu, Apr 15, 2021 at 03:32:11PM +0100, Valentin Schneider wrote:
> On 15/04/21 10:59, Peter Zijlstra wrote:
> > Can't make sense of what I did.. I've removed that hunk. Patch now looks
> > like this.
> >
>
> Small nit below, but regardless feel free to apply to the whole lot:
> Reviewed-by:
On Mon, Apr 19, 2021 at 11:56:30AM +0100, Vincent Donnefort wrote:
> On Thu, Apr 15, 2021 at 03:32:11PM +0100, Valentin Schneider wrote:
> > On 15/04/21 10:59, Peter Zijlstra wrote:
> > > Can't make sense of what I did.. I've removed that hunk. Patch now looks
> > > like
On Tue, Apr 20, 2021 at 04:58:00PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 20, 2021 at 04:39:04PM +0200, Peter Zijlstra wrote:
> > On Tue, Apr 20, 2021 at 04:20:56PM +0200, Peter Zijlstra wrote:
> > > On Tue, Apr 20, 2021 at 10:46:33AM +0100, Vincent Donnefort wrote:
From: Vincent Donnefort
This patch-set intends to unify step calls throughout hotplug and
hotunplug.
It also improves the "fail" interface, which can now be reset and will
reject states for which a failure can't be recovered.
v2:
- Reject all DEAD steps in the fail interface.
-
From: Vincent Donnefort
The atomic states (between CPUHP_AP_IDLE_DEAD and CPUHP_AP_ONLINE) are
triggered by the CPUHP_BRINGUP_CPU step. If the latter fails, no atomic
state can be rolled back.
DEAD callbacks too can't fail and disallow recovery. As a consequence,
during hotunplug, the fail
From: Vincent Donnefort
Currently, the only way of resetting the fail injection is to trigger a
hotplug, hotunplug or both. This is rather annoying for testing
and, as the default value for this file is -1, it seems pretty natural to
let a user write it.
Signed-off-by: Vincent Donnefort
diff
From: Vincent Donnefort
Being called for each dequeue, util_est reduces the number of its updates
by filtering out when the EWMA signal is different from the task util_avg
by less than 1%. It is a problem for a sudden util_avg ramp-up. Due to the
decay from a previous high util_avg, EWMA might
From: Vincent Donnefort
Currently, cpu_util_next() estimates the CPU utilization as follows:
max(cpu_util + task_util,
cpu_util_est + task_util_est)
This is an issue when making a comparison between CPUs, as the task
contribution can be either:
(1) task_util_est, on a mostly idle
On Fri, Feb 19, 2021 at 11:48:28AM +0100, Vincent Guittot wrote:
> On Tue, 16 Feb 2021 at 17:39, wrote:
> >
> > From: Vincent Donnefort
> >
> > Being called for each dequeue, util_est reduces the number of its updates
> > by filtering out when the EWMA signal is
On Fri, Feb 19, 2021 at 11:19:05AM +0100, Dietmar Eggemann wrote:
> On 16/02/2021 17:39, vincent.donnef...@arm.com wrote:
> > From: Vincent Donnefort
> >
> > Being called for each dequeue, util_est reduces the number of its updates
> > by filtering out when the EW
On Mon, Feb 22, 2021 at 12:23:04PM +, Quentin Perret wrote:
> On Monday 22 Feb 2021 at 11:36:03 (+), Vincent Donnefort wrote:
> > Here's with real life numbers.
> >
> > The task: util_avg=3 (1) util_est=11 (2)
> >
> > pd0 (CPU-0, CPU-1, CPU-2)
On Mon, Feb 22, 2021 at 04:23:42PM +, Quentin Perret wrote:
> On Monday 22 Feb 2021 at 15:58:56 (+), Quentin Perret wrote:
> > But in any case, if we're going to address this, I'm still not sure this
> > patch will be what we want. As per my first comment we need to keep the
> > frequency
On Mon, Feb 22, 2021 at 03:58:56PM +, Quentin Perret wrote:
> On Monday 22 Feb 2021 at 15:01:51 (+), Vincent Donnefort wrote:
> > You mean that it could lead to a wrong frequency estimation when doing
> > freq = map_util_freq() in em_cpu_energy()?
>
> I'm
On Sat, Feb 03, 2024 at 07:33:51PM -0500, Steven Rostedt wrote:
> On Mon, 29 Jan 2024 14:27:58 +
> Vincent Donnefort wrote:
>
> > --- /dev/null
> > +++ b/include/uapi/linux/trace_mmap.h
> > @@ -0,0 +1,43 @@
> > +/* SPDX-License-Identifier: GPL-2.0 WITH L
trace_types_lock, a new spinlock is
introduced to serialize accesses to trace_array->snapshot. This intends
to allow access to that variable in a context where the mmap lock is
already held.
Signed-off-by: Vincent Donnefort
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2a7c6fd93
L_GET_READER. This will update the Meta-page reader ID to
point to the next reader containing unread data.
Mapping will prevent snapshot and buffer size modifications.
Signed-off-by: Vincent Donnefort
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
index
It is now possible to mmap() a ring-buffer to stream its content. Add
some documentation and a code example.
Signed-off-by: Vincent Donnefort
diff --git a/Documentation/trace/index.rst b/Documentation/trace/index.rst
index 5092d6c13af5..0b300901fd75 100644
--- a/Documentation/trace/index.rst
their unique ID, assigned during the
first mapping.
Once mapped, no subbuf can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice enabling functions will in
reality simply memcpy the data instead of swapping subbufs.
Signed-off-by: Vincent Donnefort
diff
rder > 0 meta-page
* Add a new meta page field ->read
* Rename ring_buffer_meta_page_header into ring_buffer_meta_header
v1 -> v2:
* Hide data_pages from the userspace struct
* Fix META_PAGE_MAX_PAGES
* Support for order > 0 meta-page
* Add missing page->mapping.
Vincent Donnefort (6):
In preparation for the ring-buffer memory mapping where each subbuf will
be accessible to user-space, zero all the page allocations.
Signed-off-by: Vincent Donnefort
Reviewed-by: Masami Hiramatsu (Google)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index fd4bfe3ecf01
[...]
> > +static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
> > +{
> > + struct trace_buffer_meta *meta = cpu_buffer->meta_page;
> > +
> > + meta->reader.read = cpu_buffer->reader_page->read;
> > + meta->reader.id = cpu_buffer->reader_page->id;
> > +
+ map->bpage_size * subbuf);
while (kbuf->curr < read)
kbuffer_next_event(kbuf, NULL);
read_page(tep, kbuf);
}
munmap(data, data_len);
munmap(meta, page_size);
clo
following their unique ID, assigned during the
first mapping.
Once mapped, no bpage can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice enabling functions will in
reality simply memcpy the data instead of swapping the buffer pages.
Signed-off-by: Vincent Donnefort
> 0 and a struct buffer_page (often referred to as "bpage")
already exists. We have then an unnecessary duplicate subbuffer ==
bpage.
Remove all references to sub-buffer and replace them with either bpage
or ring_buffer_page.
Signed-off-by: Vincent Donnefort
---
I forgot this patch