rcu_seq_snap may be tricky for someone looking at it for the first time.
Let's document how it works with an example to make it easier.
Signed-off-by: Joel Fernandes (Google)
---
v2 changes: Corrections as suggested by Randy.
kernel/rcu/rcu.h | 24 +++-
1 file changed, 23
As part of the gp_seq clean up, the Startleaf condition doesn't occur
anymore. Remove it from the comment in the trace event file.
Signed-off-by: Joel Fernandes (Google) <j...@joelfernandes.org>
---
include/trace/events/rcu.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/include
rcu_seq_snap may be tricky for someone looking at it for the first time.
Let's document how it works with an example to make it easier.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu.h | 23 ++-
1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/kernel
Hi,
Or maintain an array of registered irqs and iterate over them only.
>>> Right, we can allocate a bitmap of used irqs to do that.
>>>
I have another idea.
perf record shows mutex_lock/mutex_unlock at the top.
Most of them are irq mutex not seqfile mutex as there are many
On Fri, Apr 6, 2018 at 5:58 AM, Morten Rasmussen
wrote:
> On Thu, Apr 05, 2018 at 06:22:48PM +0200, Vincent Guittot wrote:
>> Hi Morten,
>>
>> On 5 April 2018 at 17:46, Morten Rasmussen wrote:
>> > On Wed, Apr 04, 2018 at 03:43:17PM +0200, Vincent Guittot wrote:
>> >> On 4 April 2018 at 12:44,
On Tue, Mar 27, 2018 at 6:27 AM, Mathieu Desnoyers
wrote:
>>> +static void find_tp(struct tracepoint *tp, void *priv)
>>> +{
>>> + struct tp_find_args *args = priv;
>>> +
>>> + if (!strcmp(tp->name, args->name)) {
>>> + WARN_ON_ONCE(args->tp);
>>> +
Hi Steve,
On Fri, Mar 23, 2018 at 8:02 AM, Steven Rostedt wrote:
> A while ago we had a boot tracer. But it was eventually removed:
> commit 30dbb20e68e6f ("tracing: Remove boot tracer").
>
> The rationale was that there is already an initcall_debug boot option
> that causes printk()s of all
Hi Mathieu,
On Mon, Mar 26, 2018 at 12:10 PM, Mathieu Desnoyers
wrote:
> Provide an API allowing eBPF to look up core kernel tracepoints by name.
>
> Given that a lookup by name explicitly requires tracepoint definitions
> to be unique for a given name (no duplicate keys), include a
>
On Tue, Feb 27, 2018 at 10:59 PM, Yisheng Xie wrote:
> ashmem_mutex may create a chain of dependencies like:
>
> CPU0                      CPU1
> mmap syscall              ioctl syscall
> -> mmap_sem (acquired)    -> ashmem_ioctl
> -> ashmem_mmap
Hi Steve,
On Sat, Oct 7, 2017 at 6:32 AM, Steven Rostedt wrote:
> On Fri, 6 Oct 2017 23:41:25 -0700
> "Joel Fernandes (Google)" wrote:
>
>> Hi Steve,
>>
>> On Fri, Oct 6, 2017 at 11:07 AM, Steven Rostedt wrote:
>> > From: "Steven Rostedt (VMw
Hi Steve,
On Fri, Oct 6, 2017 at 11:07 AM, Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)"
>
> The ftrace_mod_map is a descriptor to save module init function names in
> case they were traced, and the trace output needs to reference the function
> name from the function address. But
Hi Byungchul,
On Thu, Aug 17, 2017 at 11:05 PM, Byungchul Park wrote:
> It would be better to avoid pushing tasks to another cpu within
> a SD_PREFER_SIBLING domain, instead, get more chances to check other
> siblings.
>
> Signed-off-by: Byungchul Park
> ---
> kernel/sched/deadline.c | 55
>
On Thu, Aug 17, 2017 at 6:25 PM, Byungchul Park wrote:
> On Mon, Aug 07, 2017 at 12:50:32PM +0900, Byungchul Park wrote:
>> When cpudl_find() returns any among free_cpus, the cpu might not be
>> closer than others, considering sched domain. For example:
>>
>> this_cpu: 15
>> free_cpus: 0,
On Thu, Jul 27, 2017 at 12:55 PM, Saravana Kannan
wrote:
> On 07/26/2017 08:30 PM, Viresh Kumar wrote:
>>
>> On 26-07-17, 14:00, Saravana Kannan wrote:
>>>
>>> No, the alternative is to pass it on to the CPU freq driver and let it
>>> decide what it wants to do. That's the whole point of having a
On Thu, Jul 27, 2017 at 12:21 AM, Juri Lelli wrote:
[..]
>> >
>> > But even without that, if you see the routine
>> > init_entity_runnable_average() in fair.c, the new tasks are
>> > initialized in a way that they are seen as heavy tasks. And so even
>> > for the first time they run, freq should
On Thu, Jul 27, 2017 at 12:14 AM, Viresh Kumar wrote:
> On 26-07-17, 23:13, Joel Fernandes (Google) wrote:
>> On Wed, Jul 26, 2017 at 10:50 PM, Viresh Kumar
>> wrote:
>> > On 26-07-17, 22:34, Joel Fernandes (Google) wrote:
>> >> On Wed, Jul 26, 2017 at
Hi Viresh,
On Wed, Jul 26, 2017 at 10:46 PM, Viresh Kumar wrote:
> On 26-07-17, 22:14, Joel Fernandes (Google) wrote:
>> Also one more comment about this usecase:
>>
>> You mentioned in our discussion at [2] sometime back, about the
>> question of initial utilizatio
On Wed, Jul 26, 2017 at 10:50 PM, Viresh Kumar wrote:
> On 26-07-17, 22:34, Joel Fernandes (Google) wrote:
>> On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar
>> wrote:
>> > @@ -221,7 +226,7 @@ static void sugov_update_single(struct
>> >
On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar wrote:
> This patch updates the schedutil governor to process cpufreq utilization
> update hooks called for remote CPUs where the remote CPU is managed by
> the cpufreq policy of the local CPU.
>
> Based on initial work from Steve Muckle.
>
>
Hi Viresh,
On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar wrote:
> We do not call cpufreq callbacks from scheduler core for remote
> (non-local) CPUs currently. But there are cases where such remote
> callbacks are useful, especially in the case of shared cpufreq policies.
>
> This patch updates
Hi Viresh,
On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar wrote:
>
> With Android UI and benchmarks the latency of cpufreq response to
> certain scheduling events can become very critical. Currently, callbacks
> into schedutil are only made from the scheduler if the target CPU of the
> event is
On Mon, Jun 26, 2017 at 3:49 PM, Tom Zanussi
wrote:
> RINGBUF_TYPE_TIME_STAMP is defined but not used, and from what I can
> gather was reserved for something like an absolute timestamp feature
> for the ring buffer, if not a complete replacement of the current
> time_delta scheme.
>
> This code
Hi Tom,
Nice series and nice ELC talk as well. Thanks.
On Mon, Jun 26, 2017 at 3:49 PM, Tom Zanussi
wrote:
> This patchset adds support for 'inter-event' quantities to the trace
> event subsystem. The most important example of inter-event quantities
> are latencies, or the time differences
Hi Patrick,
On Thu, Mar 23, 2017 at 3:32 AM, Patrick Bellasi
wrote:
[..]
>> > which can be used to define tunable root constraints when CGroups are
>> > not available, and becomes RO when CGroups are.
>> >
>> > Can this be eventually an acceptable option?
>> >
>> > In any case I think that this
Hi Tejun,
>> That's also why the proposed interface has now been defined as an extension of
>> the CPU controller in such a way to keep a consistent view.
>>
>> This controller is already used by run-times like Android to "scope" apps by
>> constraining the amount of CPUs resource they are
Hi,
On Mon, Mar 20, 2017 at 11:08 AM, Patrick Bellasi
wrote:
> On 20-Mar 13:15, Tejun Heo wrote:
>> Hello,
>>
>> On Tue, Feb 28, 2017 at 02:38:38PM +, Patrick Bellasi wrote:
[..]
>> > These attributes:
>> > a) are tunable at all hierarchy levels, i.e. root group too
>>
>> This usually is
On Tue, Feb 28, 2017 at 6:38 AM, Patrick Bellasi
wrote:
> The CPU CGroup controller allows assigning a specified (maximum)
> bandwidth to tasks within a group, however it does not enforce any
> constraint on how such bandwidth can be consumed.
> With the integration of schedutil, the scheduler
Hi Patrick,
On Tue, Feb 28, 2017 at 6:38 AM, Patrick Bellasi
wrote:
> Currently schedutil enforces a maximum OPP when RT/DL tasks are RUNNABLE.
> Such a mandatory policy can be made more tunable from userspace thus
> allowing for example to define a reasonable max capacity (i.e.
> frequency)