Older kernels (e.g., RHEL6) do system call tracing via
syscalls:sys_{enter,exit} rather than raw_syscalls.
Update the perf scripts to fall back to the syscalls tracepoints in
the absence of raw_syscalls support.
Signed-off-by: Daniel Bristot de Oliveira bris...@redhat.com
---
tools/perf/scripts/perl/bin/failed-syscalls
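The fallback the changelog describes can be sketched as follows; the tracefs path and the function are illustrative, not the actual perf script code:

```python
import os

def pick_syscall_events(tracefs="/sys/kernel/debug/tracing"):
    """Prefer raw_syscalls:sys_{enter,exit}; fall back to the older
    syscalls:sys_{enter,exit}_* events on kernels (e.g. RHEL6) that
    lack raw_syscalls support. Illustrative sketch only."""
    if os.path.isdir(os.path.join(tracefs, "events", "raw_syscalls")):
        return ["raw_syscalls:sys_enter", "raw_syscalls:sys_exit"]
    return ["syscalls:sys_enter_*", "syscalls:sys_exit_*"]
```

The real scripts would pass the chosen event names to perf record; the detection itself is just a directory check under tracefs.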
the ftrace's
function_graph). In the traces that I read, the netconsole code never
broke the rt lock assumptions.
Signed-off-by: Daniel Bristot de Oliveira bris...@redhat.com
---
drivers/net/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index
On 05/01/2014 11:15 AM, Paul Gortmaker wrote:
Chances are the two real targets above were using the
same igb network driver. I wonder if it matters what
underlying nic is used for netconsole?
These three machines are using different drivers:
+---------+-----+
| Machine | NIC
On 04/10/2014 11:44 AM, Clark Williams wrote:
On Wed, 9 Apr 2014 15:19:22 -0400
Steven Rostedt rost...@goodmis.org wrote:
This patch is built on top of the two other patches that I posted
earlier, which should not be as controversial.
If you have any benchmark on large machines I would be
...@goodmis.org
Signed-off-by: Daniel Bristot de Oliveira bris...@redhat.com
---
kernel/sched/core.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bc1638b..0acf96b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
in the trace output:
0) == |
0) d... | smp_apic_timer_interrupt() {
This patch fixes this issue by printing the latency-format flags when
it is enabled.
Signed-off-by: Daniel Bristot de Oliveira bris...@redhat.com
Reviewed-by: Luis Claudio R. Goncalves lgonc...@redhat.com
On 04/23/2015 05:21 PM, Thomas Gleixner wrote:
I know of a SMI event counter which is available on newer CPUs and
Intel promised to add a SMI cycle counter as well. I have no idea
whether that one ever materialized. PeterZ should know.
turbostat shows how many SMIs happened during a
On 04/20/2015 02:59 PM, Clark Williams wrote:
That said, if the lack of a sysfs knob has been causing real problems,
let's make that happen.
I'll talk to the other RT-ers and get back to you on that. I suspect
most folks would like it just to not have to reboot while tuning, but
not sure
Il 29/05/2015 17:24, John Stultz ha scritto:
Thus this patch series tries to address this issue, including
extending the leap-a-day test to catch this problem, as well
as other relevant fixups I found while working on the code.
This series has only had limited testing, so I wanted to send
Il 01/06/2015 18:42, Prarit Bhargava ha scritto:
Daniel, did you disable chronyd/ntpd? I've seen both failures if I leave
chronyd running.
P.
Prarit, John, that is it, chronyd was running.
- Daniel
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a
Il 27/05/2015 20:09, John Stultz ha scritto:
Hrm.. Thanks for the report! Looks like this could happen on !NOHZ as
well, and is an artifact of the fact the leapsecond is being applied
by a timer.
Yes, I reproduced it on a system with nohz=off
-- Daniel
--
To unsubscribe from this list: send
By default, unbound workqueues run on all CPUs, which includes
isolated CPUs. This patch avoids unbound workqueues running on
isolated CPUs by default, keeping the current behavior when no
CPUs are isolated.
Signed-off-by: Daniel Bristot de Oliveira bris...@redhat.com
---
kernel/workqueue.c
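The intended mask computation can be modeled as follows; this is an illustrative sketch of the changelog's intent, not the kernel's cpumask code:

```python
def default_unbound_cpumask(possible, isolated):
    """Default CPU mask for unbound workqueues: exclude isolated CPUs,
    but if no CPUs are isolated (or excluding them would leave the
    mask empty), keep the current behavior of using all CPUs."""
    remaining = possible - isolated
    return remaining if remaining else possible
```

For example, with CPUs 0-3 possible and 2-3 isolated, unbound work would be confined to CPUs 0-1; with nothing isolated, all four CPUs remain usable, as before.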
output:
# trace-cmd record -f foo
filter must come after event
Signed-off-by: Daniel Bristot de Oliveira bris...@redhat.com
---
trace-record.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/trace-record.c b/trace-record.c
index 3e5def2..45826b6 100644
--- a/trace-record.c
.
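The ordering rule behind the "filter must come after event" error can be sketched as a small validator; the function is a hypothetical model of the check, not trace-cmd's actual option parser:

```python
def check_filter_order(argv):
    """Return an error string if a -f filter appears before any -e
    event. A filter applies to the most recently specified event,
    so 'trace-cmd record -f foo' (no prior -e) must be rejected."""
    seen_event = False
    for arg in argv:
        if arg == "-e":
            seen_event = True
        elif arg == "-f" and not seen_event:
            return "filter must come after event"
    return None
```

With this model, `record -f foo` fails, while `record -e sched_switch -f "prev_prio < 100"` is accepted.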
This problem does not affect the irqsoff tracer because interruptions
are enabled before entering the idle loop.
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
Reviewed-by: Luis Claudio R. Goncalves <lgonc...@redhat.com>
---
kernel/sched/idle.c | 2 ++
1 file changed, 2 inserti
Borntraeger <borntrae...@de.ibm.com>
Cc: "Luis Claudio R. Goncalves" <lgonc...@redhat.com>
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
---
kernel/sched/core.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sc
On 06/07/2016 04:30 PM, Tejun Heo wrote:
> Is this something in mainline? This forces all task free path to be
> irq-safe, which *could* be fine but it's weird to make cgroup free
> path irq-safe for something which isn't in mainline.
You mean the mainline Linux kernel? If so, yes, it is. I was
On 06/06/2016 06:03 PM, Peter Zijlstra wrote:
> >> +/*
> >> + * Tracepoint for priority changes of a task.
> >> + */
> >> +DEFINE_EVENT(sched_prio_template, sched_set_prio,
> >> + TP_PROTO(struct task_struct *tsk, int newprio),
> >> +
Ciao Juri,
On 06/07/2016 07:14 AM, Juri Lelli wrote:
> Interesting. And your test is using cpuset controller to partion
> DEADLINE tasks and then modify groups concurrently?
Yes. I was studying the partitioning/admission control of the
deadline scheduler, to document it.
I was using the minimal
Ciao Juri,
On 06/07/2016 10:30 AM, Juri Lelli wrote:
> So, this and the partitioned one could actually overlap, since we don't
> set cpu_exclusive. Is that right?
>
> I guess affinity mask of both m processes gets set correctly, but I'm
> not sure if we are missing one check in the admission
Oops.
While doing further tests on my patch I found a problem:
[ 82.390739] =
[ 82.390749] [ INFO: inconsistent lock state ]
[ 82.390759] 4.7.0-rc2+ #5 Not tainted
[ 82.390768] -
[ 82.390777] inconsistent {HARDIRQ-ON-W} ->
: Tejun Heo <t...@kernel.org>
Cc: Li Zefan <lize...@huawei.com>
Cc: Johannes Weiner <han...@cmpxchg.org>
Cc: Juri Lelli <juri.le...@arm.com>
Cc: cgro...@vger.kernel.org
Reviewed-by: Rik van Riel <r...@redhat.com>
Reviewed-by: "Luis Claudio R. Goncalves" <lgonc..
On 05/31/2016 04:23 PM, Josh Triplett wrote:
Hi Josh,
> Sorry, realized something else a moment after sending: I don't think
> this will build if you use the tiny RCU implementation. That
> implementation *does* support tracing, and if you enable tracing,
> you'll have
lo <a...@kernel.org>
Tested-by: "Luis Claudio R. Goncalves" <lgonc...@redhat.com>
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
---
Documentation/sysctl/kernel.txt | 12
include/linux/kernel.h | 1 +
kernel/rcu/tree.c |
On 06/01/2016 06:45 AM, Peter Zijlstra wrote:
> Do we really need more panic_on_* knobs? Can't we re-purpose
> panic_on_warn for this?
I think this case is very specific, specific enough to deserve its own
sysctl. But I see your point, and the possibilities I can see are:
1) convert the
adead.org>
Reviewed-by: Arnaldo Carvalho de Melo <a...@kernel.org>
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
---
Documentation/sysctl/kernel.txt | 13 +
include/linux/kernel.h | 1 +
kernel/sched/core.c | 7 +++
kernel/sysct
the kernel to include the panic() call. For instance
when supporting enterprise users.
Daniel Bristot de Oliveira (2):
rcu: sysctl: Panic on RCU Stall
sched: sysctl: Panic on scheduling while atomic
Documentation/sysctl/kernel.txt | 25 +
include/linux/kernel.h
is Claudio R. Goncalves" <lgonc...@redhat.com>
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
---
Documentation/sysctl/kernel.txt | 12
include/linux/kernel.h | 1 +
kernel/rcu/tree.c | 8
kernel/sysctl.c
On 06/16/2016 07:14 PM, Tejun Heo wrote:
> Except that the patch seems to use irqsave/restore instead of plain
> irq ones in places. Care to update those?
Hi Tejun,
The spin_(un)lock_irq() functions assume that the code is always
called with IRQs enabled. But that is not always true in
On 06/07/2016 05:05 PM, Daniel Bristot de Oliveira wrote:
> On 06/07/2016 04:30 PM, Tejun Heo wrote:
>> Is this something in mainline? This forces all task free path to be
>> irq-safe, which *could* be fine but it's weird to make cgroup free
>> path irq-safe for something wh
Hi Tejun,
On 06/17/2016 02:36 AM, Tejun Heo wrote:
> Please use _irq and _irqsave
> appropriately depending on the circumstances.
ack! I will do it!
Cooking a v3:
- using _irq and _irqsave appropriately, and
- using raw_spin locks functions.
Thanks! -- Daniel
.org
Reviewed-by: Rik van Riel <r...@redhat.com>
Reviewed-by: "Luis Claudio R. Goncalves" <lgonc...@redhat.com>
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
---
Changes from v2:
Use spin_lock_irq() where we know that IRQs are enabled
Changes from v1:
A
Hi
On 06/17/2016 04:59 PM, Daniel Bristot de Oliveira wrote:
> - using _irq and _irqsave appropriately, and
> - using raw_spin locks functions.
After some patches/tests on -rt, I figured that there is a -rt specific
patch that moves cgroup_free() calls to the non-atomic context (in t
On 02/23/2016 07:44 AM, Peter Zijlstra wrote:
>>> Worse, the proposed tracepoints are atrocious, look at crap like this:
>>> > >
> > > +if (trace_sched_deadline_yield_enabled()) {
> > > +u64 delta_exec = rq_clock_task(rq) -
> > > p->se.exec_start;
On 02/22/2016 02:48 PM, Steven Rostedt wrote:
> On Mon, 22 Feb 2016 18:32:59 +0100
> Peter Zijlstra wrote:
>
>
>> > So I'm a bit allergic to tracepoints and this is very flimsy on reasons
>> > why I would want to do this.
> Because there's no way to know if
On 02/23/2016 07:44 AM, Peter Zijlstra wrote:
> Now ideally we'd do something like the below, but because trainwreck, we
> cannot actually do this I think :-(
Some other considerations:
1) The majority of tasks run on NORMAL scheduler with default nice. So,
prev=NORMAL:{0,0,0} and
Move dl_task_of(), dl_rq_of_se() and rq_of_dl_rq() helper functions
from kernel/sched/deadline.c to kernel/sched/sched.h, so they
can be used on other scheduler files.
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
---
kernel/sched/deadline.c | 18 --
kernel
-by: Daniel Bristot de Oliveira <bris...@redhat.com>
---
tools/lib/traceevent/event-parse.c | 4
1 file changed, 4 insertions(+)
diff --git a/tools/lib/traceevent/event-parse.c
b/tools/lib/traceevent/event-parse.c
index c3bd294..575e751 100644
--- a/tools/lib/traceevent/event-parse.c
task called
sched_yield(), and will wait for the next period.
- sched:sched_deadline_throttle: Informs that a task consumed all its
available runtime and was throttled.
- sched:sched_deadline_block: Informs that a deadline task went to sleep
waiting to be awakened by another task.
Daniel Bristot
: sched_deadline_block:\
now=276.228295889 deadline=276.258262555
remaining_runtime=1996
The task b-1611 blocked waiting for an external event. Its deadline is at
276.258262555, and it still has 1996 ns of remaining runtime on the
current period.
Signed-off-by: Daniel Bristot de
From: "Steven Rostedt (Red Hat)"
To have nanosecond output displayed in a more human readable format, it's
nicer to convert it to a seconds format (XXX.Y). The problem is that
to do so, the numbers must be divided by NSEC_PER_SEC, and the remainder
taken with a modulo too. But as
these numbers
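The divide-and-mod conversion can be sketched as follows; this is an illustrative model, not the tools/lib code, which must avoid 64-bit division helpers:

```python
NSEC_PER_SEC = 1_000_000_000

def ns_to_secs(ns):
    """Format a nanosecond timestamp as SSS.NNNNNNNNN: divide by
    NSEC_PER_SEC for the seconds, take the modulo for the
    zero-padded fractional part."""
    secs, rem = divmod(ns, NSEC_PER_SEC)
    return f"{secs}.{rem:09d}"
```

For instance, the timestamp 276228295889 ns renders as 276.228295889, the now= format seen in the sched_deadline_block example elsewhere in this series.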
On 02/26/2016 10:54 AM, Sebastian Andrzej Siewior wrote:
> - trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
> + trace_preempt_off(CALLER_ADDR0, get_lock_parent_ip());
If !lock_functions(CALLER_ADDR0), the start/stop_critical_timing() will
be called with
On 02/15/2016 08:18 AM, Juri Lelli wrote:
> Do you think we could also skip some of the
> following updates/accounting in this case? Not sure we win anything by
> doing that, though.
I reviewed rostedt's patch and the following updates/accounting
operations. I agree with rostedt's patch, and
ization. It can
also cause the system to hang.
This patch adds the quirk for this device, which causes the delay
to disappear. It is named as "USB Keykoard2" because the "USB Keykoard"
already exists.
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
---
On 03/29/2016 12:16 PM, Peter Zijlstra wrote:
>> +trace_sched_deadline_yield(>curr->dl);
ouch, it should be trace_sched_deadline_yield(dl_se). It works
as is, but it is really very sad, my bad, sorry.
>> >dl_se->dl_throttled = 1;
>> > +
On 03/29/2016 04:09 PM, Moore, Robert wrote:
> Actually, I did in fact put that there to break up the output after the
> tables are loaded. Is this a problem?
Well, I do not believe that there is a real problem with it.
On the other hand, it does not seem to be common to have blank lines in
the
On 03/29/2016 12:57 PM, Steven Rostedt wrote:
> Peter Zijlstra <pet...@infradead.org> wrote:
>
>> > On Mon, Mar 28, 2016 at 01:50:51PM -0300, Daniel Bristot de Oliveira wrote:
>>> > > @@ -733,7 +738,9 @@ static void update_curr_dl(struct
On 03/29/2016 02:13 PM, Steven Rostedt wrote:
>> -0 [007] d..3 78377.688969: sched_switch:
>> prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=b
>> next_pid=18973 next_prio=-1
>> >b-18973 [007] d..3 78377.688979: sched_deadline_block:
>> >
On 03/29/2016 05:29 PM, Steven Rostedt wrote:
>>> Yes, we don't want to get rid of the old one. But it shouldn't break
>>> > > anything if we extend it. I'm thinking of extending it with a dynamic
>>> > > array to store the deadline task values (runtime, period). And for non
>>> > > deadline
:
Cleanup in the sched:sched_deadline_yield tracepoint
Fix compilation warning on 32-bit Intel
Daniel Bristot de Oliveira (2):
sched: Move deadline container_of() helper functions into sched.h
sched/deadline: Tracepoints for deadline scheduler
Steven Rostedt (Red Hat) (1):
tracing: Add
Move dl_task_of(), dl_rq_of_se() and rq_of_dl_rq() helper functions
from kernel/sched/deadline.c to kernel/sched/sched.h, so they
can be used on other scheduler files.
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
diff --git a/kernel/sched/deadline.c b/kernel/sched/dead
ned-off-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
index 170c93b..e9c3f93 100644
--- a/include/trace/trace_events.h
+++ b/include/trace/trace_events.
20160108
ACPI: 2 ACPI AML tables successfully acquired and loaded
Security Framework initialized
Kernel log after this patch:
ACPI: Core revision 20160108
ACPI: 2 ACPI AML tables successfully acquired and loaded
Security Framework initialized
Signed-off-
On 04/19/2016 11:34 AM, Steven Rostedt wrote:
> This code adds the event-fork option that, when set, will have tasks
> with their PIDs in set_event_pid add their children PIDs when they
> fork. It will also remove their PID from the file on exit.
That is a nice feature! I tested it and it works.
d by tracing_cpumask.
Hi!
I tested this patchset in a system in which I can cause SMIs. The results
are consistent with the latency I see when I run cyclictest on this box
and cause SMIs on it. The tracer will be more accurate, as expected. So:
Tested-by: Daniel Bristot de Oliveira <bris...@redhat.com>
R
On 07/06/2016 10:53 AM, Julien Desfossez wrote:
>> But still, it's a
>> > rather hefty tracepoint (lots of fields), probably want to keep from
>> > adding comm too.
> Yes, I agree we can remove the comm field, it is easy to get from the
> previous sched_switch.
>
Sorry for the delay. I did like
On 07/01/2016 08:44 PM, Daniel Bristot de Oliveira wrote:
> This patch series fixes a problem on printk:console tracepoint
> that prints a blank line in the trace output after each printk
> message that finishes with '\n'.
>
> It also does some cleanup on __get_str() usage, that
On 06/03/2016 05:10 PM, Daniel Bristot de Oliveira wrote:
> Currently, a schedule while atomic error prints the stack trace to the
> kernel log and the system continue running.
>
> Although it is possible to collect the kernel log messages and analyze
> it, often more informa
er:17 ts:1470352543.990077507
>
> Signed-off-by: Steven Rostedt <rost...@goodmis.org>
It worked fine in a system in which I can manually cause SMIs (by turning
the keyboard's backlight on and off).
Tested-by: Daniel Bristot de Oliveira <bris...@redhat.com>
-- Daniel
__get_str(msg) no longer needs the (char *) cast to access
msg's elements. This patch replaces the
((char *)__get_str(msg))[0] usage with __get_str(msg)[0].
It is just a code cleanup, no changes on tracepoint ABI.
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.
This patch series fixes a problem on printk:console tracepoint
that prints a blank line in the trace output after each printk
message that finishes with '\n'.
It also does some cleanup on __get_str() usage, that
was found while fixing the printk:console tracepoint.
Daniel Bristot de Oliveira (4
, idProduct=02d5
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
Reviewed-by: Steven Rostedt <rost...@goodmis.org>
Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: linux-kernel@vger.kernel.org
---
include/trace/events/print
__get_str(str)'s definition includes a (char *) operator
overloading that is not protected with outer ().
This patch adds () around __get_str()'s definition, enabling
some code cleanup.
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Daniel Bristot de Oliveira
Use __get_str(str) rather than __get_dynamic_array(str) when
dealing with strings.
It is just a code cleanup, no changes on tracepoint ABI.
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
Reviewed-by: Steven R
ment took place in the wrong instant, the next replenishment
will also occur at a wrong instant of time. Rather than occurring
in the nth period away from the first activation, it is taking place
at (nth period - relative deadline).
Signed-off-by: Daniel Bristot de Oliveira <bris...@
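The off-by-relative-deadline error the changelog describes can be illustrated numerically; the function and the values below are made up for illustration:

```python
def replenish_times(t0, period, rel_deadline, n):
    """The nth replenishment should fire n periods after the first
    activation t0; the buggy behavior lands relative-deadline
    earlier than that. Returns (correct, buggy) instants."""
    correct = t0 + n * period
    buggy = t0 + n * period - rel_deadline
    return correct, buggy
```

With period 2000, relative deadline 500, and first activation at 0, the 3rd replenishment should fire at 6000 but, with the bug, fires at 5500: every replenishment is shifted one relative deadline early.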
lem is explained in the fix
description as well.
Daniel Bristot de Oliveira (2):
sched/deadline: Replenishment timer should fire in the next period
sched/deadline: Throttle a constrained deadline task activated after
the deadline
kernel/sched/deadline.c |
nanosleep(, NULL);
}
exit(0);
}
--- >% ---
On my box, this reproducer uses almost 50% of the CPU time, which is
obviously wrong for a task with 2/2000 reservation.
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
Cc: Ingo M
On 02/14/2017 04:54 PM, Tommaso Cucinotta wrote:
> On 13/02/2017 20:05, Daniel Bristot de Oliveira wrote:
>> To avoid this problem, in the activation of a constrained deadline
>> task after the deadline but before the next period, throttle the
>> task and set the replenishi
On 02/15/2017 01:59 PM, Juri Lelli wrote:
> Actually, another thing that we noticed, talking on IRC with Peter, is
> that we seem to be replenishing differently on different occasions:
When a task is awakened (not by the replenishment timer), it is not
possible to know if the absolute deadline
On 02/15/2017 02:33 PM, Daniel Bristot de Oliveira wrote:
> dl_se->deadline = dl_se->deadline += pi_se->dl_period;
Oops, it should be:
dl_se->deadline = dl_se->deadline + pi_se->dl_period;
On 02/13/2017 04:33 PM, Steven Rostedt wrote:
>> +static inline bool dl_is_constrained(struct sched_dl_entity *dl_se)
>> +{
>> +return dl_se->dl_runtime < dl_se->dl_period;
>> +}
>> +
> Is it ever appropriate for a dl task to have runtime == period? What
> purpose would that serve? Just run
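The corrected predicate, as the later revision of the series describes it, amounts to the following; this is a sketch of the logic, not the kernel code:

```python
def dl_is_constrained(dl_deadline, dl_period):
    """A constrained deadline task has its relative deadline shorter
    than its period. The quoted version compared dl_runtime instead
    of dl_deadline, which the v2 changelog corrects
    (s/runtime/deadline/)."""
    return dl_deadline < dl_period
```

An implicit-deadline task (dl_deadline == dl_period) is not constrained and keeps the existing wakeup path.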
On 02/13/2017 04:46 PM, Daniel Bristot de Oliveira wrote:
> On 02/13/2017 04:33 PM, Steven Rostedt wrote:
>>> +static inline bool dl_is_constrained(struct sched_dl_entity *dl_se)
>>> +{
>>> + return dl_se->dl_runtime < dl_se->dl_period;
>>>
lem is explained in the fix
description as well.
Changes from V1:
- Fix a broken comment style.
- Fixes dl_is_constrained():
A constrained deadline task has dl_deadline < dl_period, not
"dl_runtime < dl_period"; so s/runtime/deadline/
Daniel Bristot de Oliveira (2):
sched/dead
odmis.org>
> Reviewed-by: Juri Lelli <juri.le...@arm.com>
+1
Reviewed-by: Daniel Bristot de Oliveira <bris...@redhat.com>
-- Daniel
On 02/13/2017 12:12 PM, Peter Zijlstra wrote:
> On Fri, Feb 10, 2017 at 08:48:11PM +0100, Daniel Bristot de Oliveira wrote:
>> +/* During the activation, CBS checks if it can reuse the current task's
>> + * runtime and period. If the deadline of the task is in the past, CBS
> Br
eriod"; s/runtime/deadline/
Daniel Bristot de Oliveira (2):
sched/deadline: Replenishment timer should fire in the next period
sched/deadline: Throttle a constrained deadline task activated after
the deadline
Steven Rostedt (VMware) (1):
sched/deadline: Use deadline instead of p
en when the runtime and deadline are not the same.
Signed-off-by: Steven Rostedt (VMware) <rost...@goodmis.org>
Reviewed-by: Daniel Bristot de Oliveira <bris...@redhat.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Juri Lelli <juri.le...@arm.com&g
n "we
> have a top pi-waiter which is a SCHED_DEADLINE task" in that order. Also fix a
> typo that follows.
>
> Cc: Juri Lelli <juri.le...@arm.com>
> Signed-off-by: Joel Fernandes <joe...@google.com>
Reviewed-by: Daniel Bristot de Oliveira <bris...@redhat.com>
-- Daniel
On 02/28/2017 11:07 AM, Daniel Bristot de Oliveira wrote:
> + if (!pi_se->dl_throttled && dl_is_constrained(pi_se))
> + dl_check_constrained_dl(pi_se);
> +
me--, it should be >dl, not pi_se. This is not causing problems in
the test case because pi_s
task has dl_deadline < dl_period; so
"dl_runtime < dl_period"; s/runtime/deadline/
Daniel Bristot de Oliveira (2):
sched/deadline: Replenishment timer should fire in the next period
sched/deadline: Throttle a constrained deadline task activated after
the deadline
S
On 09/09/2016 09:24 AM, luca abeni wrote:
> Ok, but the task is still throttled, right?
I see your point, but... it is important to keep the documentation in sync
with the code, and the code/explanation can be simpler now... :-)
-- Daniel
On 09/09/2016 09:38 AM, luca abeni wrote:
> Then, the "since the remaining runtime goes to 0" part of my suggestion
> is wrong and the sentence should be rephrased in some other way.
>
> Or am I misunderstanding what you are saying
Ack, maybe I was not precise enough, sorry... I was just talking
On 09/09/2016 07:00 AM, luca abeni wrote:
> Maybe instead of saying that the task is suspended you can say that
> since the remaining runtime goes to 0 the task is immediately throttled,
> and will be able to execute again only after the time is equal to the
> scheduling deadline (as explained in
On 11/07/2016 07:30 PM, Steven Rostedt wrote:
>> I'm still reviewing the patch, but I have to wonder why bother with making
>> it a scheduler feature?
>> >
>> > The SCHED_FIFO definition allows a fifo thread to starve others
>> > because a fifo task will run until it yields. Throttling was
On 11/08/2016 07:05 PM, Peter Zijlstra wrote:
>> >
>> > I know what we want to do, but there's some momentous problems that
>> > need to be solved first.
> Like what?
The problem is that using RT_RUNTIME_SHARE a CPU will almost always
borrow enough runtime to make a CPU intensive rt task to
ed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
Reviewed-by: Steven Rostedt <rost...@goodmis.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Christoph Lameter <c...@lin
On 11/08/2016 08:50 PM, Peter Zijlstra wrote:
>> The problem is that using RT_RUNTIME_SHARE a CPU will almost always
>> > borrow enough runtime to make a CPU intensive rt task to run forever...
>> > well not forever, but until the system crash because a kworker starved
>> > in this CPU. Kworkers
Hi Tommaso,
On 11/07/2016 11:31 AM, Tommaso Cucinotta wrote:
> as anticipated live to Daniel:
> -) +1 for the general concept, we'd need something similar also for
> SCHED_DEADLINE
In summary: the sum of the runtimes of deadline tasks will not be greater
than the "to_ratio(global_rt_period(),
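The admission test being summarized can be sketched as follows. to_ratio() mirrors the kernel's fixed-point ratio (BW_SHIFT = 20); the 95% default limit reflects the usual sched_rt_period_us/sched_rt_runtime_us sysctl values, and the rest is an illustrative model, not the kernel code:

```python
def to_ratio(period, runtime):
    """Kernel-style fixed-point bandwidth: runtime/period scaled by
    2^20, as used by the deadline admission control."""
    return (runtime << 20) // period

def dl_admission_ok(tasks, rt_period_us=1_000_000, rt_runtime_us=950_000):
    """Accept a set of (runtime, period) reservations only if their
    summed bandwidth stays within to_ratio(global period, runtime)."""
    limit = to_ratio(rt_period_us, rt_runtime_us)
    total = sum(to_ratio(period, runtime) for (runtime, period) in tasks)
    return total <= limit
```

A single 2/2000 reservation (0.1% bandwidth) is trivially admitted; a 100% reservation exceeds the 95% default and is rejected.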
in the local CPU.
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Christoph Lameter <c...@linux.com>
Cc: Tommaso Cucinotta <tom
In the comment:
/*
* The task might have changed its scheduling policy to something
* different than SCHED_DEADLINE (through switched_fromd_dl()).
*/
s/switched_fromd_dl/switched_from_dl/
Signed-off-by: Daniel Bristot de Oliveira <bris...@redhat.com>
Cc