On Wed, Jan 09, 2019 at 11:20:50PM +0100, Heiner Kallweit wrote:
> On 28.12.2018 07:39, Heiner Kallweit wrote:
> > On 28.12.2018 07:34, Heiner Kallweit wrote:
> >> On 28.12.2018 02:31, Frederic Weisbecker wrote:
> >>> On Fri, Dec 28, 2018 at 12:11:12A
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Ingo Molnar
---
kernel/locking/lockdep.c | 33 +++--
kernel/locking/lockdep_internals.h | 4
2 files changed, 15 insertions(+), 22 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
Just a few simplifications and some code cleanup.
Frederic Weisbecker (2):
locking/lockdep: Simplify mark_held_locks()
locking/lockdep: Provide enum lock_usage_bit mask names
kernel/locking/lockdep.c | 54 +-
kernel/locking/lockdep_internals.h | 4
The enum mark_type appears a bit artificial here. We can directly pass
the base enum lock_usage_bit value to mark_held_locks(). All we need
then is to add the read index for each lock if necessary. It makes the
code clearer.
Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Ingo Molnar
On Fri, Dec 28, 2018 at 12:11:12AM +0100, Heiner Kallweit wrote:
>
> OK, did as you advised and here comes the trace. That's the related dmesg
> part:
>
> [ 1479.025092] x86: Booting SMP configuration:
> [ 1479.025129] smpboot: Booting Node 0 Processor 1 APIC 0x2
> [ 1479.094715] NOHZ:
On Mon, Oct 15, 2018 at 10:58:54PM +0200, Heiner Kallweit wrote:
> On 28.09.2018 15:18, Frederic Weisbecker wrote:
> > On Thu, Sep 27, 2018 at 06:05:46PM +0200, Thomas Gleixner wrote:
> >> On Tue, 28 Aug 2018, Frederic Weisbecker wrote:
> >>> On Fri, Aug 24, 20
On Mon, Nov 26, 2018 at 05:11:03PM +0100, Peter Zijlstra wrote:
> On Mon, Nov 26, 2018 at 04:53:54PM +0100, Frederic Weisbecker wrote:
> > > > + irq_work_queue_on(_cpu(vtime_set_nice_work, cpu), cpu);
> > >
> > > What happens if you already had
On Tue, Nov 20, 2018 at 03:17:54PM +0100, Peter Zijlstra wrote:
> On Wed, Nov 14, 2018 at 03:46:03AM +0100, Frederic Weisbecker wrote:
> > On the vtime level, nice updates are currently handled on context
> > switches. When a task's nice value gets updated while it is sleeping,
On Tue, Sep 18, 2018 at 03:22:13PM +0200, Jan H. Schönherr wrote:
> On 09/17/2018 11:48 AM, Peter Zijlstra wrote:
> > Right, so the whole bandwidth thing becomes a pain; the simplest
> > solution is to detect the throttle at task-pick time, dequeue and try
> > again. But that is indeed quite
On Thu, Sep 27, 2018 at 11:36:34AM -0700, Subhra Mazumdar wrote:
>
>
> On 09/26/2018 02:58 AM, Jan H. Schönherr wrote:
> >On 09/17/2018 02:25 PM, Peter Zijlstra wrote:
> >>On Fri, Sep 14, 2018 at 06:25:44PM +0200, Jan H. Schönherr wrote:
> >>
> >>>Assuming, there is a cgroup-less solution that
On Wed, Nov 21, 2018 at 09:18:19AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 20, 2018 at 11:40:22PM +0100, Frederic Weisbecker wrote:
> > On Tue, Nov 20, 2018 at 03:23:06PM +0100, Peter Zijlstra wrote:
> > > On Wed, Nov 14, 2018 at 03:46:04AM +0100, Frederic Weisbecker wrote:
>
On Tue, Nov 20, 2018 at 03:23:06PM +0100, Peter Zijlstra wrote:
> On Wed, Nov 14, 2018 at 03:46:04AM +0100, Frederic Weisbecker wrote:
>
> > +void kcpustat_cputime(struct kernel_cpustat *kcpustat, int cpu,
> > + u64 *user, u64 *nice, u64 *system,
> > +
On Tue, Nov 20, 2018 at 03:24:22PM +0100, Peter Zijlstra wrote:
> On Wed, Nov 14, 2018 at 03:46:05AM +0100, Frederic Weisbecker wrote:
> > /* Copy values here to work around gcc-2.95.3, gcc-2.96 */
>
> Just a note to let you know the current minimum GCC version i
On Mon, Nov 19, 2018 at 01:39:02PM -0800, syzbot wrote:
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit: bae4e109837b mlxsw: spectrum: Expose discard counters via ..
> git tree: net-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=11b5e77b40
>
source than the task passed as a parameter when accounting time.
So allow the callers of account_user/guest_time() to
pass custom kcpustat destination index fields.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc
Now that we have a vtime safe kcpustat accessor, use it to fix frozen
kcpustat values on nohz_full CPUs.
Reported-by: Yauheni Kaliuta
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
drivers
This function deals with the previous and next tasks during a context
switch. But only the previous is passed as an argument, the next task
being deduced from current. Make the code clearer by passing both
previous and next as arguments.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc
This allows us to check if a remote CPU runs context tracking
(ie: is nohz_full). We'll need that to reliably support "nice"
accounting on kcpustat.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: I
task_group_account_field() parameters to comply with those of
account_*_time_index().
* Rename task_group_account_field()'s tmp parameter to cputime
* Specify the type of index in task_group_account_field(): enum cpu_usage_stat
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik
Remove the superfluous "is" in the middle of the name. We want to
standardize the naming so that it can be expanded through suffixes:
context_tracking_enabled()
context_tracking_enabled_cpu()
context_tracking_enabled_this_cpu()
Signed-off-by: Frederic Weis
-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/kernel_stat.h | 25 +
kernel/sched/cputime.c | 90 +
2 files changed, 115 insertions
Record guest as a VTIME state instead of guessing it from VTIME_SYS and
PF_VCPU. This is going to simplify the cputime read side especially as
its state machine is going to further expand in order to fully support
kcpustat on nohz_full.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc
Now that we have a vtime safe kcpustat accessor, use it to fix frozen
kcpustat values on nohz_full CPUs.
Reported-by: Yauheni Kaliuta
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
fs/proc
Standardize the naming on top of the vtime_accounting_enabled_*() base.
Also make it clear we are checking the vtime state of the
*current* CPU with this function. We'll need to add an API to check that
state on remote CPUs as well, so we must disambiguate the naming.
Signed-off-by: Frederic
in place.
The vtime update in question consists in flushing the pending vtime
delta to the task/kcpustat and resuming the accounting on top of the
new nice value.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo
This allows us to check if a remote CPU runs vtime accounting
(ie: is nohz_full). We'll need that to reliably support "nice"
accounting on kcpustat.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: I
This function is a leftover from an old removal or rename. We can drop it.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/context_tracking_state.h | 1 -
1 file changed, 1
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/kernel_stat.h | 1 +
kernel/sched/cputime.c | 11 ++-
2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/include
that we shouldn't track the exiting task any further.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/sched.h | 2 ++
include/linux/vtime.h | 2 ++
kernel/exit.c | 1
beginning
(T0) and spuriously account 1 second nice time on kcpustat instead of 1
second user time.
So we need to track the nice value changes under vtime seqcount. Start
with context switches and account the vtime nice-ness on top of it.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/context_tracking.h | 2 +-
include/linux/context_tracking_state.h | 4 ++--
include/linux/vtime.h | 2 +-
3 files changed, 4
Record idle as a VTIME state instead of guessing it from VTIME_SYS and
is_idle_task(). This is going to simplify the cputime read side
especially as its state machine is going to further expand in order to
fully support kcpustat on nohz_full.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni
://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
nohz/kcpustat
HEAD: c7c45c06334346f62dbbf7bb12e2a8ab954532e5
Thanks,
Frederic
---
Frederic Weisbecker (25):
sched/vtime: Fix guest/system mis-accounting on task switch
sched/vtime: Protect idle accounting
if the target is running in
guest mode. We may then spuriously account or leak either system or
guest time on task switch.
Fix this assumption and also turn vtime_guest_enter/exit() to use the
task passed as a parameter as well, to avoid similar issues in the future.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni
ing kernel cputime. Whether it belongs to guest or system time
is a lower level detail.
Rename this function to vtime_account_kernel(). This will clarify things
and avoid too many underscored vtime_account_system() versions.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
C
artificial
code factorization so split the task switch code between GEN and
NATIVE and share the parts that can run under a single seqcount
locked block.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo
moving forward on full nohz CPUs.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Wanpeng Li
Cc: Ingo Molnar
---
include/linux/sched.h | 1 +
kernel/sched/cputime.c | 3 +++
2 files changed, 4 insertions(+)
diff --git
readers use the
traditional ad-hoc nohz time delta. We may want to consider moving
readers to use vtime to consolidate the overall accounting scheme. The
seqcount will be a functional requirement for it.
Signed-off-by: Frederic Weisbecker
Cc: Yauheni Kaliuta
Cc: Thomas Gleixner
Cc: Rik van Riel
Cc
On Fri, Oct 19, 2018 at 11:16:49AM -0400, Rik van Riel wrote:
> On Fri, 2018-10-19 at 13:40 +0200, Jan H. Schönherr wrote:
> >
> > Now, it would be possible to "invent" relocatable cpusets to address
> > that
> > issue ("I want affinity restricted to a core, I don't care which"),
> > but
> >
On Fri, Oct 19, 2018 at 01:40:03PM +0200, Jan H. Schönherr wrote:
> On 17/10/2018 04.09, Frederic Weisbecker wrote:
> > On Fri, Sep 07, 2018 at 11:39:47PM +0200, Jan H. Schönherr wrote:
> >> C) How does it work?
> >>
> [...]
> >> F
On Fri, Sep 07, 2018 at 11:39:47PM +0200, Jan H. Schönherr wrote:
> C) How does it work?
>
>
> This patch series introduces hierarchical runqueues that represent larger
> and larger fractions of the system. By default, there is one runqueue per
> scheduling domain. These
On Tue, Oct 16, 2018 at 04:03:59PM -0600, Jonathan Corbet wrote:
> On Thu, 11 Oct 2018 01:11:47 +0200
> Frederic Weisbecker wrote:
>
> > 945 files changed, 13857 insertions(+), 9767 deletions(-)
>
> Impressive :)
In the wrong way :)
>
> I have to ask a du
On Mon, Oct 15, 2018 at 10:28:44PM -0700, Joel Fernandes wrote:
> > diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
> > index f8ec3d4..490358c 100644
> > --- a/crypto/pcrypt.c
> > +++ b/crypto/pcrypt.c
> > @@ -73,12 +73,13 @@ struct pcrypt_aead_ctx {
> > static int pcrypt_do_parallel(struct
Hi Pavan,
On Tue, Oct 16, 2018 at 09:45:52AM +0530, Pavan Kondeti wrote:
> Hi Frederic,
>
> On Thu, Oct 11, 2018 at 01:12:16AM +0200, Frederic Weisbecker wrote:
> > From: Frederic Weisbecker
> >
> > Make do_softirq() re-entrant and allow a vector, being eithe
cpuidle_enter_state
=> cpuidle_enter
=> call_cpuidle
=> do_idle
So that's enough to start a debate.
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
irq/softirq-experimental
HEAD: 84e064f678eb06d0da3e97f04eced4cfb55866ba
Thanks,
Frederic
);
+ bh = diva_os_enter_spin_lock(e, e1, e2);
...
- diva_os_leave_spin_lock(e, e1, e2);
+ diva_os_leave_spin_lock(e, e1, e2, bh);
...
}
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Cc: Peter Zijlstra
From: Frederic Weisbecker
This pair of functions is implemented on top of spin_[un]lock_bh() that
is going to handle a softirq mask in order to apply finegrained vector
disablement. The lock function is going to return the previous vectors
enabled mask prior to the last call to local_bh_disable
on __local_bh_enable_ip().
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: David S. Miller
Cc: Mauro Carvalho Chehab
Cc: Paul E. McKenney
---
include/linux/bottom_half.h | 19
able(bh1) {
local_bh_disabled() = bh;
preempt_count -= SOFTIRQ_DISABLE_OFFSET;
}
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: David S. Miller
Cc: Mauro Carvalho Chehab
From: Frederic Weisbecker
This pair of functions is implemented on top of __local_bh_disable_ip()
that is going to handle a softirq mask in order to apply finegrained
vector disablement. The lock function is going to return the previous
vectors enabled mask prior to the last call
From: Frederic Weisbecker
Tasklets and net-rx vectors don't quite get along. If one is interrupted
by another, we may run into a nasty spin_lock recursion:
[ 135.427198] Call Trace:
[ 135.429650]
[ 135.431690] dump_stack+0x67/0x95
[ 135.435024] spin_bug
From: Frederic Weisbecker
Make do_softirq() re-entrant and allow a vector, being either processed
or disabled, to be interrupted by another vector. This way a vector
won't be able to monopolize the CPU for a long while at the expense of
the others that may rely on some predictable latency
From: Frederic Weisbecker
Disable a vector while it is being processed. This prepares for softirq
re-entrancy with an obvious single constraint: a vector can't be
interrupted by itself.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Cc