less frequently.
Signed-off-by: Rik van Riel
---
kernel/sched/fair.c | 25 -
1 file changed, 8 insertions(+), 17 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c6ede2ecc935..35153a89d5c5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
nce it no longer has children
on the list, we can avoid walking the sched_entity hierarchy if the bottom
cfs_rq is on the list, once the runqueues have been flattened.
Signed-off-by: Rik van Riel
---
kernel/sched/fair.c | 17 +
kernel/sched/sched.h | 1 +
2 files changed, 18 insertions(+)
Signed-off-by: Rik van Riel
---
include/linux/sched.h | 2 +
kernel/sched/fair.c | 478 +-
kernel/sched/pelt.c | 6 +-
kernel/sched/pelt.h | 2 +-
kernel/sched/sched.h | 2 +-
5 files changed, 194 insertions(+), 296 deletions(-)
diff --git a/include
Sometimes the hierarchical load of a sched_entity needs to be calculated.
Split out task_h_load into a task_se_h_load that takes a sched_entity pointer
as its argument, and a task_h_load wrapper that calls task_se_h_load.
No functional changes.
Signed-off-by: Rik van Riel
---
kernel/sched
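The split described above can be sketched in plain C. This is a hedged userspace illustration only: the struct layouts and field names are invented mocks, not the real kernel types, and the real helper does more work than returning a field.

```c
#include <assert.h>

/* Minimal mock types; the real kernel structs are far larger. */
struct sched_entity { unsigned long h_load; };
struct task_struct { struct sched_entity se; };

/* The work lives in a sched_entity-based helper... */
static unsigned long task_se_h_load(struct sched_entity *se)
{
	return se->h_load;	/* stand-in for the real h_load computation */
}

/* ...and the old task-based name becomes a thin wrapper around it. */
static unsigned long task_h_load(struct task_struct *p)
{
	return task_se_h_load(&p->se);
}

static unsigned long demo_h_load(void)
{
	struct task_struct p = { .se = { .h_load = 42 } };
	return task_h_load(&p);
}
```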
Use an explicit "cfs_rq of parent sched_entity" helper in a few
strategic places, where cfs_rq_of(se) may no longer point at the
right runqueue once we flatten the hierarchical cgroup runqueues.
No functional change.
Signed-off-by: Rik van Riel
---
kernel/sched/fair.c | 17 ++
runqueue for the CPU controller.
Signed-off-by: Rik van Riel
---
include/linux/sched.h | 3 +-
kernel/sched/core.c | 2 -
kernel/sched/debug.c | 1 +
kernel/sched/fair.c | 125 +-
kernel/sched/pelt.c | 49 ++---
kernel/sched/sched.h
Remove some fields from /proc/sched_debug that are removed from
sched_entity in a subsequent patch, and add h_load, which comes in
very handy to debug CPU controller weight distribution.
Signed-off-by: Rik van Riel
---
kernel/sched/debug.c | 11 ++-
1 file changed, 2 insertions(+), 9 deletions(-)
The current implementation of the CPU controller uses hierarchical
runqueues, where on wakeup a task is enqueued on its group's runqueue,
the group is enqueued on the runqueue of the group above it, etc.
This adds a fairly large amount of overhead for workloads that
do a lot of wakeups a
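The per-wakeup cost of hierarchical runqueues can be sketched with a toy model (a hedged userspace illustration; the struct and loop are invented, not kernel code): enqueueing a task touches one runqueue per cgroup nesting level.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a sched_entity with a parent pointer. */
struct entity { struct entity *parent; int on_rq; };

/* One enqueue operation per level of the cgroup hierarchy. */
static int enqueue_hierarchy(struct entity *se)
{
	int levels = 0;

	for (; se; se = se->parent) {
		se->on_rq = 1;	/* stand-in for the real enqueue work */
		levels++;
	}
	return levels;
}

static int demo_depth3(void)
{
	struct entity root = { NULL, 0 };
	struct entity mid  = { &root, 0 };
	struct entity leaf = { &mid, 0 };

	/* A task three cgroup levels deep costs three enqueues. */
	return enqueue_hierarchy(&leaf);
}
```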
On Wed, 2019-05-29 at 12:54 +0800, Zhenzhong Duan wrote:
> Hi Maintainers,
>
> A question came up when I studied the code below. I would
> appreciate any help understanding it.
>
> void native_flush_tlb_others(const struct cpumask *cpumask,
> const struct flush_tlb_info
On Mon, 2019-05-27 at 07:21 +0100, Dietmar Eggemann wrote:
> This is done to align the per cpu (i.e. per rq) load with the util
> counterpart (cpu_util(int cpu)). The term 'weighted' is not needed
> since there is no 'unweighted' load to distinguish it from.
I can see why you want to make
On Mon, 2019-05-27 at 07:21 +0100, Dietmar Eggemann wrote:
> Since sg_lb_stats::sum_weighted_load is now identical with
> sg_lb_stats::group_load remove it and replace its use case
> (calculating load per task) with the latter.
>
> Signed-off-by: Dietmar Eggemann
Acked-by: Rik van Riel
On Mon, 2019-05-27 at 07:21 +0100, Dietmar Eggemann wrote:
> The sched domain per rq load index files also disappear from the
> /proc/sys/kernel/sched_domain/cpuX/domainY directories.
>
> Signed-off-by: Dietmar Eggemann
Acked-by: Rik van Riel
On Mon, 2019-05-27 at 07:21 +0100, Dietmar Eggemann wrote:
> The per rq load array values also disappear from the cpu#X sections
> in
> /proc/sched_debug.
>
> Signed-off-by: Dietmar Eggemann
Acked-by: Rik van Riel
ed before that close parenthesis ')'
>
> Signed-off-by: Dietmar Eggemann
Acked-by: Rik van Riel
--
All Rights Reversed.
On Mon, 2019-05-27 at 07:21 +0100, Dietmar Eggemann wrote:
> With LB_BIAS disabled, there is no need to update the
> rq->cpu_load[idx] any more.
>
> Signed-off-by: Dietmar Eggemann
Acked-by: Rik van Riel
_idx(), can be removed as well.
>
> Finally, get rid of the sched feature LB_BIAS.
>
> Signed-off-by: Dietmar Eggemann
Acked-by: Rik van Riel
On Wed, 2019-05-22 at 16:49 +0200, Peter Zijlstra wrote:
> On Wed, May 22, 2019 at 03:37:11PM +0100, Andrew Murray wrote:
> > > Is perhaps the problem that on_each_cpu_cond() uses
> > > cpu_onlne_mask
> > > without protection?
> >
> > Does this prevent racing with a CPU going offline? I guess
w users of kvm_make_request to overflow the vcpu.requests bitmask,
and is confusing to developers examining the code.
Redefine KVM_REQUEST_MASK to reflect the number of bits that actually
fit inside an unsigned long, and add a comment explaining set_bit and
friends take bit numbers, not a bitmask.
Signed-off-by:
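The bit-number versus bitmask distinction the comment is about can be shown with a small userspace sketch. The macro and helper names below are invented to mirror, not reproduce, the kernel's bitops; the point is that set_bit-style helpers take a number and derive the mask internally.

```c
#include <assert.h>

#define BIT(nr) (1UL << (nr))

/* Takes a bit NUMBER; the number-to-mask conversion happens inside. */
static void set_bit_nr(int nr, unsigned long *addr)
{
	*addr |= BIT(nr);
}

static int test_bit_nr(int nr, const unsigned long *addr)
{
	return (*addr >> nr) & 1UL;
}

static int demo_set_and_test(void)
{
	unsigned long reqs = 0;

	set_bit_nr(3, &reqs);	/* pass the bit number, not BIT(3) */
	return test_bit_nr(3, &reqs) && !test_bit_nr(4, &reqs);
}
```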
On Thu, 2019-05-09 at 07:32 +1000, Dave Chinner wrote:
> Hmmm, the first wakeup in xsdc is this one, right:
>
> /* wake up threads waiting in xfs_log_force() */
> wake_up_all(&iclog->ic_force_wait);
>
> At the end of the iclog iteration loop? That one is under the
>
On Wed, 2019-05-08 at 07:22 +1000, Dave Chinner wrote:
> On Tue, May 07, 2019 at 01:05:28PM -0400, Rik van Riel wrote:
> > The code in xlog_wait uses the spinlock to make adding the task to
> > the wait queue, and setting the task state to UNINTERRUPTIBLE
> > atomic
> >
the l_icloglock
is already used inside xlog_wait; it is unclear why the waker was doing
things differently.
Signed-off-by: Rik van Riel
Reported-by: Chris Mason
diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index c3b610b687d1..8b9be76b2412 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
On Sat, 2019-05-04 at 04:28 +0200, Sebastian Gottschall wrote:
> Using fpu code in kernel space in a kernel module is a derived work
> of
> the kernel itself?
> dont get me wrong, but this is absurd. i mean you limit the use of
> cpu
> instructions. the use
> of cpu instructions should be free
Commit-ID: 5f409e20b794565e2d60ad333e79334630a6c798
Gitweb: https://git.kernel.org/tip/5f409e20b794565e2d60ad333e79334630a6c798
Author: Rik van Riel
AuthorDate: Wed, 3 Apr 2019 18:41:52 +0200
Committer: Borislav Petkov
CommitDate: Fri, 12 Apr 2019 19:34:47 +0200
x86/fpu: Defer FPU
Commit-ID: a352a3b7b7920212ee4c45a41500c66826318e92
Gitweb: https://git.kernel.org/tip/a352a3b7b7920212ee4c45a41500c66826318e92
Author: Rik van Riel
AuthorDate: Wed, 3 Apr 2019 18:41:47 +0200
Committer: Borislav Petkov
CommitDate: Thu, 11 Apr 2019 18:20:04 +0200
x86/fpu: Prepare
Commit-ID: 69277c98f5eef0d9839699b7825923c3985f665f
Gitweb: https://git.kernel.org/tip/69277c98f5eef0d9839699b7825923c3985f665f
Author: Rik van Riel
AuthorDate: Wed, 3 Apr 2019 18:41:46 +0200
Committer: Borislav Petkov
CommitDate: Thu, 11 Apr 2019 18:08:57 +0200
x86/fpu: Always store
Commit-ID: 0cecca9d03c964abbd2b7927d0670eb70db4ebf2
Gitweb: https://git.kernel.org/tip/0cecca9d03c964abbd2b7927d0670eb70db4ebf2
Author: Rik van Riel
AuthorDate: Wed, 3 Apr 2019 18:41:44 +0200
Committer: Borislav Petkov
CommitDate: Thu, 11 Apr 2019 15:57:10 +0200
x86/fpu: Eager switch
Commit-ID: 4ee91519e1dccc175665fe24bb20a47c6053575c
Gitweb: https://git.kernel.org/tip/4ee91519e1dccc175665fe24bb20a47c6053575c
Author: Rik van Riel
AuthorDate: Wed, 3 Apr 2019 18:41:38 +0200
Committer: Borislav Petkov
CommitDate: Wed, 10 Apr 2019 16:23:14 +0200
x86/fpu: Add
On Wed, 2019-04-10 at 18:43 -0700, Suren Baghdasaryan via Lsf-pc wrote:
> The time to kill a process and free its memory can be critical when
> the
> killing was done to prevent memory shortages affecting system
> responsiveness.
The OOM killer is fickle, and often takes a fairly
long time to
On Wed, 2019-04-10 at 16:11 -0700, Eric Biggers wrote:
> You've explained *what* it does again, but not *why*. *Why* do you
> want
> hardened usercopy to detect copies across page boundaries, when there
> is no
> actual buffer overflow?
When some subsystem in the kernel allocates multiple
pages
On Tue, 2019-02-26 at 15:04 +0300, Andrey Ryabinin wrote:
> I think we should leave anon aging only for !SCAN_FILE cases.
> At least aging was definitely invented for the SCAN_FRACT mode which
> was the
> main mode at the time it was added by the commit:
> and I think would be reasonable to
On Fri, 2019-02-22 at 20:43 +0300, Andrey Ryabinin wrote:
> workingset_eviction() doesn't use and never did use the @mapping
> argument.
> Remove it.
>
> Signed-off-by: Andrey Ryabinin
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Cc: Vlastimil Babka
> Cc: Rik van Riel
> Cc: Mel Gorman
Acked-by: Rik van Riel
On Fri, 2019-02-22 at 20:58 +0300, Andrey Ryabinin wrote:
> In the presence of more than one memory cgroup in the system, our
> reclaim logic just sucks. When we hit a memory limit (global or a
> limit on a cgroup with subgroups) we reclaim some memory from all
> cgroups. This sucks because the
On Mon, 2019-02-18 at 14:43 +0100, Greg Kroah-Hartman wrote:
> 4.20-stable review patch. If anyone has any objections, please let
> me know.
>
> --
>
> From: Dave Chinner
>
> commit a9a238e83fbb0df31c3b9b67003f8f9d1d1b6c96 upstream.
>
> This reverts commit 172b06c32b9497
On Mon, 2019-01-28 at 12:10 -0800, Andrew Morton wrote:
> On Mon, 28 Jan 2019 15:03:28 -0500 Rik van Riel
> wrote:
>
> > On Mon, 2019-01-28 at 11:54 -0800, Andrew Morton wrote:
> > > On Mon, 28 Jan 2019 14:35:35 -0500 Rik van Riel > > >
On Mon, 2019-01-28 at 11:54 -0800, Andrew Morton wrote:
> On Mon, 28 Jan 2019 14:35:35 -0500 Rik van Riel
> wrote:
>
> > /*
> > * Make sure we apply some minimal pressure on default priority
> > -* even on small cgroups. Stale objects are no
There are a few issues with the way the number of slab objects to
scan is calculated in do_shrink_slab. First, for zero-seek slabs,
we could leave the last object around forever. That could result
in pinning a dying cgroup into memory, instead of reclaiming it.
The fix for that is trivial.
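The zero-seek rounding problem can be sketched as follows. This is a hedged illustration with an invented helper, not the actual do_shrink_slab code: when the scan count is computed by shifting the freeable count down by the reclaim priority, small caches round down to zero and their last objects are never scanned.

```c
#include <assert.h>

/* Toy model: freeable >> priority rounds small caches down to zero,
 * so a dying cgroup's last few objects would be pinned forever.
 * For zero-seek slabs, fall back to scanning everything that is left. */
static long scan_count(long freeable, int priority, int zero_seek)
{
	long scan = freeable >> priority;

	if (zero_seek && scan == 0 && freeable > 0)
		scan = freeable;	/* reclaim the stragglers */
	return scan;
}
```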
On Thu, 2019-01-10 at 12:26 +1100, Dave Chinner wrote:
> On Wed, Jan 09, 2019 at 08:17:31PM +0530, Pankaj Gupta wrote:
> > This patch series implements "virtio pmem": fake persistent
> > memory (nvdimm) in the guest, which allows bypassing the guest
> > page cache. This
On Tue, 2019-01-08 at 21:36 -0800, Shakeel Butt wrote:
> On Tue, Jan 8, 2019 at 8:01 PM Rik van Riel wrote:
> >
> > There is an imbalance between when slab_pre_alloc_hook calls
> > memcg_kmem_get_cache and when slab_post_alloc_hook calls
> > memcg_kmem_put_cache.
e location to
another. I am still tagging that changeset, because the fix should
automatically apply that far back.
Signed-off-by: Rik van Riel
Fixes: 452647784b2f ("mm: memcontrol: cleanup kmem charge functions")
Cc: kernel-t...@fb.com
Cc: linux...@kvack.org
Cc: sta...@vger.kernel.org
Cc:
> this patch fixes. So, the same crash can happen if the memcg charge
> of
> a cached stack is failed.
>
> Fixes: 5eed6f1dff87 ("fork,memcg: fix crash in free_thread_stack on
> memcg charge fail")
> Signed-off-by: Shakeel Butt
> Cc: Rik van Riel
> Cc: Roman Gushc
: rework memcg kernel stack accounting")
Cc: Andrew Morton
Cc: Shakeel Butt
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Tejun Heo
Cc: Roman Gushchin
Signed-off-by: Rik van Riel
---
kernel/fork.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel/fork.c b/kernel
On Wed, 2018-11-28 at 23:20 +0100, Sebastian Andrzej Siewior wrote:
>
> + * Use kernel_fpu_begin/end() if you intend to use FPU in kernel
> context. It
> + * disables preemption so be carefull if you intend to use it for
> long periods
Just how careful do you want to be?
One l seems sufficient
ased on that the function works for compacted buffers as long as the
> CPU supports it and this what we care about.
>
> Remove the "Note:" which is not accurate.
>
> Suggested-by: Paolo Bonzini
> Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Rik van Riel
ed.
Nice catch.
> Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Rik van Riel
ed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Rik van Riel
On Wed, 2018-11-28 at 23:20 +0100, Sebastian Andrzej Siewior wrote:
> The variable init_pkru_value isn't used outside of this file.
> Make init_pkru_value static.
>
> Acked-by: Dave Hansen
> Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Rik van Riel
stian Andrzej Siewior
Reviewed-by: Rik van Riel
en this check
> won't
> catch it.
>
> Use BIT_ULL() to compute a mask from a number.
>
> Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Rik van Riel
int ret = 0;
> >
> > - if (memcg_kmem_bypass())
> > + if (mem_cgroup_disabled() || memcg_kmem_bypass())
> > return 0;
> >
>
> Why not check memcg_kmem_enabled() before calling memcg_kmem_charge()
> in memcg_charge_kernel_stack()?
Check Roman's backtrace again. The function
memcg_charge_kernel_stack() is not in it.
That is why it is generally better to check
in the called function, rather than add a
check to every call site (and maybe miss one
or two).
Acked-by: Rik van Riel
On Wed, 2018-10-24 at 07:53 +0200, Ingo Molnar wrote:
> * Rik van Riel wrote:
>
> > The big thing remaining is the reference count overhead of
> > the lazy TLB mm_struct, but getting rid of that is rather a
> > lot of code for a small performance gain. Not qu
On Fri, 2018-10-19 at 17:33 +0200, Frederic Weisbecker wrote:
> On Fri, Oct 19, 2018 at 11:16:49AM -0400, Rik van Riel wrote:
> > On Fri, 2018-10-19 at 13:40 +0200, Jan H. Schönherr wrote:
> > >
> > > Now, it would be possible to "invent"
On Fri, 2018-10-19 at 13:40 +0200, Jan H. Schönherr wrote:
>
> Now, it would be possible to "invent" relocatable cpusets to address
> that
> issue ("I want affinity restricted to a core, I don't care which"),
> but
> then, the current way how cpuset affinity is enforced doesn't scale
> for
>
ere may well be workloads where we
should just put a hard cap on the number of
freeable items in these slabs, and reclaim them
preemptively.
However, I do not know for sure, and this patch
seems like a big improvement over what we had
before, so ...
> Reported-by: Domas Mituzas
> Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
On Tue, 2018-10-09 at 14:47 -0400, Johannes Weiner wrote:
> No need to use the preemption-safe lruvec state function inside the
> reclaim region that has irqs disabled.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
desired_size * nr_online_nodes.
>
> Switch to NUMA-aware lru and slab counters to approximate cgroup
> size.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
Commit-ID: 145f573b89a62bf53cfc0144fa9b1c56b0f70b45
Gitweb: https://git.kernel.org/tip/145f573b89a62bf53cfc0144fa9b1c56b0f70b45
Author: Rik van Riel
AuthorDate: Tue, 25 Sep 2018 23:58:44 -0400
Committer: Peter Zijlstra
CommitDate: Tue, 9 Oct 2018 16:51:12 +0200
x86/mm/tlb: Make lazy
Commit-ID: 97807813fe7074ee865d6bc1df1d0f8fb878ee9d
Gitweb: https://git.kernel.org/tip/97807813fe7074ee865d6bc1df1d0f8fb878ee9d
Author: Rik van Riel
AuthorDate: Tue, 25 Sep 2018 23:58:43 -0400
Committer: Peter Zijlstra
CommitDate: Tue, 9 Oct 2018 16:51:12 +0200
x86/mm/tlb: Add
Commit-ID: 016c4d92cd16f569c6485ae62b076c1a4b779536
Gitweb: https://git.kernel.org/tip/016c4d92cd16f569c6485ae62b076c1a4b779536
Author: Rik van Riel
AuthorDate: Tue, 25 Sep 2018 23:58:42 -0400
Committer: Peter Zijlstra
CommitDate: Tue, 9 Oct 2018 16:51:12 +0200
x86/mm/tlb: Add
Commit-ID: 7d49b28a80b830c3ca876d33bedc58d62a78e16f
Gitweb: https://git.kernel.org/tip/7d49b28a80b830c3ca876d33bedc58d62a78e16f
Author: Rik van Riel
AuthorDate: Tue, 25 Sep 2018 23:58:41 -0400
Committer: Peter Zijlstra
CommitDate: Tue, 9 Oct 2018 16:51:11 +0200
smp,cpumask: introduce
Commit-ID: c3f7f2c7eba1a53d2e5ffbc2dcc9a20c5f094890
Gitweb: https://git.kernel.org/tip/c3f7f2c7eba1a53d2e5ffbc2dcc9a20c5f094890
Author: Rik van Riel
AuthorDate: Tue, 25 Sep 2018 23:58:40 -0400
Committer: Peter Zijlstra
CommitDate: Tue, 9 Oct 2018 16:51:11 +0200
smp: use
Commit-ID: 12c4d978fd170ccdd7260ec11f93b11e46904228
Gitweb: https://git.kernel.org/tip/12c4d978fd170ccdd7260ec11f93b11e46904228
Author: Rik van Riel
AuthorDate: Tue, 25 Sep 2018 23:58:39 -0400
Committer: Peter Zijlstra
CommitDate: Tue, 9 Oct 2018 16:51:11 +0200
x86/mm/tlb: Restructure
Commit-ID: 5462bc3a9a3c38328bbbd276d51164c7cf21d6a8
Gitweb: https://git.kernel.org/tip/5462bc3a9a3c38328bbbd276d51164c7cf21d6a8
Author: Rik van Riel
AuthorDate: Tue, 25 Sep 2018 23:58:38 -0400
Committer: Peter Zijlstra
CommitDate: Tue, 9 Oct 2018 16:51:11 +0200
x86/mm/tlb: Always use
On Thu, 2018-10-04 at 16:05 +0200, Sebastian Andrzej Siewior wrote:
> In v3 I dropped that decouple idea. I also learned that the wrpkru
> instruction is not privileged and so caching it in kernel does not
> work.
Wait, so any thread can bypass its memory protection
keys, even if there is a
On Tue, 2018-10-02 at 09:44 +0200, Peter Zijlstra wrote:
> On Tue, Sep 25, 2018 at 11:58:37PM -0400, Rik van Riel wrote:
>
> > This v2 is "identical" to the version I posted yesterday,
> > except this one is actually against current -tip (not sure
> > what
On Mon, 2018-10-01 at 17:58 +0200, Peter Zijlstra wrote:
> On Tue, Sep 25, 2018 at 11:58:38PM -0400, Rik van Riel wrote:
> > Now that CPUs in lazy TLB mode no longer receive TLB shootdown
> > IPIs, except
> > at page table freeing time, and idle CPUs will no longer g
t; ( 10.40%)35969.88 ( 10.51%)
>
> Signed-off-by: Mel Gorman
Reviewed-by: Rik van Riel
> MB/sec scale   30115.06 ( 0.00%)   31293.06 ( 3.91%)
> MB/sec add     32825.12 ( 0.00%)   34883.62 ( 6.27%)
> MB/sec triad   32549.52 ( 0.00%)   34906.60 ( 7.24%
>
> Signed-off-by: Mel Gorman
Reviewed-by: Rik van Riel
ode remain part of the mm_cpumask(mm), both because
that allows TLB flush IPIs to be sent at page table freeing time, and
because the cache line bouncing on the mm_cpumask(mm) was responsible
for about half the CPU use in switch_mm_irqs_off().
Tested-by: Song Liu
Signed-off-by: Rik van Riel
---
a
Introduce a variant of on_each_cpu_cond that iterates only over the
CPUs in a cpumask, in order to avoid making callbacks for every single
CPU in the system when we only need to test a subset.
Signed-off-by: Rik van Riel
---
include/linux/smp.h | 4
kernel/smp.c| 17
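A hedged userspace sketch of the idea: the real on_each_cpu_cond variant takes a struct cpumask and sends IPIs, while this toy version uses a plain unsigned long as the mask and simply calls the function. All names are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

typedef int (*cond_fn)(int cpu, void *info);
typedef void (*work_fn)(int cpu, void *info);

/* Iterate only the CPUs set in 'mask', and invoke 'func' only on the
 * CPUs where 'cond' holds, instead of testing every CPU in the system. */
static int on_each_cpu_cond_sketch(unsigned long mask, cond_fn cond,
				   work_fn func, void *info)
{
	int cpu, called = 0;

	for (cpu = 0; cpu < (int)(8 * sizeof(mask)); cpu++) {
		if (!(mask & (1UL << cpu)))
			continue;	/* CPU not in the requested subset */
		if (cond(cpu, info)) {
			func(cpu, info);
			called++;
		}
	}
	return called;
}

static int even_cpu(int cpu, void *info) { (void)info; return (cpu & 1) == 0; }
static void count_call(int cpu, void *info) { (void)cpu; (*(int *)info)++; }

static int demo_cond_mask(void)
{
	int hits = 0;

	/* CPUs 0..3 in the mask; only the even ones pass the condition. */
	return on_each_cpu_cond_sketch(0xFUL, even_cpu, count_call, &hits);
}
```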
Add an argument to flush_tlb_mm_range to indicate whether page tables
are about to be freed after this TLB flush. This allows for an
optimization of flush_tlb_mm_range to skip CPUs in lazy TLB mode.
No functional changes.
Signed-off-by: Rik van Riel
---
arch/x86/include/asm/tlb.h | 2