the ACQUIRE itself.
Use READ_ONCE() to load ->cpu in task_rq() (cf. task_cpu()) to honour
this address dependency between loads; also, mark the store to ->cpu in
__set_task_cpu() by using WRITE_ONCE() in order to tell the compiler
not to mess with/tear this (synchronizing) memory access.
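As a user-space sketch of what the marked accesses buy (the macro definitions below are simplified stand-ins for the kernel's real <linux/compiler.h> ones, and the struct is a hypothetical stand-in for task_struct):

```c
#include <assert.h>

/* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE():
 * a volatile access forces the compiler to emit exactly one,
 * untorn load or store for the (synchronizing) memory access. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

struct task { int cpu; };            /* hypothetical stand-in */

static void set_task_cpu(struct task *t, int cpu)
{
	WRITE_ONCE(t->cpu, cpu);     /* marked store: cannot be torn */
}

static int task_cpu(struct task *t)
{
	return READ_ONCE(t->cpu);    /* marked load: preserves the
	                              * address dependency downstream */
}
```

The point is not the values but the code the compiler may emit: without the markings, it is free to tear, refetch, or fuse these accesses.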
Signed-off
ock accesses). This work is based on an
> initial proposal created by Andrea Parri back in December 2017,
> although it has grown a lot since then.
>
> The adaptation involves two main aspects: recognizing the ordering
> induced by plain accesses and detecting data races. They are han
can be called
> by configuration change, I'll continue to test it.
Hi Masami,
I think I've found another recursion problem. Could you include also
this one?
Thanks,
From: Andrea Righi
Subject: [PATCH] kprobes: prohibit probing on bsearch()
Since the kprobe breakpoint handler uses bsearch
//lore.kernel.org/lkml/20190111095108.b79a2ee026185cbd62365...@kernel.org
Fixes: 6212dd29683e ("tracing/kprobes: Use dyn_event framework for kprobe
events")
Cc: sta...@vger.kernel.org
Signed-off-by: Andrea Righi
Signed-off-by: Masami Hiramatsu
---
v2: argument check refactoring
kernel/trace/tra
is regard, well,
except that I think we ought to fix the README, somehow (consider my
diff below as a first proposal). Akira actually preceded me on this
and suggested another solution [1].
Andrea
[1] http://lkml.kernel.org/r/04d15c18-d210-e3da-01e2-483eff135...@gmail.com
>
On Thu, Jan 10, 2019 at 08:31:26AM -0800, Paul E. McKenney wrote:
> On Thu, Jan 10, 2019 at 10:41:23AM -0500, Alan Stern wrote:
> > On Thu, 10 Jan 2019, Paul E. McKenney wrote:
> >
> > > On Thu, Jan 10, 2019 at 09:40:24AM +0100, Andrea Parri wrote:
0xf0
? _cond_resched+0x19/0x40
vfs_write+0xb1/0x1a0
ksys_write+0x55/0xc0
__x64_sys_write+0x1a/0x20
do_syscall_64+0x5a/0x120
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Fix by doing the proper argument check when a NULL symbol is passed in
trace_kprobe_create().
Signed-off-by: Andrea Ri
On Thu, Jan 10, 2019 at 01:38:11PM +0100, Dmitry Vyukov wrote:
> On Thu, Jan 10, 2019 at 1:30 PM Andrea Parri
> wrote:
> >
> > > For seqcounts we currently simply ignore all accesses within the read
> > > section (thus the requirement to dynamically track read secti
;
r0 = x;
read_seqretry() // =0
ought to be "racy"..., right? (I didn't audit all the callsites for
read_{seqbegin,seqretry}(), but I wouldn't be surprised to find such a
pattern ;D ... "legacy", as you recalled).
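For reference, a minimal single-threaded sketch of that read-side pattern; the functions below are local stand-ins mimicking the <linux/seqlock.h> API, not the real implementation:

```c
#include <assert.h>

static unsigned seq;  /* even: no writer; odd: write in progress */
static int x;         /* data nominally protected by the counter */

static unsigned read_seqbegin(void)  { return seq; }
static int read_seqretry(unsigned s) { return seq != s || (s & 1); }

static int read_x(void)
{
	unsigned s;
	int r0;

	do {
		s  = read_seqbegin();
		r0 = x;  /* plain access: this is the load that a
		          * race detector would flag as "racy" when
		          * it overlaps a writer's update of x */
	} while (read_seqretry(s));

	return r0;
}
```

The retry loop discards any value read while the counter changed, which is why these plain accesses are tolerated in practice even though they are data races in the strict sense.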
Andrea
6 and onwards.
>
> Signed-off-by: Luc Maranget
> Signed-off-by: Paul E. McKenney
> [ paulmck: Apply Andrea Parri's off-list feedback. ]
> ---
> tools/memory-model/linux-kernel.bell | 3 +++
> tools/memory-model/linux-kernel.cat | 2 ++
> tools/memory-model/linux-kernel.d
not in mainline? I am not sure,
> but given that the merge window was over the holiday season and that
> the length of the merge window proved to be shorter than many people
> expected it to be, I am not too surprised. ;-)
Mmh, let me try again:
$ git clone git://git.kernel.or
On Wed, Jan 09, 2019 at 03:40:43PM -0800, Paul E. McKenney wrote:
> On Thu, Jan 10, 2019 at 12:18:53AM +0100, Andrea Parri wrote:
> > On Wed, Jan 09, 2019 at 01:07:06PM -0800, Paul E. McKenney wrote:
> > > Hello!
> > >
> > > This series contains updat
rtesy of
> Luc Maranget. This needs an ack.
It seems that
1b52d0186177 ("tools/memory-model: Model smp_mb__after_unlock_lock()")
from linux-rcu/dev got lost; this also needs an ack (probably yours! ;D,
considered that, IIRC, yo
his_cpu)->nr_running == 1))
^
The implementation is just an RFC because it may have other drawbacks,
but I thought the second wakee in this specific example should
presumably still do the idle core/HT balancing normally, as before the
change.
Thanks,
Andrea
Hello Mike,
On Wed, Jan 09, 2019 at 05:19:48AM +0100, Mike Galbraith wrote:
> On Tue, 2019-01-08 at 22:49 -0500, Andrea Arcangeli wrote:
> > Hello,
> >
> > we noticed some unexpected performance regressions in the scheduler by
> > switching the guest CPU topology fro
On Wed, Jan 09, 2019 at 12:55:27PM +0100, Dmitry Vyukov wrote:
> On Wed, Jan 9, 2019 at 12:24 PM Andrea Parri
> wrote:
> >
> > On Tue, Jan 08, 2019 at 04:36:46PM -0800, Anatol Pomozov wrote:
> > > Hello
> > >
> > > On Tue, Jan 8, 2019 at 4:02 PM
On Tue, Jan 08, 2019 at 04:36:46PM -0800, Anatol Pomozov wrote:
> Hello
>
> On Tue, Jan 8, 2019 at 4:02 PM Andrea Parri
> wrote:
> >
> > Hi Anatol,
> >
> > On Tue, Jan 08, 2019 at 11:33:39AM -0800, Anatol Pomozov wrote:
> > > Hello folks,
> > &g
} else {
while (n--) {
write(pipe1[1], buf, 1);
read(pipe2[0], buf, 1);
}
}
return 0;
}
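Completed into a self-contained program (the setup lines before the quoted else-branch are reconstructed here as an illustration, not the cover letter's exact code), the ping-pong looks like:

```c
#include <assert.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/* One byte bounces parent<->child n times through two pipes,
 * forcing a sync wakeup on every iteration. */
static int pingpong(int n)
{
	int pipe1[2], pipe2[2];
	char buf[1] = { 'x' };

	if (pipe(pipe1) || pipe(pipe2))
		return -1;

	if (fork()) {
		while (n--) {                 /* parent: echo back */
			read(pipe1[0], buf, 1);
			write(pipe2[1], buf, 1);
		}
		wait(NULL);
	} else {
		while (n--) {                 /* child: the quoted loop */
			write(pipe1[1], buf, 1);
			read(pipe2[0], buf, 1);
		}
		exit(0);
	}
	return 0;
}
```

Each iteration blocks each side on the other's write, which is exactly the synchronous wakeup pattern the sched/fair patch targets.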
Andrea Arcangeli (1):
sched/fair: skip select_idle_sibling() in presence of sync wakeups
kernel/sched/fair.c | 13 +++
sed at 100%
utilization and that increases performance for those common workloads.
Signed-off-by: Andrea Arcangeli
---
kernel/sched/fair.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d1907506318a..b2ac152a6935 100644
--
take a page pin by
> migrating pages from CMA region. Marking the section PF_MEMALLOC_NOCMA ensures
> that we avoid unnecessary page migration later.
>
> Suggested-by: Andrea Arcangeli
> Signed-off-by: Aneesh Kumar K.V
Reviewed-by: Andrea Arcangeli
reproduces it easily because it's a heavy user of
VM_FAULT_RETRY retvals.
Thanks,
Andrea
Andrea Arcangeli (1):
mm/hugetlb.c: teach follow_hugetlb_page() to handle FOLL_NOWAIT
mm/hugetlb.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
witch get_user_page_nowait() to
get_user_pages_unlocked()")
Signed-off-by: Andrea Arcangeli
Tested-by: "Dr. David Alan Gilbert"
Reported-by: "Dr. David Alan Gilbert"
---
mm/hugetlb.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/m
lready bailing on dax may also
later optionally need to avoid interfering with CMA.
Aside from the API detail above, this CMA page migration logic seems a
good solution for the problem.
Thanks,
Andrea
ead-safe manner.
Interesting! FYI, some LKMM maintainers (Paul included) have had, and
continue to have, some "fun" discussing topics related to "thread-
safe memory accesses": I'm sure they'll be very interested in this
work of yours and eager to discuss your results.
Cheers,
Andrea
d add Cc:stable)
>
> (1) kprobe incorrect stacking order problem
>
> On recent talk with Andrea, I started more precise investigation on
> the kernel panic with kretprobes on notrace functions, which Francis
> had reported last year ( https://lkml.org/lkml/2017/7/14/466 ).
On Mon, Jan 07, 2019 at 04:28:33PM -0500, Steven Rostedt wrote:
> On Mon, 7 Jan 2019 22:19:04 +0100
> Andrea Righi wrote:
>
> > > > If we put a kretprobe to raw_spin_lock_irqsave() it looks like
> > > > kretprobe is going to call kretprobe...
> > >
On Mon, Jan 07, 2019 at 02:59:18PM -0500, Steven Rostedt wrote:
> On Mon, 7 Jan 2019 20:52:09 +0100
> Andrea Righi wrote:
>
> > > Ug, kretprobe calls spinlocks in the callback? I wonder if we can
> > > remove them.
> > >
> > > I'm guessing this is a
On Mon, Jan 07, 2019 at 02:27:49PM -0500, Steven Rostedt wrote:
> On Mon, 7 Jan 2019 19:34:44 +0100
> Andrea Righi wrote:
>
> > On Mon, Jan 07, 2019 at 10:31:34PM +0900, Masami Hiramatsu wrote:
> > ...
> > > BTW, these are not all of the issues. To remove CONFIG_KPRO
f CONFIG_INLINE_SPIN_UNLOCK_BH
@@ -200,6 +210,7 @@ void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock)
__raw_spin_unlock_bh(lock);
}
EXPORT_SYMBOL(_raw_spin_unlock_bh);
+NOKPROBE_SYMBOL(_raw_spin_unlock_bh);
#endif
#ifndef CONFIG_INLINE_READ_TRYLOCK
Signed-off-by: Andrea Righi
On Mon, Jan 07, 2019 at 10:31:34PM +0900, Masami Hiramatsu wrote:
> Hello,
>
> On recent talk with Andrea, I started more precise investigation on
> the kernel panic with kretprobes on notrace functions, which Francis
> had reported last year ( https://lkml.org/lkm
4
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -32,6 +32,7 @@
#include
#include
#include
+#include
#include
#include
Thanks,
-Andrea
> __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
> struct ftrace_ops *ignored, struct pt_reg
fd manager is practically always implemented as a
thread of some process, so whatever alternative to dump_stack() should
work with threads.
I think kprobes/ebpf should work provided debug info is available
(which is not always guaranteed), otherwise to obtain it without debug
info, it'd require a ftrace static trace point to use with the
function graph tracer I guess.
Thanks,
Andrea
blob/master/kernel/page_mapped_crash/repro.c
>
> Fix the loop to iterate for "1 << compound_order" pages.
>
> Debugged-by: Laszlo Ersek
> Suggested-by: "Kirill A. Shutemov"
> Signed-off-by: Jan Stancek
> ---
> mm/util.c | 2 +-
> 1 file change
's iterating over all
> > symbols in x86's arch_populate_kprobe_blacklist(), but it seems to work
> > for my specific use case, so I thought it shouldn't be bad to share it,
> > just in case (maybe someone else is also interested).
>
> Hmm, but in that case, it limits other native kprobes users like systemtap
> to disable probing on notrace functions for no reason. That may not be
> acceptable.
True...
>
> OK, I'll retry to find which combination of notrace functions traced
> with kprobes is problematic. Let me do it...
OK. Thanks tons for looking into this!
-Andrea
/01/2011
> > > Workqueue: events_power_efficient neigh_periodic_work
> > > Call Trace:
> > > __dump_stack lib/dump_stack.c:77 [inline]
> > > dump_stack+0x244/0x39d lib/dump_stack.c:113
> > > print_address_description.cold.4+0x9/0x1ff mm/kasan/report.c:187
> > > kasan_report.cold.5+0x1b/0x39 mm/kasan/report.c:317
> > > __asan_report_load8_noabort+0x14/0x20 mm/kasan/generic_report.c:135
> > > __list_del_entry_valid+0xf1/0x100 lib/list_debug.c:51
> > > __list_del_entry include/linux/list.h:117 [inline]
> > > list_del_init include/linux/list.h:159 [inline]
> > > neigh_mark_dead+0x13b/0x410 net/core/neighbour.c:125
The real crash seems to be completely unrelated to userfaultfd; the
list_del_init is in neigh_mark_dead:
if (!list_empty(&n->gc_list)) {
list_del_init(&n->gc_list);
Thanks,
Andrea
On Tue, Dec 18, 2018 at 06:24:35PM +0100, Andrea Righi wrote:
> On Tue, Dec 18, 2018 at 01:50:26PM +0900, Masami Hiramatsu wrote:
> ...
> > > Side question: there are certain symbols in arch/x86/xen that should be
> > > blacklisted explicitly, because they're non-attac
ore realistically
mmap the same glibc library 257 times in a row, so if anything KSM is
now less of a concern for occasional page_referenced worst case
latencies, than all the rest of the page types.
KSM, by enforcing the max sharing, is now the most RMAP-walk friendly
(computational-complexity-wise) of all the page types out there, so
there's no need to treat it specially in low priority reclaim scans.
Thanks,
Andrea
rg/pub/scm/linux/kernel/git/peterz/queue.git/commit/?h=locking/core=73685b8af253cf32b1b41b3045f2828c6fb2439e
with a modified changelog and my Reviewed-by (that I confirm).
I can't tell how/when this is going to be upstreamed (guess via -tip),
Peter?
Andrea
>
> ---
> kernel/ex
on the specific i.MX model
being addressed.
Signed-off-by: Andrea Barisani
---
drivers/nvmem/imx-ocotp.c | 49 ++--
1 file changed, 41 insertions(+), 8 deletions(-)
--- linux-4.14.89/drivers/nvmem/imx-ocotp.c.orig2018-12-18
10:30:49.363322853 +0100
+++ linux-4.14.89
on the specific i.MX model
being addressed.
Signed-off-by: Andrea Barisani
---
drivers/nvmem/imx-ocotp.c | 49 ++--
1 file changed, 41 insertions(+), 8 deletions(-)
diff -up linux-4.19.9/drivers/nvmem/imx-ocotp.c.orig
linux-4.19.9/drivers/nvmem/imx-ocotp.c
--- linux-4.19.9
"kprobe_events".
> (Those used to be the same, but since the above commit they are different now)
>
> I think the most sane solution is, identifying which (combination of)
> functions
> in ftrace (kernel/trace/*) causes a problem, marking those NOKPROBE_SYMBOL()
> and
> removing CONFIG_KPROBE_EVENTS_ON_NOTRACE.
OK. Thanks for the clarification!
-Andrea
ch_populate_kprobe_blacklist()
> so that users can get the correct kprobe blacklist in debugfs.
>
> Thank you,
Looks good to me. Thanks!
Tested-by: Andrea Righi
Side question: there are certain symbols in arch/x86/xen that should be
blacklisted explicitly, because they're non-attachable
On Thu, Dec 13, 2018 at 11:13:58AM +0100, Roman Penyaev wrote:
> On 2018-12-12 18:13, Andrea Parri wrote:
> > On Wed, Dec 12, 2018 at 12:03:57PM +0100, Roman Penyaev wrote:
>
> [...]
>
> > > +static inline void list_add_tail_loc
de is truly OOM. The fallback, despite being very
inefficient, will still happen without the OOM killer triggering.
Thanks,
Andrea
mplete the constraint of rcuwait_wake_up().
>
> Signed-off-by: Prateek Sood
I know this is going to sound ridiculous (coming from me or from
the Italian that I am), but it looks like we could both work on
our English. ;-)
But the fix seems correct to me:
Reviewed-by: Andrea Parri
It mig
hose changes.
>
> It also revises the statement of the RCU Guarantee to a more accurate
> form, and it adds a short paragraph mentioning the new support for SRCU.
>
> Signed-off-by: Alan Stern
> Cc: Akira Yokosawa
> Cc: Andrea Parri
> Cc: Boqun Feng
> Cc: Daniel Lustig
> C
SR. AMD plans support
for this MSR and access to this bit for all future processors."
I could not find similar information in the AMD APM though; Section 7.6.4
("Serializing Instructions") of that manual describes a different/stronger
notion of "serialization", IIUC.
>
> At some point in the past (when all this spectre LFENCE muck was
> relatively fresh) I suggested we call the thing: instruction_fence() or
> something like that, maybe we ought to still do that now.
FWIW, I do find the name rdtsc_ordered() somewhat too evocative... ;-)
maybe simply rdtsc_nospec() would be a better choice?
Andrea
>
> Re RDTSC, waiting for all preceding instructions to complete is
> 'obviously' sufficient for two RDTSC instructions not to get re-ordered
> either.
remote 4k. Ignoring remote THP isn't the purpose of
MADV_HUGEPAGE, quite the contrary.
Thanks,
Andrea
Commit-ID: 80eb865768703c0f85a0603762742ae1dedf21f0
Gitweb: https://git.kernel.org/tip/80eb865768703c0f85a0603762742ae1dedf21f0
Author: Andrea Parri
AuthorDate: Tue, 27 Nov 2018 12:01:10 +0100
Committer: Ingo Molnar
CommitDate: Tue, 11 Dec 2018 14:54:57 +0100
sched/fair: Clean up
e should not generate the remap event, and at the same
> > time we should clear all the uffd flags on the new VMA. Without
> > this patch, we can still have the VM_UFFD_MISSING|VM_UFFD_WP
> > flags on the new VMA even if the fault handling process does not
> > know the exista
Blacklist symbols in Xen probe-prohibited areas, so that users can see
these prohibited symbols in debugfs.
See also: a50480cb6d61.
Signed-off-by: Andrea Righi
---
arch/x86/xen/xen-asm_64.S | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/xen/xen-asm_64.S b/arch/x86/xen/xen-asm_64
may take hours. The idea
was to have MADV_HUGEPAGE provide THP without having to wait for
khugepaged to catch up with it.
Thanks,
Andrea
=
/*
* numa-thp-bench.c
*
* Copyright (C) 2018 Red Hat, Inc.
*
* This work is licensed under the terms of the GNU GPL, version 2.
*/
#include
#i
On Sat, Dec 08, 2018 at 12:48:59PM +0900, Masami Hiramatsu wrote:
> On Fri, 7 Dec 2018 18:58:05 +0100
> Andrea Righi wrote:
>
> > On Sat, Dec 08, 2018 at 01:01:20AM +0900, Masami Hiramatsu wrote:
> > > Hi Andrea and Ingo,
> > >
> > > Here is the pat
On Sat, Dec 08, 2018 at 12:42:10PM +0900, Masami Hiramatsu wrote:
> On Fri, 7 Dec 2018 18:00:26 +0100
> Andrea Righi wrote:
>
> > On Sat, Dec 08, 2018 at 01:01:20AM +0900, Masami Hiramatsu wrote:
> > > Hi Andrea and Ingo,
> > >
> > > Here is the pat
On Sat, Dec 08, 2018 at 01:01:20AM +0900, Masami Hiramatsu wrote:
> Hi Andrea and Ingo,
>
> Here is the patch I meant. I just ran it on qemu-x86, and it seemed to work.
> After introducing this patch, I will start adding
> arch_populate_kprobe_blacklist()
> to some arch
.
This should be applied on top of 29ec90660d68 ("userfaultfd:
shmem/hugetlbfs: only allow to register VM_MAYWRITE vmas") to shut off
the false positive warning.
Thanks,
Andrea
Andrea Arcangeli (1):
userfaultfd: check VM_MAYWRITE was set after verifying the uffd is
registered
fs/userfau
allow to register
VM_MAYWRITE vmas")
Reported-by: syzbot+06c7092e7d71218a2...@syzkaller.appspotmail.com
Signed-off-by: Andrea Arcangeli
---
fs/userfaultfd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index cd58939dc977..7a85e609f
Commit-ID: a50480cb6d61d5c5fc13308479407b628b6bc1c5
Gitweb: https://git.kernel.org/tip/a50480cb6d61d5c5fc13308479407b628b6bc1c5
Author: Andrea Righi
AuthorDate: Thu, 6 Dec 2018 10:56:48 +0100
Committer: Ingo Molnar
CommitDate: Thu, 6 Dec 2018 16:52:03 +0100
kprobes/x86: Blacklist non
got to Cc the maintainer of this file: doing it
now. The same consideration would hold for 14/14.
Andrea
>
> There are various theories why it was done, but none of them seems to be
> something that really requires it today. nios2 uses kmalloc for module
> memory, but anyhow it does not change t
These interrupt functions are already non-attachable by kprobes.
Blacklist them explicitly so that they can show up in
/sys/kernel/debug/kprobes/blacklist and tools like BCC can use this
additional information.
Signed-off-by: Andrea Righi
---
arch/x86/entry/entry_64.S | 4
1 file changed
On Wed, Dec 05, 2018 at 04:18:14PM -0800, David Rientjes wrote:
> On Wed, 5 Dec 2018, Andrea Arcangeli wrote:
>
> > __GFP_COMPACT_ONLY gave hope it could give some middle ground, but
> > it shows awful compaction results, it basically destroys compaction
> > effec
onds, which is an
> +# acceptable range.
> +#
> +# Why not calibrate an exact delay? Because within this initrd, we
> +# are restricted to Bourne-shell builtins, which as far as I know do not
> +# provide any means of obtaining a fine-grained timestamp.
> +
> +a4="a a
use transparent huge pages (THP)
when transparent_hugepage/enabled=madvise. Otherwise THP is only
used when it's enabled system wide.
Signed-off-by: Luiz Capitulino
Signed-off-by: Anthony Liguori
Signed-off-by: Andrea Arcangeli
---
exec.c | 1 +
osdep.h | 5 +
2 files changed, 6 i
Hello,
On Wed, Dec 05, 2018 at 01:59:32PM -0800, David Rientjes wrote:
> [..] and the kernel test robot has reported, [..]
Just for completeness you may have missed one email:
https://lkml.kernel.org/r/87tvk1yjkp@yhuang-dev.intel.com
'So I think the report should have been a "performance
On Wed, Dec 05, 2018 at 02:03:10PM -0800, Linus Torvalds wrote:
> On Wed, Dec 5, 2018 at 12:40 PM Andrea Arcangeli wrote:
> >
> > So ultimately we decided that the saner behavior that gives the least
> > risk of regression for the short term, until we can do something
&g
On Wed, Dec 05, 2018 at 09:15:28PM +0100, Michal Hocko wrote:
> If the __GFP_THISNODE should be really used then it should be applied to
> all other types of pages. Not only THP. And as such done in a separate
> patch. Not a part of the revert. The cleanup was meant to unify THP
> allocations and
On Wed, Dec 05, 2018 at 11:49:26AM -0800, David Rientjes wrote:
> High thp utilization is not always better, especially when those hugepages
> are accessed remotely and introduce the regressions that I've reported.
> Seeking high thp utilization at all costs is not the goal if it causes
>
er than reverting the revert, as the
swap storms are a showstopper compared to crippling compaction's
ability to compact memory when all nodes are full of cache.
Thanks,
Andrea
> At least the code pattern looks similar.
Maybe add a comment on top of (your) xchg() to note/justify these memory
ordering requirements? As Paul said: "if there are races, this would
help force them to happen" (and simplify the review, this/future).
Andrea
Commit-ID: 4607abbcf464ea2be14da444215d05c73025cf6e
Gitweb: https://git.kernel.org/tip/4607abbcf464ea2be14da444215d05c73025cf6e
Author: Andrea Parri
AuthorDate: Mon, 3 Dec 2018 15:04:49 -0800
Committer: Ingo Molnar
CommitDate: Tue, 4 Dec 2018 07:29:51 +0100
tools/memory-model: Model
y said that a better THP locality needs more work and during
> > the review discussion I have even volunteered to work on that.
>
> We have two known patches that seem to have no real downsides.
>
> One is the patch posted by Andrea earlier in this thread, which seems
> to
tion that could satisfy everyone because it does have
some drawback too.
Thanks,
Andrea
Signed-off-by: Andrea Parri
Cc: Palmer Dabbelt
Cc: Albert Ou
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
---
TBH, the delay was not intentional: I've just become aware of it while
working on moving riscv over to queued rwlocks. There's currently one
callsite for the non-fully-ordere
/* in wake_q_add() */
WRITE_ONCE(*y, 1); /* wake_cond = true */
smp_mb__before_atomic();
r1 = cmpxchg_relaxed(next, 1, 2);
}
exists (0:r0=0 /\ 1:r1=0)
This "exists" clause cannot be satisfied according to the LKMM:
Test wake_up_q-wake_q_add Allowe
vious behavior
without introducing new features and new APIs, that would also have
the side effect of diminishing THP utilization under MADV_HUGEPAGE.
Thanks!
Andrea