Improve code readability by moving the BPF JIT function epilogue
generation code to a dedicated emit_epilogue() function, analogous to
the existing emit_prologue() function.
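For illustration only, a standalone sketch of the shape such a helper could take (not the actual patch; the byte values are ordinary x86-64 encodings, but which callee-saved registers get restored depends on what the real prologue saves):

static void emit_epilogue(unsigned char **pprog)
{
	unsigned char *prog = *pprog;

	*prog++ = 0x5b;			/* pop %rbx - restore callee-saved regs */
	*prog++ = 0x41; *prog++ = 0x5d;	/* pop %r13 */
	*prog++ = 0x41; *prog++ = 0x5e;	/* pop %r14 */
	*prog++ = 0x41; *prog++ = 0x5f;	/* pop %r15 */
	*prog++ = 0xc9;			/* leave - undo the frame set up by emit_prologue() */
	*prog++ = 0xc3;			/* ret */

	*pprog = prog;
}

The point is purely readability: the epilogue bytes end up emitted from one place, mirroring emit_prologue().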
Signed-off-by: Josh Poimboeuf
Acked-by: Peter Zijlstra (Intel)
---
arch/x86/net/bpf_jit_comp.c | 37
On Fri, Jun 14, 2019 at 01:58:21PM +, David Laight wrote:
> From: Josh Poimboeuf
> > Sent: 14 June 2019 14:44
> >
> > On Fri, Jun 14, 2019 at 10:50:23AM +, David Laight wrote:
> > > On Thu, Jun 13, 2019 at 08:21:03AM -0500, Josh Poimboeuf wrote:
> > > > On Thu, Jun 13, 2019 at 08:20:30PM -0500, Josh Poimboeuf wrote:
> > > > > On Thu, Jun 13, 2019 at 01:57:11PM -0700, Alexei Starovoitov wrote:
> > > >
> > > > > > and to patches 8 and 9.
> > > > >
> > > > >
On Fri, Jun 14, 2019 at 08:31:53AM -0700, Alexei Starovoitov wrote:
> On Fri, Jun 14, 2019 at 6:34 AM Josh Poimboeuf wrote:
> >
> > On Thu, Jun 13, 2019 at 11:00:09PM -0700, Alexei Starovoitov wrote:
> > > > + if (src_reg == BPF_REG_FP) {
> > > > +
On Fri, Jun 14, 2019 at 10:50:23AM +, David Laight wrote:
> On Thu, Jun 13, 2019 at 08:21:03AM -0500, Josh Poimboeuf wrote:
> > The BPF JIT code clobbers RBP. This breaks frame pointer convention and
> > thus prevents the FP unwinder from unwinding through JIT generated code.
you mean tail calls? Or something else? For tail calls the stack is
shared and the stack layout is the same.
--
Josh
at we need to make the jited frame proper,
> > but unwinding needs to start before any bpf stuff.
> > That's a bigger issue.
>
> I strongly disagree, we should be able to unwind through bpf.
--
Josh
On Thu, Jun 13, 2019 at 09:28:48PM -0500, Josh Poimboeuf wrote:
> On Thu, Jun 13, 2019 at 08:58:48PM -0500, Josh Poimboeuf wrote:
> > On Thu, Jun 13, 2019 at 06:42:45PM -0700, Alexei Starovoitov wrote:
> > > On Thu, Jun 13, 2019 at 08:30:51PM -0500, Josh Poimboeuf wrote:
>
On Thu, Jun 13, 2019 at 08:58:48PM -0500, Josh Poimboeuf wrote:
> On Thu, Jun 13, 2019 at 06:42:45PM -0700, Alexei Starovoitov wrote:
> > On Thu, Jun 13, 2019 at 08:30:51PM -0500, Josh Poimboeuf wrote:
> > > On Thu, Jun 13, 2019 at 03:00:55PM -0700, Alexei Starovoitov wrote:
On Thu, Jun 13, 2019 at 06:42:45PM -0700, Alexei Starovoitov wrote:
> On Thu, Jun 13, 2019 at 08:30:51PM -0500, Josh Poimboeuf wrote:
> > On Thu, Jun 13, 2019 at 03:00:55PM -0700, Alexei Starovoitov wrote:
> > > > @@ -392,8 +402,16 @@ bool unwind_next_frame(struc
On Thu, Jun 13, 2019 at 06:39:05PM -0700, Alexei Starovoitov wrote:
> On Thu, Jun 13, 2019 at 08:22:48PM -0500, Josh Poimboeuf wrote:
> > On Thu, Jun 13, 2019 at 02:58:09PM -0700, Alexei Starovoitov wrote:
> > > On Thu, Jun 13, 2019 at 08:21:03AM -0500, Josh Poimboeuf wrote:
On Thu, Jun 13, 2019 at 06:37:21PM -0700, Alexei Starovoitov wrote:
> On Thu, Jun 13, 2019 at 08:20:30PM -0500, Josh Poimboeuf wrote:
> > On Thu, Jun 13, 2019 at 01:57:11PM -0700, Alexei Starovoitov wrote:
> > > On Thu, Jun 13, 2019 at 08:20:59AM -0500, Josh Poimboeuf wrot
would need BPF-specific knowledge, unless we
created some generic abstraction for generated code to register their
functions (which we have actually considered in the past). But the
above approach is much simpler: just have all generated code use frame
pointers.
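To make the frame-pointer argument concrete, here is a simplified sketch (not the kernel's unwinder) of why the convention matters: each saved RBP links to the caller's frame, so generated code that repurposes RBP breaks the chain the unwinder follows.

struct stack_frame {
	struct stack_frame *next_frame;	/* saved caller RBP */
	unsigned long return_address;	/* pushed by the call instruction */
};

static void walk_stack(unsigned long bp, void (*consume)(unsigned long ip))
{
	struct stack_frame *frame = (struct stack_frame *)bp;

	/* A frame-pointer unwind is just a linked-list walk of saved RBPs. */
	while (frame && frame->return_address) {
		consume(frame->return_address);
		frame = frame->next_frame;
	}
}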
--
Josh
On Thu, Jun 13, 2019 at 02:58:09PM -0700, Alexei Starovoitov wrote:
> On Thu, Jun 13, 2019 at 08:21:03AM -0500, Josh Poimboeuf wrote:
> > The BPF JIT code clobbers RBP. This breaks frame pointer convention and
> > thus prevents the FP unwinder from unwinding through JIT
On Thu, Jun 13, 2019 at 01:57:11PM -0700, Alexei Starovoitov wrote:
> On Thu, Jun 13, 2019 at 08:20:59AM -0500, Josh Poimboeuf wrote:
> > Objtool currently ignores ___bpf_prog_run() because it doesn't
> > understand the jump table. This results in the ORC unwinder not bein
'after_init' argument and instead make __module_enable_ro()
smart enough to only frob the __ro_after_init section after the module
has gone live.
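A hypothetical sketch of that idea, using the frob_*() helper names from kernel/module.c of this era (the actual patch may differ in detail):

static void __module_enable_ro(const struct module *mod)
{
	if (!rodata_enabled)
		return;

	frob_text(&mod->core_layout, set_memory_ro);
	frob_rodata(&mod->core_layout, set_memory_ro);

	/* Seal __ro_after_init only once the module has finished init. */
	if (mod->state == MODULE_STATE_LIVE)
		frob_ro_after_init(&mod->core_layout, set_memory_ro);

	frob_text(&mod->init_layout, set_memory_ro);
	frob_rodata(&mod->init_layout, set_memory_ro);
}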
Reported-by: Petr Mladek
Signed-off-by: Josh Poimboeuf
---
arch/arm64/kernel/ftrace.c | 2 +-
include/linux/module.h | 4 ++
Patch 1 fixes a module loading race between livepatch and ftrace.
Patch 2 adds lockdep assertions associated with patch 1.
Patch 3 fixes a theoretical bug in the module __ro_after_init section
handling.
Josh Poimboeuf (3):
module: Fix livepatch/ftrace module text permissions race
module: Add
External callers of the module page attribute change functions now need
to have the text_mutex. Enforce that with lockdep assertions.
Signed-off-by: Josh Poimboeuf
---
kernel/module.c | 27 +--
1 file changed, 21 insertions(+), 6 deletions(-)
diff --git a/kernel
missions changes -- are protected
by the text_mutex.
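As a sketch of the idea (not necessarily the exact posted hunk), the livepatch side would bracket the window where module text is writable with text_mutex, so ftrace's permission flips cannot race with it:

	/* klp_init_object_loaded(), sketch */
	mutex_lock(&text_mutex);

	module_disable_ro(patch->mod);
	ret = klp_write_object_relocations(patch->mod, obj);
	if (ret) {
		module_enable_ro(patch->mod, true);
		mutex_unlock(&text_mutex);
		return ret;
	}

	arch_klp_init_object_loaded(patch, obj);
	module_enable_ro(patch->mod, true);

	mutex_unlock(&text_mutex);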
Reported-by: Johannes Erdfelt
Fixes: 444d13ff10fb ("modules: add ro_after_init support")
Signed-off-by: Josh Poimboeuf
Acked-by: Jessica Yu
Reviewed-by: Petr Mladek
Reviewed-by: Miroslav Benes
---
kernel/livepatch/core.c | 6 +++
On Thu, Jun 13, 2019 at 05:38:04PM -0400, Steven Rostedt wrote:
> On Fri, 31 May 2019 17:25:27 -0500
> Josh Poimboeuf wrote:
>
> > On Fri, May 31, 2019 at 02:12:56PM -0500, Josh Poimboeuf wrote:
> > > > Anyway, the above is a separate problem. This patch looks
On Wed, Jun 12, 2019 at 09:50:08AM -0500, Josh Poimboeuf wrote:
> > Other than that, the same note as before, the 32bit JIT still seems
> > buggered, but I'm not sure you (or anybody else) cares enough about that
> > to fix it though. It seems to use ebp as its o
On Thu, Jun 13, 2019 at 06:57:10PM +, Song Liu wrote:
>
>
> > On Jun 13, 2019, at 6:21 AM, Josh Poimboeuf wrote:
> >
> > Improve code readability by moving the BPF JIT function epilogue
> > generation code to a dedicated emit_epilogue() function, analogous to
elf64-x86-64
Disassembly of section .text:
<.text>:
0:	48 b8 11 11 11 11 11 	movabs $0x1111111111111111,%rax
7:	11 11 11
--
Josh
On Thu, Jun 13, 2019 at 04:55:31PM +0100, Raphael Gault wrote:
> Hi Josh,
>
> On 5/28/19 11:24 PM, Josh Poimboeuf wrote:
> > On Tue, May 21, 2019 at 12:50:57PM +, Raphael Gault wrote:
> > > Hi Josh,
> > >
> > > Thanks for offering your help and
callchains work without
CONFIG_FRAME_POINTER")
Signed-off-by: Josh Poimboeuf
---
arch/x86/events/core.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index f0e4804515d8..6a7cfcadfc1e 100644
--- a/arch/x86/events/
neric support for reading any static local jump table array named
"jump_table", and rename the BPF variable accordingly, so objtool can
generate ORC data for ___bpf_prog_run().
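For anyone unfamiliar with what objtool is being taught to parse, the interpreter's dispatch looks conceptually like the following standalone example (simplified GNU C computed goto, not the kernel code itself); the static array of label addresses is the "jump_table" symbol objtool now searches for:

static int run(const unsigned char *insns)
{
	static const void * const jump_table[] = {
		[0] = &&op_exit,
		[1] = &&op_nop,
	};

	/*
	 * Indirect jump through the table: this is what defeats naive
	 * static analysis unless the table itself can be read.
	 */
	goto *jump_table[*insns];

op_nop:
	insns++;
	goto *jump_table[*insns];
op_exit:
	return 0;
}

int main(void)
{
	const unsigned char prog[] = { 1, 0 };	/* nop, then exit */

	return run(prog);
}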
Fixes: d15d356887e7 ("perf/x86: Make perf callchains work without
CONFIG_FRAME_POINTER")
Reported-by: Song
inder (building on patch 6).
- Patches 8-9 are some readability cleanups.
Josh Poimboeuf (8):
objtool: Fix ORC unwinding in non-JIT BPF generated code
x86/bpf: Move epilogue generation to a dedicated function
x86/bpf: Simplify prologue generation
x86/bpf: Support SIB byte generation
Improve code readability by moving the BPF JIT function epilogue
generation code to a dedicated emit_epilogue() function, analogous to
the existing emit_prologue() function.
Signed-off-by: Josh Poimboeuf
---
arch/x86/net/bpf_jit_comp.c | 37 -
1 file changed
n the
prologue. So remove those instructions for now.
Signed-off-by: Josh Poimboeuf
---
arch/x86/net/bpf_jit_comp.c | 100 +---
1 file changed, 47 insertions(+), 53 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index da8c988
In preparation for using R12 indexing instructions in BPF JIT code, add
support for generating the x86 SIB byte.
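For reference, the SIB byte packs scale, index and base into one byte; R12 needs it because its low three register bits (100b) in the ModRM r/m field are the escape meaning "a SIB byte follows", so [r12 + disp] cannot be encoded with ModRM alone. A standalone illustration (helper name made up, not the JIT's):

static unsigned char encode_sib(unsigned int scale, unsigned int index,
				unsigned int base)
{
	/* scale in bits 7-6, index register in bits 5-3, base in bits 2-0 */
	return (unsigned char)((scale << 6) | ((index & 7) << 3) | (base & 7));
}

int main(void)
{
	/* [r12] with no index: index field 100b means "none", base 100b = r12 */
	return encode_sib(0, 4, 4);	/* 0x24 */
}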
Signed-off-by: Josh Poimboeuf
---
arch/x86/net/bpf_jit_comp.c | 69 +
1 file changed, 54 insertions(+), 15 deletions(-)
diff --git a/arch/x86/net
then this will allow ORC to unwind through most
generated code despite there being no corresponding ORC entries.
Fixes: d15d356887e7 ("perf/x86: Make perf callchains work without
CONFIG_FRAME_POINTER")
Reported-by: Song Liu
Signed-off-by: Josh Poimboeuf
---
arch/x86/kern
ister. Change it to use R12 instead.
Fixes: d15d356887e7 ("perf/x86: Make perf callchains work without
CONFIG_FRAME_POINTER")
Reported-by: Song Liu
Signed-off-by: Josh Poimboeuf
---
arch/x86/net/bpf_jit_comp.c | 43 +
1 file changed, 25 in
Now that the comments have been converted to AT&T syntax, swap the order
of the src/dst arguments in the MOV-related functions and macros to
match the ordering of AT&T syntax.
Signed-off-by: Josh Poimboeuf
---
arch/x86/net/bpf_jit_comp.c | 44 ++---
Convert the BPF JIT assembly comments to AT&T syntax to reduce
confusion. AT&T syntax is the default standard, used throughout Linux
and by the GNU assembler.
Signed-off-by: Josh Poimboeuf
---
arch/x86/net/bpf_jit_comp.c | 156 ++--
1 file changed, 78 in
5ed85f4-bf2e-da91-71c1-46875d1c6...@infradead.org
I still can't reproduce it, and I still don't understand it...
--
Josh
On Wed, Jun 12, 2019 at 10:54:23AM +0200, Peter Zijlstra wrote:
> On Tue, Jun 11, 2019 at 10:05:01PM -0500, Josh Poimboeuf wrote:
> > On Fri, May 24, 2019 at 10:53:19AM +0200, Peter Zijlstra wrote:
> > > > For ORC, I'm thinking we may be able to just require that all g
On Wed, Jun 12, 2019 at 09:10:23AM -0400, Steven Rostedt wrote:
> On Tue, 11 Jun 2019 22:05:01 -0500
> Josh Poimboeuf wrote:
>
> > Right now, ftrace has a special hook in the ORC unwinder
> > (orc_ftrace_find). It would be great if we could get rid of that in
> > fav
me pointers behind in CONFIG_FRAME_POINTER-land forever...
Here are my latest BPF unwinder patches in case anybody wants a sneak
peek:
https://git.kernel.org/pub/scm/linux/kernel/git/jpoimboe/linux.git/log/?h=bpf-orc-fix
--
Josh
rivers/hwmon/smsc47m1.o: warning: objtool: fan_div_store()+0xb6: can't find
jump dest instruction at .text+0x93a
But I bet the root cause is the same.
This fixes it for me:
From: Josh Poimboeuf
Subject: [PATCH] hwmon/smsc47m1: Fix objtool warning caused by undefined
behavior
Objtool is rep
On Thu, Jun 06, 2019 at 04:04:48PM +, Song Liu wrote:
> Hi Josh,
>
> Have you got luck fixing the ORC side?
Here's the ORC fix. It's needed in addition to the bpf frame pointer
fix (the previous patch). I'll clean the patches up and post them soon.
diff
On Tue, Jun 11, 2019 at 10:29:31AM +0200, Peter Zijlstra wrote:
> On Mon, Jun 10, 2019 at 12:24:28PM -0500, Josh Poimboeuf wrote:
> > On Wed, Jun 05, 2019 at 03:08:07PM +0200, Peter Zijlstra wrote:
> > >
> > > Signed-off-by: Peter Zijlstra (Intel)
> > > ---
On Mon, Jun 10, 2019 at 06:45:52PM +, Nadav Amit wrote:
> > On Jun 10, 2019, at 11:33 AM, Josh Poimboeuf wrote:
> >
> > On Wed, Jun 05, 2019 at 03:08:06PM +0200, Peter Zijlstra wrote:
> >> --- a/arch/x86/include/asm/static_call.h
> >> +++ b/arch/x86/in
On Mon, Jun 10, 2019 at 06:33:26PM +, Nadav Amit wrote:
> > On Jun 10, 2019, at 10:19 AM, Josh Poimboeuf wrote:
> >
> > On Fri, Jun 07, 2019 at 10:37:56AM +0200, Peter Zijlstra wrote:
> >>>> +}
> >>>> +
> >>>>
rid of the above cruft, and instead just use the out-of-line
trampoline as the default for inline as well.
Then the inline case could fall back to the out-of-line implementation
(by patching the trampoline's jmp dest) before static_call_initialized
is set.
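A rough sketch of that fallback, with invented names (this is not Peter's API, just the shape of the idea): before the inline machinery is initialized, an update only retargets the out-of-line trampoline, which every call site still routes through.

#include <stdbool.h>

static bool static_call_initialized;

/* Stand-in for the kernel's real text-patching primitive. */
static void patch_jmp_target(void *jmp_insn, void *new_target)
{
	/* rewrite the jmp's destination ... */
	(void)jmp_insn;
	(void)new_target;
}

static void example_static_call_update(void *tramp, void *new_func)
{
	if (!static_call_initialized) {
		/* Early boot: everyone still jumps via the trampoline. */
		patch_jmp_target(tramp, new_func);
		return;
	}

	/* ... otherwise rewrite each inline call site directly ... */
}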
--
Josh
s on m
> + help
> + Test the static call interfaces.
> +
> + If unsure, say N.
> +
Any reason why we wouldn't just make this a built-in boot time test?
--
Josh
I'm not seeing what static_call needs differently.
I forgot why I did this, but it's probably for the case where there's a
static call site in module init code. It deserves a comment.
Theoretically, jump labels need this too.
BTW, there's a change coming that will require the text_mutex before
calling module_{disable,enable}_ro().
--
Josh
his fits almost all text_poke_bp() users, except
> arch_unoptimize_kprobe() which restores random text, and for that site
> we have to build an explicit emulate instruction.
>
> Cc: Daniel Bristot de Oliveira
> Cc: Nadav Amit
> Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Josh Poimboeuf
--
Josh
uested-by: Andy Lutomirski
> Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Josh Poimboeuf
--
Josh
I recall writing some of this code (some of the kernel_stack_pointer
removal stuff) so please give me a shout-out ;-)
Otherwise:
Reviewed-by: Josh Poimboeuf
--
Josh
On Wed, Jun 05, 2019 at 03:07:57PM +0200, Peter Zijlstra wrote:
> When CONFIG_FRAME_POINTER, we should mark pt_regs frames.
>
> Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Josh Poimboeuf
--
Josh
nwinder; see
> unwind_frame.c:decode_frame_pointer().
>
> Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Josh Poimboeuf
--
Josh
ing to restore_all_kernel. Inline resume_kernel
> in restore_all_kernel and avoid the CONFIG_PREEMPT dependent label.
>
> Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Josh Poimboeuf
--
Josh
original %rbp
> value. Peter, could you check the above commit?
The unwinder knows how to decode the encoded frame pointer. So it can
find regs by decoding the new rbp value, and it also knows that regs->bp
is the original rbp value.
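For reference, the x86-64 decoding is roughly this shape (paraphrased from arch/x86/kernel/unwind_frame.c; treat the details as approximate): a set low bit marks an RBP value that really points at a pt_regs block.

static struct pt_regs *decode_frame_pointer(unsigned long *bp)
{
	unsigned long regs = (unsigned long)bp;

	/* Bit 0 set means "encoded pointer to pt_regs", not a normal RBP. */
	if (!(regs & 0x1))
		return NULL;

	return (struct pt_regs *)(regs & ~0x1UL);
}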
Reviewed-by: Josh Poimboeuf
--
Josh
hat. I also simplified
> >> the prologue to resemble a GCC prologue, which decreases the prologue
> >> size quite a bit.
> >>
> >> Next week I can work on the corresponding ORC change. Then I can clean
> >> all the patches up and submit them properly.
On Fri, May 31, 2019 at 02:12:56PM -0500, Josh Poimboeuf wrote:
> > Anyway, the above is a separate problem. This patch looks
> > fine for the original problem.
>
> Thanks for the review. I'll post another version, with the above
> changes and with the patches split
r Mladek
> Acked-by: Miroslav Benes
> Reviewed-by: Kamalesh Babulal
Acked-by: Josh Poimboeuf
--
Josh
> would have with the patch), because klp_check_stack() returns, but it
> prints out that a task has an unreliable stack. Yes, it is pr_debug() only
> in the end, but still.
>
> I don't have a preference and my understanding is that Petr does not want
> to do v4. I can prepare a patch, but it would be nice to choose now. Josh?
> Anyone else?
My vote would be #1 -- just revert 1d98a69e5cef.
--
Josh
save_stack_trace_tsk_reliable(), which is
> implemented in arch/powerpc/
> - all other archs do not have CONFIG_HAVE_RELIABLE_STACKTRACE and there is
> stack_trace_save_tsk_reliable() returning ENOSYS for these cases in
> include/linux/stacktrace.c
I think you're right. stack_trace_save_tsk_reliable() in stacktrace.h
returning -ENOSYS serves the same purpose as the old weak version of
save_stack_trace_tsk_reliable() which is no longer called directly.
--
Josh
On Thu, May 30, 2019 at 03:54:14PM +0200, Petr Mladek wrote:
> On Wed 2019-05-29 14:02:24, Josh Poimboeuf wrote:
> > The above panic occurs when loading two modules at the same time with
> > ftrace enabled, where at least one of the modules is a livepatch module
e variation would be:
objtool check [check opts] + orc generate [orc opts] + mcount record [mcount opts] -- foo.o [bar.o]
or, just use '--' as a generic separator which can be used to separate
subcommands or file names.
objtool check [check opts] -- orc generate [orc opts] -- mcount record [mcount opts] -- foo.o [bar.o]
I kind of like that. But I think any of these variations would probably
work.
--
Josh
oing, but use an INIT
IPI instead of HLT to make sure the CPU is completely dead.
That may be a theoretical improvement but we'd still need to do the
whole "wake and play dead" dance which Jiri's patch is doing for offline
CPUs. So Jiri's patch looks ok to me.
--
Josh
On Fri, May 31, 2019 at 05:41:18PM +0200, Jiri Kosina wrote:
> On Fri, 31 May 2019, Josh Poimboeuf wrote:
>
> > The only question I'd have is if we have data on the power savings
> > difference between hlt and mwait. mwait seems to wake up on a lot of
> > dif
ea to use INIT IPI, I wonder if that would
work with SMT siblings? Specifically I wonder about the Intel issue
that requires siblings to have CR4.MCE set.
--
Josh
On Fri, May 31, 2019 at 01:42:02AM +0200, Jiri Kosina wrote:
> On Thu, 30 May 2019, Josh Poimboeuf wrote:
>
> > > > Reviewed-by: Thomas Gleixner
> > >
> > > Yes, it is, thanks!
> >
> > I still think changing monitor/mwait to use a fixmap add
e
> > > hibernate core changes,
> >
> > Ok.
> >
> > > so can I get an ACK from the x86 arch side here, please?
> >
> > No. Is the following good enough?
> >
> > Reviewed-by: Thomas Gleixner
>
> Yes, it is, thanks!
I still think changing monitor/mwait to use a fixmap address would be a
much cleaner way to fix this. I can try to work up a patch tomorrow.
--
Josh
missions changes -- are protected
by the text_mutex.
Reported-by: Johannes Erdfelt
Signed-off-by: Josh Poimboeuf
---
kernel/livepatch/core.c | 6 ++
kernel/module.c | 21 ++---
kernel/trace/ftrace.c | 10 +-
3 files changed, 33 insertions(+), 4 deletions(-)
dif
On Wed, May 29, 2019 at 07:29:04PM +0200, Jessica Yu wrote:
> +++ Josh Poimboeuf [21/05/19 11:42 -0500]:
> > On Tue, May 21, 2019 at 10:42:04AM -0400, Steven Rostedt wrote:
> > > On Tue, 21 May 2019 09:16:29 -0500
> > > Josh Poimboeuf wrote:
> > >
On Wed, May 29, 2019 at 06:26:59PM +0200, Jiri Kosina wrote:
> On Wed, 29 May 2019, Josh Poimboeuf wrote:
>
> > hibernation_restore() is called by user space at runtime, via ioctl or
> > sysfs. So I think this still doesn't fix the case where you've disabled
> &
wait_play_dead() to instead just monitor a
fixmap address which doesn't change for kaslr?
Is there a reason why maxcpus= doesn't do the CR4.MCE booted_once
dance?
--
Josh
On Wed, May 29, 2019 at 03:41:52PM +0200, Peter Zijlstra wrote:
> On Tue, May 28, 2019 at 09:43:28AM -0500, Josh Poimboeuf wrote:
> > Would it be feasible to eventually combine subcommands so that objtool
> > could do both ORC and mcount generation in a single invocation? I
>
On Wed, May 29, 2019 at 08:06:48AM -0400, Steven Rostedt wrote:
> On Wed, 29 May 2019 13:17:21 +0200 (CEST)
> Jiri Kosina wrote:
>
> > > > From: Josh Poimboeuf
> > > > Subject: [PATCH] livepatch: Fix ftrace module text permissions race
> > >
On Tue, May 21, 2019 at 12:50:57PM +, Raphael Gault wrote:
> Hi Josh,
>
> Thanks for offering your help and sorry for the late answer.
>
> My understanding is that a table of offsets is built by GCC, those
> offsets being scaled by 4 before adding them to the base labe
objtool/Build
> +++ b/tools/objtool/Build
> @@ -1,6 +1,7 @@
> objtool-y += arch/$(SRCARCH)/
> objtool-y += builtin-check.o
> objtool-y += builtin-orc.o
> +objtool-$(BUILD_C_RECORDMCOUNT) += builtin-mcount.o recordmcount.o
Can we just build these files unconditionally, even if they're not used?
Thus far, objtool doesn't have any kernel config dependencies like this.
It helps keep things simple and keeps objtool more separate from the
kernel.
So if you build recordmcount unconditionally then I think you can also
get rid of the BUILD_C_RECORDMCOUNT export, the CMD_MCOUNT define, and
cmd_nop().
--
Josh
mcount.c (78%)
> rename {scripts => tools/objtool}/recordmcount.h (78%)
> rename {scripts => tools/objtool}/recordmcount.pl (100%)
>
> --
> 2.20.1
>
--
Josh
On Fri, May 24, 2019 at 10:20:52AM +0800, Kairui Song wrote:
> On Fri, May 24, 2019 at 1:27 AM Josh Poimboeuf wrote:
> >
> > On Fri, May 24, 2019 at 12:41:59AM +0800, Kairui Song wrote:
> > > On Thu, May 23, 2019 at 11:24 PM Josh Poimboeuf
> > > wrote:
tool and in the decompressor.
>
> [0]
> https://lore.kernel.org/linux-arm-kernel/20190522150239.19314-1-ard.biesheu...@arm.com
>
> This patch plus [0] build and boot tested with x86_64_defconfig on QEMU/kvm +
> OVMF.
NACK based on
https://lkml.kernel.org/r/f2141ee5-d07a-6dd9-47c6-97e8fbdcc...@arm.com
--
Josh
On Fri, May 24, 2019 at 05:55:37PM +0200, Ard Biesheuvel wrote:
> On Fri, 24 May 2019 at 17:21, Josh Poimboeuf wrote:
> >
> > On Thu, May 23, 2019 at 10:29:39AM +0100, Ard Biesheuvel wrote:
> > >
> > >
> > > On 5/23/19 10:18 AM, Will Deacon wrote:
o begin with, and the
> only reason we enabled it by default at the time was to ensure that the PLT
> code got some coverage after we introduced it.
In code, percpu variables are accessed with absolute relocations, right?
Before I read your 3rd act, I was wondering if it would make sense to do
the same with the ksymtab relocations.
Like if we somehow [ insert much hand waving ] ensured that everybody
uses EXPORT_PER_CPU_SYMBOL() for percpu symbols instead of just
EXPORT_SYMBOL() then we could use a different macro to create the
ksymtab relocations for percpu variables, such that they use absolute
relocations.
Just an idea. Maybe the point is moot now.
--
Josh
On Fri, May 24, 2019 at 10:53:19AM +0200, Peter Zijlstra wrote:
> On Thu, May 23, 2019 at 10:24:13AM -0500, Josh Poimboeuf wrote:
>
> > Here's the latest version which should fix it in all cases (based on
> > tip/master):
> >
> >
> > https://git.ker
On Fri, May 24, 2019 at 12:41:59AM +0800, Kairui Song wrote:
> On Thu, May 23, 2019 at 11:24 PM Josh Poimboeuf wrote:
> >
> > On Thu, May 23, 2019 at 10:50:24PM +0800, Kairui Song wrote:
> > > > > Hi Josh, this still won't fix the problem.
> > > >
On Thu, May 23, 2019 at 10:50:24PM +0800, Kairui Song wrote:
> > > Hi Josh, this still won't fix the problem.
> > >
> > > Problem is not (or not only) with ___bpf_prog_run, what actually went
> > > wrong is with the JITed bpf code.
> >
> > T
On Thu, May 23, 2019 at 02:48:11PM +0800, Kairui Song wrote:
> On Thu, May 23, 2019 at 7:46 AM Josh Poimboeuf wrote:
> >
> > On Wed, May 22, 2019 at 12:45:17PM -0500, Josh Poimboeuf wrote:
> > > On Wed, May 22, 2019 at 02:49:07PM +, Alexei Starovoitov wrote:
> >
On Wed, May 22, 2019 at 12:45:17PM -0500, Josh Poimboeuf wrote:
> On Wed, May 22, 2019 at 02:49:07PM +, Alexei Starovoitov wrote:
> > The one that is broken is prog_tests/stacktrace_map.c
> > There we attach bpf to standard tracepoint where
> > kernel suppose to collect p
On Tue, May 21, 2019 at 12:50:57PM +, Raphael Gault wrote:
> Hi Josh,
>
> Thanks for offering your help and sorry for the late answer.
>
> My understanding is that a table of offsets is built by GCC, those
> offsets being scaled by 4 before adding them to the base labe
e0)
> [ 160.460312] c9e4: aa1f48d1 (trace_call_bpf+0x81/0x100)
> [ 160.460313] c5d8ebd1: b89d00c6bcc0 (0xb89d00c6bcc0)
> [ 160.460315] bce0b072: ab651be0
> (event_sched_migrate_task+0xa0/0xa0)
> [ 160.460316] 355cf319: ...
> [ 160.460316] 3b67f2ad: d89cffc3ae80 (0xd89cffc3ae80)
> [ 160.460316] 9a77e20b: 9ce3fba25000 (0x9ce3fba25000)
> [ 160.460317] 32cf7376: 0001 (0x1)
> [ 160.460317] 0051db74: b89d00c6bd20 (0xb89d00c6bd20)
> [ 160.460318] 40eb3bf7: aa22be81
> (perf_trace_run_bpf_submit+0x41/0xb0)
Is there an easy way to recreate this?
--
Josh
gnores that function because it can't follow the jump table.
--
Josh
On Tue, May 21, 2019 at 11:42:27AM -0500, Josh Poimboeuf wrote:
> void module_enable_ro(const struct module *mod, bool after_init)
> {
> + lockdep_assert_held(&text_mutex);
> +
This assertion fails; it turns out the module code also calls this
function (oops). I may move
ions/alternatives/paravirt?
Yeah, technically there shouldn't be a need to do the frobbing unless
there are .klp.rela or .klp.arch sections for the given object. Though
I'm not sure it really matters all that much since loading a livepatch
is a pretty rare event.
--
Josh
On Tue, May 21, 2019 at 10:42:04AM -0400, Steven Rostedt wrote:
> On Tue, 21 May 2019 09:16:29 -0500
> Josh Poimboeuf wrote:
>
> > > Hmm, this may blow up with lockdep, as I believe we already have a
> > > locking dependency of:
> > >
> > > text_mu
On Mon, May 20, 2019 at 05:39:10PM -0400, Steven Rostedt wrote:
> On Mon, 20 May 2019 16:19:31 -0500
> Josh Poimboeuf wrote:
>
> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index a12aff849c04..8259d4ba8b00 100644
> > --- a/kernel/trace/ftra
On Mon, May 20, 2019 at 02:09:05PM -0700, Johannes Erdfelt wrote:
> On Mon, May 20, 2019, Joe Lawrence wrote:
> > [ fixed jeyu's email address ]
>
> Thank you, the bounce message made it seem like my mail server was
> blocked and not that the address didn't exist.
>
> I think MAINTAINERS needs a
e
> array.
>
> In the case of the above config and trace, be sure to return the
> stacktrace_cookie.len on stack_trace_save_tsk_reliable() success.
>
> Fixes: 25e39e32b0a3f ("livepatch: Simplify stack trace retrieval")
> Reported-by: Miroslav Benes
> Signed-off-by: Joe Lawrence
It's great to see the livepatch selftests working and finding
regressions.
Acked-by: Josh Poimboeuf
--
Josh
n with modified stack frame
>
> AFAIK those are non-critical, i.e. stack traces may be wrong (or not),
> but it does not mean the generated kernel itself is wrong. CC'ing the
> objtool maintainers too.
I don't think I recognize those warnings. Do you also see them in the
upstream kernel?
--
Josh
/issues/481
Signed-off-by: Nathan Chancellor
Reviewed-by: Nick Desaulniers
Reviewed-by: Mukesh Ojha
Signed-off-by: Josh Poimboeuf
---
tools/objtool/Makefile | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
index 53f8be0f4a1f
MT;
> + else
> + pr_crit("Unsupported mitigations=%s, system may still be vulnerable\n",
> + arg);
>
> return 0;
> }
> --
> 2.17.1
>
Acked-by: Josh Poimboeuf
--
Josh
witch tables were tricky to get right on x86. If you share an example
(or even just a .o file) I can take a look. Hopefully they're somewhat
similar to x86 switch tables. Otherwise we may want to consider a
different approach (for example maybe a GCC plugin could help annotate
them).
--
Josh
hich is used elsewhere in other
> tools/ Makefiles).
>
> Link: https://github.com/ClangBuiltLinux/linux/issues/481
> Signed-off-by: Nathan Chancellor
Thanks Nathan. I'll send it along to the tip tree.
--
Josh
From: Raphael Gault
The directive specified in the documentation to add an exception
for a single file in a Makefile was inverted.
Signed-off-by: Raphael Gault
Signed-off-by: Josh Poimboeuf
---
tools/objtool/Documentation/stack-validation.txt | 2 +-
1 file changed, 1 insertion(+), 1
+++ b/tools/objtool/Documentation/stack-validation.txt
> @@ -306,7 +306,7 @@ ignore it:
>
> - To skip validation of a file, add
>
> -OBJECT_FILES_NON_STANDARD_filename.o := n
> +OBJECT_FILES_NON_STANDARD_filename.o := y
>
>to the Makefile.
Thanks Raphael. I will send it along to -tip.
--
Josh
Commit-ID: e6f393bc939d566ce3def71232d8013de9aaadde
Gitweb: https://git.kernel.org/tip/e6f393bc939d566ce3def71232d8013de9aaadde
Author: Josh Poimboeuf
AuthorDate: Mon, 13 May 2019 12:01:32 -0500
Committer: Ingo Molnar
CommitDate: Mon, 13 May 2019 20:31:17 +0200
objtool: Fix function