On Sun, Jan 31, 2021 at 3:39 PM Kyle Huey wrote:
>
> On Sun, Jan 31, 2021 at 3:36 PM Andy Lutomirski wrote:
> > > The odd system call tracing part I have no idea who depends on it
> > > (apparently "rr", which I assume is some replay thing), and I suspect
> On Jan 31, 2021, at 2:57 PM, Linus Torvalds
> wrote:
>
> On Sun, Jan 31, 2021 at 2:20 PM Andy Lutomirski wrote:
>>
>> A smallish test that we could stick in selftests would be great if that’s
>> straightforward.
>
> Side note: it would be good to
) as
compared to changing noinline to __always_inline in the definition of
mm_fault_error().
Cc: Dave Hansen
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
arch/x86/mm/fault.c | 97 +
1 file changed, 45 insertions(+), 52 deletions(-)
diff --git a/arch
, with or without SMEP, we
should not try to resolve the page fault. This is an error, pure and
simple. Rearrange the code so that we catch this case early, check for
erratum #93, and bail out.
Cc: Dave Hansen
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
arch/x86/mm/fault.c | 23
> On Jan 31, 2021, at 2:08 PM, Kyle Huey wrote:
>
> On Sun, Jan 31, 2021 at 2:04 PM Andy Lutomirski wrote:
>> Indeed, and I have tests for this.
>
> Do you mean you already have a test case or that you would like a
> minimized test case?
A smallish test that we
in question was WRUSS.
do_sigbus() should not send SIGBUS for WRUSS -- it should handle it like
any other kernel mode failure.
Cc: Dave Hansen
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
arch/x86/mm/fault.c | 15 +++
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git
If we get a SMEP violation or a fault that would have been a SMEP
violation if we had SMEP, we shouldn't run fixups. Just OOPS.
Cc: Dave Hansen
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
arch/x86/mm/fault.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git
> On Jan 31, 2021, at 1:30 PM, Linus Torvalds
> wrote:
>
> Btw Kyle, do you have a good simple test-case for this? Clearly this
> is some weird special case use, and regular single-stepping works
> fine.
>
>
Indeed, and I have tests for this.
TBH, the TIF_SINGLESTEP code makes no sense,
We can drop an indentation level and remove the last user_mode(regs) == true
caller of no_context() by directly OOPSing for implicit kernel faults from
usermode.
Cc: Dave Hansen
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
arch/x86/mm/fault.c | 59
ch_tlbbatch_flush()?
>
> Signed-off-by: Nadav Amit
> Cc: Andrea Arcangeli
> Cc: Andrew Morton
> Cc: Andy Lutomirski
> Cc: Dave Hansen
> Cc: Peter Zijlstra
> Cc: Thomas Gleixner
> Cc: Will Deacon
> Cc: Yu Zhao
> Cc: Nick Piggin
> Cc: x...@kernel.org
> ---
>
On Sat, Jan 30, 2021 at 4:16 PM Nadav Amit wrote:
>
> From: Nadav Amit
>
> x86 currently has a TLB-generation tracking logic that can be used by
> additional architectures (as long as they implement some additional
> logic).
>
> Extract the relevant pieces of code from x86 to general TLB code.
Not all callers of no_context() want to run exception fixups.
Separate the OOPS code out from the fixup code in no_context().
Cc: Dave Hansen
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
arch/x86/mm/fault.c | 116 +++-
1 file changed, 62
Cc: Peter Zijlstra
Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Signed-off-by: Andy Lutomirski
---
arch/x86/mm/fault.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 04cc98ec2423..d39946ad8a91 100644
--- a/arch/x86/mm
If fault_signal_pending() returns true, then the core mm has unlocked the
mm for us. Add a comment to help future readers of this code.
Cc: Dave Hansen
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
arch/x86/mm/fault.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff
signal delivery on CPUs that are not affected by AMD erratum #91.
Andy Lutomirski (11):
x86/fault: Fix AMD erratum #91 errata fixup for user code
x86/fault: Fold mm_fault_error() into do_user_addr_fault()
x86/fault/32: Move is_f00f_bug() to do_kern_addr_fault()
x86/fault: Document the locking
bad_area() and its relatives are called from many places in fault.c, and
exactly one of them wants the F00F workaround.
__bad_area_nosemaphore() no longer contains any kernel fault code, which
prepares for further cleanups.
Cc: Dave Hansen
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
;maccess: unify the probe kernel arch hooks")
Cc: sta...@vger.kernel.org
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Christoph Hellwig
Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Cc: Masami Hiramatsu
Signed-off-by: Andy Lutomirski
---
arch/x86/mm/fault.c | 31 +--
The name no_context() has never been very clear. It's only called for
faults from kernel mode, so rename it and change the no-longer-useful
user_mode(regs) check to a WARN_ON_ONCE.
Cc: Dave Hansen
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
arch/x86/mm/fault.c | 28
On Sat, Jan 30, 2021 at 5:17 PM Nadav Amit wrote:
>
> > On Jan 30, 2021, at 5:07 PM, Andy Lutomirski wrote:
> >
> > Adding Andrew Cooper, who has a distressingly extensive understanding
> > of the x86 PTE magic.
> >
> > On Sat, Jan 30, 2021 at 4:16 PM N
On Sat, Jan 30, 2021 at 5:19 PM Nadav Amit wrote:
>
> > On Jan 30, 2021, at 5:02 PM, Andy Lutomirski wrote:
> >
> > On Sat, Jan 30, 2021 at 4:16 PM Nadav Amit wrote:
> >> From: Nadav Amit
> >>
> >> fullmm in mmu_gather is supposed to indicate
the PTE is reread.
>
> Signed-off-by: Nadav Amit
> Cc: Andrea Arcangeli
> Cc: Andrew Morton
> Cc: Andy Lutomirski
> Cc: Dave Hansen
> Cc: Peter Zijlstra
> Cc: Thomas Gleixner
> Cc: Will Deacon
> Cc: Yu Zhao
> Cc: Nick Piggin
> Cc: x...@kernel.org
> ---
On Sat, Jan 30, 2021 at 4:16 PM Nadav Amit wrote:
>
> From: Nadav Amit
>
> fullmm in mmu_gather is supposed to indicate that the mm is torn-down
> (e.g., on process exit) and can therefore allow certain optimizations.
> However, tlb_finish_mmu() sets fullmm, when in fact it wants to say that
>
On Sat, Jan 30, 2021 at 4:16 PM Nadav Amit wrote:
>
> From: Nadav Amit
>
> There are currently (at least?) 5 different TLB batching schemes in the
> kernel:
>
> 1. Using mmu_gather (e.g., zap_page_range()).
>
> 2. Using {inc|dec}_tlb_flush_pending() to inform other threads on the
>    ongoing
> On Jan 29, 2021, at 11:42 AM, Dave Hansen wrote:
>
> On 1/27/21 1:25 PM, Yu-cheng Yu wrote:
>> +help
>> + Control-flow protection is a hardware security hardening feature
>> + that detects function-return address or jump target changes by
>> + malicious code.
>
> It's
the
32-bit regset, so this patch has no effect on that path.)
Fix it and deobfuscate the code by hardcoding the 64-bit view in the
x32 ptrace() and selecting the view based on the kernel config in
the native ptrace().
Signed-off-by: Andy Lutomirski
---
Every time I look at ptrace, it grosses me
>> the NMI code by changing paravirt code into native code and left a comment
>> about "inspecting RIP instead". But until now, "inspecting RIP instead"
>> has not yet happened, and this patch tries to complete it.
>>
>> Comments in the code was from Andy Lutom
On Sun, Jan 24, 2021 at 5:13 PM Lai Jiangshan wrote:
>
> From: Lai Jiangshan
>
> The commit 929bacec21478("x86/entry/64: De-Xen-ify our NMI code") simplified
> the NMI code by changing paravirt code into native code and left a comment
> about "inspecting RIP instead". But until now, "inspecting
On Fri, Jan 22, 2021 at 11:48 PM Lai Jiangshan wrote:
>
> From: Lai Jiangshan
>
> When X86_BUG_CPU_MELTDOWN & KPTI, cpu_current_top_of_stack lives in the
> TSS which is also in the user CR3 and it becomes a coveted fruit. An
> attacker can fetch the kernel stack top from it and continue next
The following commit has been merged into the x86/cleanups branch of tip:
Commit-ID: 8ece53ef7f428ee3f8eab936268b1a3fe2725e6b
Gitweb:
https://git.kernel.org/tip/8ece53ef7f428ee3f8eab936268b1a3fe2725e6b
Author: Andy Lutomirski
AuthorDate: Tue, 19 Jan 2021 09:40:55 -08:00
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: e45122893a9870813f9bd7b4add4f613e6f29008
Gitweb:
https://git.kernel.org/tip/e45122893a9870813f9bd7b4add4f613e6f29008
Author: Andy Lutomirski
AuthorDate: Wed, 20 Jan 2021 21:09:48 -08:00
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: 67de8dca50c027ca0fa3b62a488ee5035036a0da
Gitweb:
https://git.kernel.org/tip/67de8dca50c027ca0fa3b62a488ee5035036a0da
Author: Andy Lutomirski
AuthorDate: Wed, 20 Jan 2021 21:09:49 -08:00
The remaining callers of kernel_fpu_begin() in 64-bit kernels don't use 387
instructions, so there's no need to sanitize the FPU state. Skip it to get
most of the performance we lost back.
Reported-by: Krzysztof Olędzki
Signed-off-by: Andy Lutomirski
---
arch/x86/include/asm/fpu/api.h | 12
Fixes: 7ad816762f9b ("x86/fpu: Reset MXCSR to default in kernel_fpu_begin()")
Cc:
Reported-by: Krzysztof Mazur
Signed-off-by: Andy Lutomirski
---
arch/x86/lib/mmx_32.c | 20 +++-
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/arch/x86/lib/mmx_32.c b/ar
before CR4.OSFXSR gets set, and
initializing MXCSR will fail.
- Any future in-kernel users of XFD (eXtended Feature Disable)-capable
dynamic states will need special handling.
Add a more specific API that allows callers to specify exactly what they want.
Signed-off-by: Andy Lutomirski
---
arch/x
initializing FCW on 64-bit kernels.
Cc: Arnd Bergmann
Signed-off-by: Andy Lutomirski
---
arch/x86/include/asm/efi.h | 24
arch/x86/platform/efi/efi_64.c | 4 ++--
2 files changed, 22 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/efi.h b/arch/x86
improvements (Boris)
Changes from v1:
- Fix MMX better -- MMX really does need FNINIT.
- Improve the EFI code.
- Rename the KFPU constants.
- Changelog improvements.
Andy Lutomirski (4):
x86/fpu: Add kernel_fpu_begin_mask() to selectively initialize state
x86/mmx: Use KFPU_387 for MMX string
On Tue, Jan 19, 2021 at 11:51 PM Krzysztof Olędzki wrote:
>
> On 2021-01-19 at 09:38, Andy Lutomirski wrote:
> > This series fixes two regressions: a boot failure on AMD K7 and a
> > performance regression on everything.
> >
> > I did a double-take here --
> On Jan 20, 2021, at 3:53 AM, Borislav Petkov wrote:
>
> On Tue, Jan 19, 2021 at 09:38:59AM -0800, Andy Lutomirski wrote:
>> Currently, requesting kernel FPU access doesn't distinguish which parts of
>> the extended ("FPU") state are neede
: Peter Xu
Cc: Stas Sergeev
Cc: Brian Gerst
Signed-off-by: Andy Lutomirski
---
Changes from v1:
- Get rid of the whole screen_bitmap and the fault code, too.
arch/x86/include/asm/vm86.h | 1 -
arch/x86/include/uapi/asm/vm86.h | 4 +--
arch/x86/kernel/vm86_32.c | 62
initializing FCW on 64-bit kernels.
Cc: Arnd Bergmann
Signed-off-by: Andy Lutomirski
---
arch/x86/include/asm/efi.h | 24
arch/x86/platform/efi/efi_64.c | 4 ++--
2 files changed, 22 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/efi.h b/arch/x86
Fixes: 7ad816762f9b ("x86/fpu: Reset MXCSR to default in kernel_fpu_begin()")
Reported-by: Krzysztof Mazur
Signed-off-by: Andy Lutomirski
---
arch/x86/lib/mmx_32.c | 20 +++-
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/arch/x86/lib/mmx_32.c b/arch/x86/
code.
- Rename the KFPU constants.
- Changelog improvements.
Andy Lutomirski (4):
x86/fpu: Add kernel_fpu_begin_mask() to selectively initialize state
x86/mmx: Use KFPU_387 for MMX string operations
x86/fpu: Make the EFI FPU calling convention explicit
x86/fpu/64: Don't FNINIT
before CR4.OSFXSR gets set, and
initializing MXCSR will fail.
- Any future in-kernel users of XFD (eXtended Feature Disable)-capable
dynamic states will need special handling.
This patch adds a more specific API that allows callers to specify exactly
what they want.
Signed-off-by: Andy Lutomirski
The remaining callers of kernel_fpu_begin() in 64-bit kernels don't use 387
instructions, so there's no need to sanitize the FPU state. Skip it to get
most of the performance we lost back.
Reported-by: Krzysztof Olędzki
Signed-off-by: Andy Lutomirski
---
arch/x86/include/asm/fpu/api.h | 12
before CR4.OSFXSR gets set, and
initializing MXCSR will fail.
- Any future in-kernel users of XFD (eXtended Feature Disable)-capable
dynamic states will need special handling.
This patch adds a more specific API that allows callers to specify exactly
what they want.
Signed-off-by: Andy
and also the rather
slow FNINIT instructions.
Fixes: 7ad816762f9b ("x86/fpu: Reset MXCSR to default in kernel_fpu_begin()")
Reported-by: Krzysztof Mazur
Signed-off-by: Andy Lutomirski
---
arch/x86/lib/mmx_32.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --
The remaining callers of kernel_fpu_begin() in 64-bit kernels don't use 387
instructions, so there's no need to sanitize FCW. Skip it to get the
performance we lost back.
Reported-by: Krzysztof Olędzki
Signed-off-by: Andy Lutomirski
---
arch/x86/include/asm/fpu/api.h | 12
1 file
This series fixes two regressions: a boot failure on AMD K7 and a
performance regression on everything.
I did a double-take here -- the regressions were reported by different
people, both named Krzysztof :)
Andy Lutomirski (4):
x86/fpu: Add kernel_fpu_begin_mask() to selectively initialize
to safely change the default semantics of
kernel_fpu_begin() to stop initializing FCW on 64-bit kernels.
Cc: Arnd Bergmann
Signed-off-by: Andy Lutomirski
---
arch/x86/include/asm/efi.h | 4 ++--
arch/x86/include/asm/fpu/api.h | 7 +++
arch/x86/platform/efi/efi_64.c | 2 +-
3 files
On Fri, Jan 15, 2021 at 1:03 AM Arnd Bergmann wrote:
>
> On Fri, Jan 15, 2021 at 3:06 AM Ryan Houdek wrote:
> > On Wed, Jan 6, 2021 at 12:49 AM Arnd Bergmann wrote:
> >> On Wed, Jan 6, 2021 at 7:48 AM wrote:
> >> > From: Ryan Houdek
> >> ...
> >>
> >> For x86, this has another complication,
On Fri, Jan 15, 2021 at 10:14 AM Marco Faltelli
wrote:
>
> hr_sleep is a new system call engineered for nanosecond time scale
> granularities.
> With respect to nanosleep, it uses a single value representation
> of the sleep period.
> hr_sleep achieves 15x improvement for microsecond scale timers
On Mon, Jan 11, 2021 at 1:45 PM Tony Luck wrote:
>
> Linux can now recover from machine checks where kernel code is
> doing get_user() to access application memory. But there isn't
> a way to distinguish whether get_user() failed because of a page
> fault or a machine check.
>
> Thus there is a
On Thu, Jan 14, 2021 at 6:51 AM Krzysztof Mazur wrote:
>
> On Thu, Jan 14, 2021 at 03:07:37PM +0100, Borislav Petkov wrote:
> > On Thu, Jan 14, 2021 at 01:36:57PM +0100, Krzysztof Mazur wrote:
> > > The OSFXSR must be set only on CPUs with SSE. There
> > > are some CPUs with 3DNow!, but without
> On Jan 12, 2021, at 5:50 PM, Luck, Tony wrote:
>
> On Tue, Jan 12, 2021 at 02:04:55PM -0800, Andy Lutomirski wrote:
>>> But we know that the fault happend in a get_user() or copy_from_user() call
>>> (i.e. an RIP with an extable recovery address). Does con
> On Jan 12, 2021, at 2:29 PM, Nadav Amit wrote:
>
>
>>
>> On Jan 12, 2021, at 1:43 PM, Will Deacon wrote:
>>
>> On Tue, Jan 12, 2021 at 12:38:34PM -0800, Nadav Amit wrote:
On Jan 12, 2021, at 11:56 AM, Yu Zhao wrote:
> On Tue, Jan 12, 2021 at 11:15:43AM -0800, Nadav Amit
> On Jan 12, 2021, at 12:52 PM, Luck, Tony wrote:
>
> On Tue, Jan 12, 2021 at 10:57:07AM -0800, Andy Lutomirski wrote:
>>> On Tue, Jan 12, 2021 at 10:24 AM Luck, Tony wrote:
>>>
>>> On Tue, Jan 12, 2021 at 09:21:21AM -0800, Andy Lutomirski wrote:
On Tue, Jan 12, 2021 at 9:59 AM Sean Christopherson wrote:
>
> On Tue, Jan 12, 2021, Sean Christopherson wrote:
> > On Tue, Jan 12, 2021, Wei Huang wrote:
> > > From: Bandan Das
> > >
> > > While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
> > > CPUs check EAX against
On Tue, Jan 12, 2021 at 10:24 AM Luck, Tony wrote:
>
> On Tue, Jan 12, 2021 at 09:21:21AM -0800, Andy Lutomirski wrote:
> > Well, we need to do *something* when the first __get_user() trips the
> > #MC. It would be nice if we could actually fix up the page tables
> >
On Tue, Jan 12, 2021 at 9:16 AM Luck, Tony wrote:
>
> On Tue, Jan 12, 2021 at 09:00:14AM -0800, Andy Lutomirski wrote:
> > > On Jan 11, 2021, at 2:21 PM, Luck, Tony wrote:
> > >
> > > On Mon, Jan 11, 2021 at 02:11:56PM -0800, Andy Lutomirski wrote:
> >
On Tue, Jan 12, 2021 at 9:02 AM Metzger, Markus T
wrote:
>
> > [ 26.990644] getreg: gs_base = 0xf7f8e000
> > [ 26.991694] getreg: GS=0x63, GSBASE=0xf7f8e000
> > [ 26.993117] PTRACE_SETREGS
> > [ 26.993813] putreg: change gsbase from 0xf7f8e000 to 0x0
> > [ 26.995134] putreg: write
On Mon, Jan 11, 2021 at 2:18 PM Linus Torvalds
wrote:
>
> On Mon, Jan 11, 2021 at 11:19 AM Linus Torvalds
> wrote:
> Actually, what I think might be a better model is to actually
> strengthen the rules even more, and get rid of GUP_PIN_COUNTING_BIAS
> entirely.
>
> What we could do is just make
> On Jan 11, 2021, at 2:21 PM, Luck, Tony wrote:
>
> On Mon, Jan 11, 2021 at 02:11:56PM -0800, Andy Lutomirski wrote:
>>
>>>> On Jan 11, 2021, at 1:45 PM, Tony Luck wrote:
>>>
>>> Recovery action when get_user() triggers a machine check uses the fix
On Tue, Jan 12, 2021 at 3:39 AM Metzger, Markus T
wrote:
>
> > The GDB behavior looks to be different between the two cases -- with vs
> > without gdb server, when I checked the GS/GSBASE values on the ptrace front.
>
> 64-bit GDB doesn't support FSGSBASE for 32-bit inferiors and it looks like
>
> On Jan 12, 2021, at 7:46 AM, Bandan Das wrote:
>
> Andy Lutomirski writes:
> ...
>>>>>> #endif diff --git a/arch/x86/kvm/mmu/mmu.c
>>>>>> b/arch/x86/kvm/mmu/mmu.c index 6d16481aa29d..c5c4aaf01a1a 100644
>>>>>> --- a/arch/x8
> On Jan 12, 2021, at 7:17 AM, Maxim Levitsky wrote:
>
> On Tue, 2021-01-12 at 07:11 -0800, Andy Lutomirski wrote:
>>>> On Jan 12, 2021, at 4:15 AM, Vitaly Kuznetsov wrote:
>>>
>>> Wei Huang writes:
>>>
>>>> From: Bandan
> On Jan 12, 2021, at 4:15 AM, Vitaly Kuznetsov wrote:
>
> Wei Huang writes:
>
>> From: Bandan Das
>>
>> While running VM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
>> CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
>> before checking VMCB's instruction
The following commit has been merged into the x86/misc branch of tip:
Commit-ID: 9297e602adf8d5587d83941c48e4dbae46c8df5f
Gitweb:
https://git.kernel.org/tip/9297e602adf8d5587d83941c48e4dbae46c8df5f
Author: Andy Lutomirski
AuthorDate: Mon, 02 Nov 2020 11:54:02 -08:00
On Mon, Jan 11, 2021 at 3:52 PM Tom de Vries wrote:
>
> On 1/12/21 12:40 AM, Andy Lutomirski wrote:
> > On Mon, Jan 11, 2021 at 1:06 PM Andy Lutomirski wrote:
> >>
> >>
> >>> On Jan 11, 2021, at 12:00 PM, Borislav Petkov wrote:
> >>>
On Mon, Jan 11, 2021 at 1:06 PM Andy Lutomirski wrote:
>
>
> > On Jan 11, 2021, at 12:00 PM, Borislav Petkov wrote:
> >
>
>
> > Or do you mean I should add "unsafe_fsgsbase" to grub cmdline and bisect
> > with fsgsbase enabled in all test kernels
> On Jan 11, 2021, at 1:45 PM, Tony Luck wrote:
>
> Recovery action when get_user() triggers a machine check uses the fixup
> path to make get_user() return -EFAULT. Also queue_task_work() sets up
> so that kill_me_maybe() will be called on return to user mode to send a
> SIGBUS to the
> On Jan 11, 2021, at 12:00 PM, Borislav Petkov wrote:
>
> Or do you mean I should add "unsafe_fsgsbase" to grub cmdline and bisect
> with fsgsbase enabled in all test kernels?
Yes. But I can also look myself in a bit.
k canary in %gs...
Hmm. Can you try booting with unsafe_fsgsbase and bisecting further?
And maybe send me your test binary? I tried to reproduce this, but it
worked fine, even if I compile the test program with
-fstack-protector-all.
Off the top of my head, I would have expected this to f
> On Jan 9, 2021, at 12:17 PM, ebied...@xmission.com wrote:
>
> Andy Lutomirski writes:
>
>> The implementation was rather buggy. It unconditionally marked PTEs
>> read-only, even for VM_SHARED mappings. I'm not sure whether this is
>> actually a problem,
> On Jan 8, 2021, at 3:34 PM, Andrea Arcangeli wrote:
>
> On Fri, Jan 08, 2021 at 10:31:24AM -0800, Andy Lutomirski wrote:
>> Can we just remove vmsplice() support? We could make it do a normal
>
>> copy, thereby getting rid of a fair amount of nastiness and potential
: Peter Xu
Signed-off-by: Andy Lutomirski
---
arch/x86/include/uapi/asm/vm86.h | 2 +-
arch/x86/kernel/vm86_32.c | 55 ++--
2 files changed, 10 insertions(+), 47 deletions(-)
diff --git a/arch/x86/include/uapi/asm/vm86.h b/arch/x86/include/uapi/asm/vm86.h
index
On Fri, Jan 8, 2021 at 10:19 AM Jason Gunthorpe wrote:
>
> On Fri, Jan 08, 2021 at 12:00:36PM -0500, Andrea Arcangeli wrote:
> > > The majority cannot be converted to notifiers because they are DMA
> > > based. Every one of those is an ABI for something, and does not expect
> > > extra privilege
On Tue, Jan 5, 2021 at 1:53 AM David Laight wrote:
>
> From: Andy Lutomirski
> > Sent: 04 January 2021 23:04
> ...
> > >> The x32 system calls have their own system call table and it would be
> > >> trivial to set a flag like TS_COMPAT when looking up a
> On Jan 5, 2021, at 5:26 AM, Will Deacon wrote:
>
> Hi Andy,
>
> Sorry for the slow reply, I was socially distanced from my keyboard.
>
>> On Mon, Dec 28, 2020 at 04:36:11PM -0800, Andy Lutomirski wrote:
>> On Mon, Dec 28, 2020 at 4:11 PM Nicholas Piggin w
> On Jan 4, 2021, at 2:36 PM, David Laight wrote:
>
> From: Eric W. Biederman
>> Sent: 04 January 2021 20:41
>>
>> Al Viro writes:
>>
>>> On Mon, Jan 04, 2021 at 12:16:56PM +, David Laight wrote:
On x86 in_compat_syscall() is defined as:
in_ia32_syscall() ||
On Mon, Dec 28, 2020 at 4:36 PM Nicholas Piggin wrote:
>
> Excerpts from Andy Lutomirski's message of December 29, 2020 7:06 am:
> > On Mon, Dec 28, 2020 at 12:32 PM Mathieu Desnoyers
> > wrote:
> >>
> >> ----- On Dec 28, 2020, at 2:44 PM, A
On Mon, Dec 28, 2020 at 4:11 PM Nicholas Piggin wrote:
>
> Excerpts from Andy Lutomirski's message of December 28, 2020 4:28 am:
> > The old sync_core_before_usermode() comments said that a non-icache-syncing
> > return-to-usermode instruction is x86-specific and that all other
> > architectures
On Mon, Dec 28, 2020 at 1:09 PM Mathieu Desnoyers
wrote:
>
> - On Dec 27, 2020, at 4:36 PM, Andy Lutomirski l...@kernel.org wrote:
>
> [...]
>
> >> You seem to have noticed odd cases on arm64 where this guarantee does not
> >> match reality. Where exa
On Mon, Dec 28, 2020 at 11:09 AM Russell King - ARM Linux admin
wrote:
>
> On Mon, Dec 28, 2020 at 07:29:34PM +0100, Jann Horn wrote:
> > After chatting with rmk about this (but without claiming that any of
> > this is his opinion), based on the manpage, I think membarrier()
> > currently doesn't
On Mon, Dec 28, 2020 at 12:32 PM Mathieu Desnoyers
wrote:
>
> - On Dec 28, 2020, at 2:44 PM, Andy Lutomirski l...@kernel.org wrote:
>
> > On Mon, Dec 28, 2020 at 11:09 AM Russell King - ARM Linux admin
> > wrote:
> >>
> >> On Mon, Dec 28, 20
On Mon, Dec 28, 2020 at 10:30 AM Jann Horn wrote:
>
> On Mon, Dec 28, 2020 at 6:14 PM Andy Lutomirski wrote:
> > On Mon, Dec 28, 2020 at 2:25 AM Russell King - ARM Linux admin
> > wrote:
> > >
> > > On Sun, Dec 27, 2020 at 01:36:13PM -0800, Andy Lutomirski
On Mon, Dec 28, 2020 at 9:23 AM Russell King - ARM Linux admin
wrote:
>
> On Mon, Dec 28, 2020 at 09:14:23AM -0800, Andy Lutomirski wrote:
> > On Mon, Dec 28, 2020 at 2:25 AM Russell King - ARM Linux admin
> > wrote:
> > >
> > > On Sun, Dec 27, 2020 at 01:
On Mon, Dec 28, 2020 at 2:25 AM Russell King - ARM Linux admin
wrote:
>
> On Sun, Dec 27, 2020 at 01:36:13PM -0800, Andy Lutomirski wrote:
> > On Sun, Dec 27, 2020 at 12:18 PM Mathieu Desnoyers
> > wrote:
> > >
> > > ----- On Dec 27, 2020, at 1:28 PM, A
On Sun, Dec 27, 2020 at 12:18 PM Mathieu Desnoyers
wrote:
>
> - On Dec 27, 2020, at 1:28 PM, Andy Lutomirski l...@kernel.org wrote:
>
> >
> > I admit that I'm rather surprised that the code worked at all on arm64,
> > and I'm suspicious that it has never been very
r: Provide core serializing command,
*_SYNC_CORE")
Signed-off-by: Andy Lutomirski
---
Hi arm64 and powerpc people-
This is part of a series here:
https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/log/?h=x86/fixes
Before I send out the whole series, I'm hoping that some arm64 an
> On Dec 23, 2020, at 2:29 PM, Yu Zhao wrote:
>
>
> I was hesitant to suggest the following because it isn't that straight
> forward. But since you seem to be less concerned with the complexity,
> I'll just bring it on the table -- it would take care of both ufd and
> clear_refs_write,
On Sat, Dec 19, 2020 at 2:06 PM Nadav Amit wrote:
> > [ I have in mind another solution, such as keeping in each page-table a
> > “table-generation” which is the mm-generation at the time of the change,
> > and only flush if “table-generation”==“mm-generation”, but it requires
> > some thought on
On Mon, Dec 21, 2020 at 8:16 PM Linus Torvalds
wrote:
>
> On Mon, Dec 21, 2020 at 7:19 PM Andy Lutomirski wrote:
> >
> > Ugh, this is unpleasantly complicated.
>
> What I *should* have said is that *because* userfaultfd is changing
> the VM layout, it should alwa
On Mon, Dec 21, 2020 at 3:22 PM Linus Torvalds
wrote:
>
> On Mon, Dec 21, 2020 at 2:30 PM Peter Xu wrote:
> >
> > AFAIU mprotect() is the only one who modifies the pte using the mmap write
> > lock. NUMA balancing is also using read mmap lock when changing pte
> > protections, while my
On Mon, Dec 21, 2020 at 10:04 AM Andrea Arcangeli wrote:
>
> Hello,
>
> On Sat, Dec 19, 2020 at 09:08:55PM -0800, Andy Lutomirski wrote:
> > On Sat, Dec 19, 2020 at 6:49 PM Andrea Arcangeli
> > wrote:
> > > The ptes are changed always with the PT lock,
On Sat, Dec 19, 2020 at 6:49 PM Andrea Arcangeli wrote:
>
> On Sat, Dec 19, 2020 at 06:01:39PM -0800, Andy Lutomirski wrote:
> > I missed the beginning of this thread, but it looks to me like
> > userfaultfd changes PTEs with not locking except mmap_read_lock(). It
>
> T
On Sat, Dec 19, 2020 at 1:34 PM Nadav Amit wrote:
>
> [ cc’ing some more people who have experience with similar problems ]
>
> > On Dec 19, 2020, at 11:15 AM, Andrea Arcangeli wrote:
> >
> > Hello,
> >
> > On Fri, Dec 18, 2020 at 08:30:06PM -0800, Nadav Amit wrote:
> >> Analyzing this problem
On Wed, Dec 16, 2020 at 9:46 AM Chang S. Bae wrote:
>
> Key Locker [1][2] is a new security feature available in new Intel CPUs to
> protect data encryption keys for the Advanced Encryption Standard
> algorithm. The protection limits the amount of time an AES key is exposed
> in memory by sealing
On Wed, Dec 16, 2020 at 9:46 AM Chang S. Bae wrote:
>
> Key Locker (KL) is Intel's new security feature that protects the AES key
> at the time of data transformation. New AES SIMD instructions -- as a
> successor of Intel's AES-NI -- are provided to encode an AES key and
> reference it for the
> On Dec 17, 2020, at 5:19 AM, Peter Zijlstra wrote:
>
> On Thu, Dec 17, 2020 at 02:07:01PM +0100, Thomas Gleixner wrote:
>>> On Fri, Dec 11 2020 at 14:14, Andy Lutomirski wrote:
>>>> On Mon, Nov 23, 2020 at 10:10 PM wrote:
>>> After contemplating th
On Tue, Dec 15, 2020 at 5:32 PM Ira Weiny wrote:
>
> On Fri, Dec 11, 2020 at 02:14:28PM -0800, Andy Lutomirski wrote:
> > On Mon, Nov 23, 2020 at 10:10 PM wrote:
> > IOW we have:
> >
> > struct extended_pt_regs {
> > bool rcu_whatever;
> >