tree: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.x86/pti
head: 448c49e07e52076586e9e706212298d865ad7a27
commit: 8b65ec93225592fbc35ab8107fd880e505aae1ef [22/54] x86/cpu_entry_area:
Move it out of fixmap
config: i386-randconfig-x016-201751 (attached as .config)
compiler:
On Mon, Nov 27, 2017 at 12:42 AM, Wu Hao wrote:
Hi Hao,
> +
> +enum port_feature_id {
> + PORT_FEATURE_ID_HEADER = 0x0,
> + PORT_FEATURE_ID_ERROR = 0x1,
> + PORT_FEATURE_ID_UMSG = 0x2,
> + PORT_FEATURE_ID_PR = 0x3,
> + PORT_FEATURE_ID_STP = 0x4,
>
On Wed, Dec 20, 2017 at 1:24 PM, Ross Zwisler
wrote:
> On Wed, Dec 20, 2017 at 01:16:49PM -0800, Matthew Wilcox wrote:
>> On Wed, Dec 20, 2017 at 12:22:21PM -0800, Dave Hansen wrote:
>> > On 12/20/2017 10:19 AM, Matthew Wilcox wrote:
>> > > I don't know what the right interface is, but my laptop
From: Eric Biggers
pcrypt is using the old way of freeing instances, where the ->free()
method specified in the 'struct crypto_template' is passed a pointer to
the 'struct crypto_instance'. But the crypto_instance is being
kfree()'d directly, which is incorrect because the memory was actually
On Wed, 20 Dec 2017, Thomas Gleixner wrote:
> +++ b/arch/x86/mm/cpu_entry_area.c
> @@ -0,0 +1,102 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
Lacks an
#include as 0-day noticed
> +#include
> +#include
> +#include
> +#include
> +#include
Thanks,
tglx
On 12/19/2017 11:36 AM, Peter Zijlstra wrote:
On Fri, Dec 08, 2017 at 12:07:54PM -0800, subhra mazumdar wrote:
+static inline void
+sd_context_switch(struct sched_domain *sd, struct rq *rq, int util)
+{
+ struct sched_group *sg_cpu;
+
+ /* atomically add/subtract the util */
+
Hi Linus
I'm working on gpio for an AMD Family 16h Model 30h system[1]. The SoC is the
same as the GX412-TC used in the PC Engines APU2.
There is an out-of-tree gpio driver (gpio-amd) for this SoC in the meta-amd
yocto layer[2].
Another driver (gpio-sb8xx) was submitted for upstream
On 21.12.2017 01:02, Thierry Reding wrote:
> On Thu, Dec 21, 2017 at 12:05:40AM +0300, Dmitry Osipenko wrote:
>> On 20.12.2017 23:16, Thierry Reding wrote:
>>> On Wed, Dec 20, 2017 at 11:01:49PM +0300, Dmitry Osipenko wrote:
On 20.12.2017 21:01, Thierry Reding wrote:
> On Wed, Dec 20,
tree: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.x86/pti
head: 448c49e07e52076586e9e706212298d865ad7a27
commit: f443b8fc21e63b63b3064974c27ab78cbcb39c07 [21/54] x86/cpu_entry_area:
Move it to a separate unit
config: x86_64-randconfig-x011-201751 (attached as .config)
On Tue, Dec 19, 2017 at 05:11:38PM -0800, Dan Williams wrote:
> On Fri, Nov 10, 2017 at 1:08 AM, Christoph Hellwig wrote:
> >> + struct {
> >> + /*
> >> + * ZONE_DEVICE pages are never on an lru or handled
> >> by
> >> + *
This patch adds a new sysfs group, namely health, via:
/sys/devices/soc/X.ufshc/health/
This directory contains the entries below, each of which shows an 8-byte
hex number whose meaning is defined by the JEDEC specification.
Users can simply read these entries to check how their
The address hints are a trainwreck. The array entry numbers have to be kept
magically in sync with the actual hints, which is doomed as some of the
array members are initialized at runtime via the entry numbers.
Designated initializers have been around before this code was
implemented
Use the
From: Andy Lutomirski
The kernel is very erratic as to which pagetables have _PAGE_USER set. The
vsyscall page gets lucky: it seems that all of the relevant pagetables are
among the apparently arbitrary ones that set _PAGE_USER. Rather than
relying on chance, just explicitly set _PAGE_USER.
Changes since V163:
- Moved the cpu entry area out of the fixmap because that caused failures
due to fixmap size and cleanup_highmap() zapping fixmap PTEs.
- Moved all cpu entry area related code into separate files. The
hodgepodge in cpu/common.c was really not appropriate.
-
From: Peter Zijlstra
The LDT is duplicated on fork() and on exec(), which is wrong as exec()
should start from a clean state, i.e. without LDT. To fix this the LDT
duplication code will be moved into arch_dup_mmap() which is only called
for fork().
This introduces a locking problem.
From: Thomas Gleixner
The check for a present page in printk_prot():
if (!pgprot_val(prot)) {
/* Not present */
is bogus. If a PTE is set to PAGE_NONE then the pgprot_val is not zero and
the entry is decoded in bogus ways, e.g. as RX GLB. That is confusing when
analyzing
From: Andy Lutomirski
The old docs had the vsyscall range wrong* and were missing the fixmap.
Fix both.
There used to be 8 MB reserved for future vsyscalls, but that's long gone.
Signed-off-by: Andy Lutomirski
Signed-off-by: Thomas Gleixner
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Brian Gerst
From: Dave Hansen
If the kernel oopses while on the trampoline stack, it will print
"" even if SYSENTER is not involved. That is rather confusing.
The "SYSENTER" stack is used for a lot more than SYSENTER now. Give it a
better string to display in stack dumps, and rename the kernel code to
From: Peter Zijlstra
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Cc: Andy Lutomirski
Cc: Boris Ostrovsky
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Dave Hansen
Cc: David Laight
Cc: Denys Vlasenko
Cc: Eduardo Valentin
Cc: Greg KH
Cc: H.
From: Andy Lutomirski
If something goes wrong with pagetable setup, vsyscall=native will
accidentally fall back to emulation. Make it warn and fail so that we
notice.
Signed-off-by: Andy Lutomirski
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Cc: Borislav Petkov
Cc: Brian
From: Peter Zijlstra
Commit: ec400ddeff20 ("x86/microcode_intel_early.c: Early update ucode on
Intel's CPU") grubbed into tlbflush internals without coherent explanation.
Since it says it's a precaution and the SDM doesn't mention anything like
this, take it out back.
Signed-off-by: Peter
From: Peter Zijlstra
Per popular request..
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Cc: Andy Lutomirski
Cc: Boris Ostrovsky
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Dave Hansen
Cc: David Laight
Cc: Denys Vlasenko
Cc: Eduardo
The recent cpu_entry_area changes fail to compile on 32bit when BIGSMP=y
and NR_CPUS=512 because the fixmap area becomes too big.
Limit the number of CPUs with BIGSMP to 64, which is already way too big for
32bit, but it's at least a working limitation.
Signed-off-by: Thomas Gleixner
---
For flushing the TLB, the ASID which has been programmed into the hardware
must be known. That differs from what is in 'cpu_tlbstate'.
Add functions to transform the 'cpu_tlbstate' values into the one
programmed into the hardware (CR3).
It's not easy to include mmu_context.h into tlbflush.h,
Separate the cpu_entry_area code out of cpu/common.c and the fixmap.
Signed-off-by: Thomas Gleixner
---
arch/x86/include/asm/cpu_entry_area.h | 52 +
arch/x86/include/asm/fixmap.h | 41 -
arch/x86/kernel/cpu/common.c | 94
From: Thomas Gleixner
Many x86 CPUs leak information to user space due to missing isolation of
user space and kernel space page tables. There are many well documented
ways to exploit that.
The upcoming software mitigation of isolating the user and kernel space
page tables needs a misfeature
From: Thomas Gleixner
The LDT is inherited independent of fork or exec, but that makes no sense
at all because exec is supposed to start the process clean.
The reason why this happens is that init_new_context_ldt() is called from
init_new_context() which obviously needs to be called for both
Put the cpu_entry_area into a separate p4d entry. The fixmap gets too big
and 0-day already hit a case where the fixmap ptes were cleared by
cleanup_highmap().
Aside of that the fixmap API is a pain as it's all backwards.
Signed-off-by: Thomas Gleixner
---
Documentation/x86/x86_64/mm.txt
From: Dave Hansen
Global pages stay in the TLB across context switches. Since all contexts
share the same kernel mapping, these mappings are marked as global pages
so kernel entries in the TLB are not flushed out on a context switch.
But, even having these entries in the TLB opens up something
> On Dec 20, 2017, at 1:25 PM, Peter Zijlstra wrote:
>
> On Wed, Dec 20, 2017 at 06:10:11PM +, Song Liu wrote:
>> I think there is one more thing to change:
>
> OK, folded that too; it should all be at:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git perf/core
>
> Can
From: Thomas Gleixner
Add the initial files for kernel page table isolation, with a minimal init
function and the boot time detection for this misfeature.
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar
Reviewed-by: Borislav Petkov
Cc: Andy Lutomirski
Cc: Boris Ostrovsky
Cc:
From: Dave Hansen
PAGE_TABLE_ISOLATION needs to switch to a different CR3 value when it
enters the kernel and switch back when it exits. This essentially needs to
be done before leaving assembly code.
This is extra challenging because the switching context is tricky: the
registers that can be
From: Peter Zijlstra
atomic64_inc_return() already implies smp_mb() before and after.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Cc: Andy Lutomirski
Cc: Boris Ostrovsky
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Dave Hansen
Cc: David
From: Dave Hansen
In clone_pgd_range() copy the init user PGDs which cover the kernel half of
the address space, so a process has all the required kernel mappings
visible.
[ tglx: Split out from the big kaiser dump and folded Andys simplification ]
Signed-off-by: Dave Hansen
Signed-off-by:
On 12/20/2017 4:41 PM, Andi Kleen wrote:
On Wed, Dec 20, 2017 at 11:42:51AM -0800, kan.li...@linux.intel.com wrote:
From: Kan Liang
The userspace RDPMC usage never works for large PEBS since the large
PEBS is introduced by
commit b8241d20699e ("perf/x86/intel: Implement batched PEBS
From: Andy Lutomirski
Provide infrastructure to:
- find a kernel PMD for a mapping which must be visible to user space for
the entry/exit code to work.
- walk an address range and share the kernel PMD with it.
This reuses a small part of the original KAISER patches to populate the
user
From: Thomas Gleixner
Force the entry through the trampoline only when PTI is active. Otherwise
go through the normal entry code.
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar
Reviewed-by: Borislav Petkov
Cc: Andy Lutomirski
Cc: Boris Ostrovsky
Cc: Brian Gerst
Cc: Dave Hansen
From: Dave Hansen
There are effectively two ASID types:
1. The one stored in the mmu_context that goes from 0..5
2. The one programmed into the hardware that goes from 1..6
This consolidates the locations where converting between the two (by doing
a +1) to a single place which gives us a
From: Thomas Gleixner
The (irq)entry text must be visible in the user space page tables. To allow
simple PMD based sharing, make the entry text PMD aligned.
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar
Cc: Andy Lutomirski
Cc: Boris Ostrovsky
Cc: Borislav Petkov
Cc: Brian Gerst
From: Thomas Gleixner
Share the entry text PMD of the kernel mapping with the user space
mapping. If large pages are enabled this is a single PMD entry and at the
point where it is copied into the user page table the RW bit has not been
cleared yet. Clear it right away so the user space visible
From: Andy Lutomirski
Map the ESPFIX pages into user space when PTI is enabled.
Signed-off-by: Andy Lutomirski
Signed-off-by: Thomas Gleixner
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Brian Gerst
Cc: David Laight
Cc: Borislav Petkov
---
arch/x86/mm/pti.c | 11 +++
1 file changed,
From: Dave Hansen
First, it's nice to remove the magic numbers.
Second, PAGE_TABLE_ISOLATION is going to consume half of the available ASID
space. The space is currently unused, but add a comment to spell out this
new restriction.
Signed-off-by: Dave Hansen
Signed-off-by: Thomas Gleixner
On Wed, 20 Dec 2017, Thomas Gleixner wrote:
> From: Vlastimil Babka
>
> CONFIG_PAGE_TABLE_ISOLATION is a relatively new and intrusive feature that may
> still have some corner cases which could take some time to manifest and be
> fixed. It would be useful to have Oops messages indicate whether it
From: Thomas Gleixner
The Intel PEBS/BTS debug store is a design trainwreck as it expects virtual
addresses which must be visible in any execution context.
So it is required to make these mappings visible to user space when kernel
page table isolation is active.
Provide enough room for the
From: Hugh Dickins
The BTS and PEBS buffers both have their virtual addresses programmed into
the hardware. This means that any access to them is performed via the page
tables. The times that the hardware accesses these are entirely dependent
on how the performance monitoring hardware events
On Thu, Dec 21, 2017 at 12:05:40AM +0300, Dmitry Osipenko wrote:
> On 20.12.2017 23:16, Thierry Reding wrote:
> > On Wed, Dec 20, 2017 at 11:01:49PM +0300, Dmitry Osipenko wrote:
> >> On 20.12.2017 21:01, Thierry Reding wrote:
> >>> On Wed, Dec 20, 2017 at 06:46:11PM +0300, Dmitry Osipenko wrote:
From: Thomas Gleixner
init_espfix_bsp() needs to be invoked before the page table isolation
initialization. Move it into mm_init() which is the place where pti_init()
will be added.
While at it get rid of the #ifdeffery and provide proper stub functions.
Signed-off-by: Thomas Gleixner
---
From: Andy Lutomirski
Make VSYSCALLs work fully in PTI mode.
Signed-off-by: Andy Lutomirski
Signed-off-by: Thomas Gleixner
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Brian Gerst
Cc: David Laight
Cc: Borislav Petkov
---
arch/x86/entry/vsyscall/vsyscall_64.c |6 +--
From: Andy Lutomirski
Shrink vmalloc space from 16384TiB to 12800TiB to enlarge the hole starting
at 0xff90 to be a full PGD entry.
A subsequent patch will use this hole for the pagetable isolation LDT
alias.
Signed-off-by: Andy Lutomirski
Signed-off-by: Thomas Gleixner
Cc: Kees
From: Peter Zijlstra
We can use PCID to retain the TLBs across CR3 switches; including those now
part of the user/kernel switch. This increases performance of kernel
entry/exit at the cost of more expensive/complicated TLB flushing.
Now that we have two address spaces, one
From: Dave Hansen
Kernel page table isolation requires to have two PGDs. One for the kernel,
which contains the full kernel mapping plus the user space mapping and one
for user space which contains the user space mappings and the minimal set
of kernel mappings which
401 - 500 of 1904 matches