On Fri, 2020-06-05 at 12:54 +0800, Zhenyu Wang wrote:
> On 2020.06.03 14:33:21 +0200, Julian Stecklina wrote:
> > + gvt_err("vgpu%d: failed to allocate %s gm space from host\n",
> > + vgpu->id, high_gm ? "high" : "low");
: Zhenyu Wang
Signed-off-by: Julian Stecklina
---
drivers/gpu/drm/i915/gvt/aperture_gm.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/gvt/aperture_gm.c
b/drivers/gpu/drm/i915/gvt/aperture_gm.c
index 0d6d598713082..5c5c8e871dae2 100644
Baoquan He writes:
> On 01/30/19 at 05:40pm, Julian Stecklina wrote:
>> diff --git a/arch/x86/boot/compressed/kaslr.c
>> b/arch/x86/boot/compressed/kaslr.c
>> index 9ed9709..5657e34 100644
>> --- a/arch/x86/boot/compressed/kaslr.c
>> +++ b/arch/x86/boot/compre
Paolo Bonzini writes:
> Alternatively, it is probably a good time to switch the default to split
> irqchip
> in QEMU. Split irqchip was introduced in kernel 4.5, which was released about
> three years ago.
I totally agree. At some point, the in-kernel PIT/PIC emulation should
also be removed.
Borislav Petkov writes:
>> @@ -213,7 +213,7 @@ static void mem_avoid_memmap(char *str)
>> i++;
>> }
>>
>> -/* More than 4 memmaps, fail kaslr */
>> +/* Can't store all regions, fail kaslr */
>> if ((i >= MAX_MEMMAP_REGIONS) && str)
>>
From: Julian Stecklina
The boot code has a limit of 4 "non-standard" regions to avoid for
KASLR. This limit is easy to reach when supplying memmap= parameters to
the kernel. In this case, KASLR would be disabled.
Increase the limit to avoid turning off KASLR even when the user
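The avoidance check these KASLR snippets describe can be sketched in userspace C. Note this is a simplified model, not the actual boot code: `MAX_MEMMAP_REGIONS`, `struct mem_region`, and the helper names here are illustrative stand-ins for the kernel's internal versions.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the boot code's bookkeeping; the real
 * kernel value is what the patch proposes raising. */
#define MAX_MEMMAP_REGIONS 4

struct mem_region {
    uint64_t start;
    uint64_t size;
};

static struct mem_region mem_avoid[MAX_MEMMAP_REGIONS];
static size_t num_avoid;

static bool regions_overlap(const struct mem_region *a,
                            const struct mem_region *b)
{
    return a->start < b->start + b->size &&
           b->start < a->start + a->size;
}

/* Returns true if a candidate kernel base region avoids every
 * user-supplied memmap= region recorded so far. */
static bool candidate_ok(uint64_t start, uint64_t size)
{
    struct mem_region c = { start, size };

    for (size_t i = 0; i < num_avoid; i++)
        if (regions_overlap(&c, &mem_avoid[i]))
            return false;
    return true;
}
```

Once `num_avoid` hits the array limit, further regions cannot be recorded, which is why the original code gave up on KASLR entirely rather than randomize into memory the user had reserved.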
From: Julian Stecklina
When the user passes a memmap=%-+
parameter to the kernel to reclassify some memory, this information is
ignored during the randomization of the kernel base address. This in
turn leads to cases where the kernel is unpacked to memory regions that
the user marked as reserved
Kees Cook writes:
> On Tue, Jan 22, 2019 at 8:15 AM Greg KH wrote:
>>
>> On Mon, Jan 21, 2019 at 10:36:18AM -0800, Andi Kleen wrote:
>> > > + /* Check the start address: needs to be page-aligned.. */
>> > > +- if (start & ~PAGE_MASK)
>> > > ++ if (start & ~PAGE_MASK) {
>> > > ++
>> > > ++
Cc: Linus Torvalds
Cc: x...@kernel.org
Cc: Kernel Hardening
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Julian Stecklina
---
...of-of-concept-cache-load-gadget-in-mincor.patch | 53 +++
tools/testing/l1tf/Makefile | 20 ++
tools/testing/l1tf/README.md
Khalid Aziz writes:
> From: Julian Stecklina
>
> Instead of using the page extension debug feature, encode all
> information, we need for XPFO in struct page. This allows to get rid of
> some checks in the hot paths and there are also no pages anymore that
> are allocated befo
Khalid Aziz writes:
> I am continuing to build on the work Juerg, Tycho and Julian have done
> on XPFO.
Awesome!
> A rogue process can launch a ret2dir attack only from a CPU that has
> dual mapping for its pages in physmap in its TLB. We can hence defer
> TLB flush on a CPU until a process
. As far as testing goes, the KVM
unit tests seem happy on Intel. AMD is only compile tested at the moment.
[1] git://git.kernel.org/pub/scm/virt/kvm/kvm.git
Julian Stecklina (6):
kvm, vmx: move CR2 context switch out of assembly path
kvm, vmx: move register clearing out of assembly path
mm
__kernel_map_pages is currently only enabled when CONFIG_DEBUG_PAGEALLOC
is defined. Enable it unconditionally instead.
Signed-off-by: Julian Stecklina
---
arch/x86/mm/pageattr.c | 3 +--
include/linux/mm.h | 3 ++-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm
PATCH] paravirt: header and stubs for
paravirtualisation")
Signed-off-by: Julian Stecklina
Reviewed-by: Jan H. Schönherr
Reviewed-by: Konrad Jan Miller
Reviewed-by: Jim Mattson
Reviewed-by: Sean Christopherson
---
arch/x86/kvm/vmx.c | 15 +--
1 file changed, 5 insertions(+), 1
context in the kernel.
Signed-off-by: Julian Stecklina
---
arch/x86/include/asm/kvm_host.h | 10 +++-
arch/x86/kvm/x86.c | 42 ++---
2 files changed, 37 insertions(+), 15 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm
General-purpose registers (GPRs) contain guest data and must be protected
from information leak vulnerabilities in the kernel.
Move GPRs into process local memory and change the VMX and SVM world
switch and related code accordingly.
Note: Only Intel VMX support is tested.
Signed-off-by: Julian
Split the security related register clearing out of the large inline
assembly VM entry path. This results in two slightly less complicated
inline assembly statements, where it is clearer what each one does.
Signed-off-by: Julian Stecklina
Reviewed-by: Jan H. Schönherr
Reviewed-by: Konrad Jan
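The intent of the register-clearing split can be modeled in userspace. This is only a sketch of the idea, not the patch's actual inline assembly: `struct guest_regs` and `clear_guest_state()` are hypothetical names, and the real code zeroes live CPU registers rather than a memory buffer.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical container for guest GPRs saved around VM exit. */
struct guest_regs {
    uint64_t rbx, rcx, rdx, rsi, rdi;
    uint64_t r8, r9, r10, r11, r12, r13, r14, r15;
};

/*
 * After VM exit, values the guest left behind must not linger where
 * speculative-execution gadgets in the host could leak them. The
 * patch isolates this clearing in its own small asm statement,
 * separate from the VM entry sequence; this model just shows the
 * effect: all guest-held state is wiped before ordinary host code
 * runs.
 */
static void clear_guest_state(struct guest_regs *scratch)
{
    memset(scratch, 0, sizeof(*scratch));
}
```

Keeping the clearing in a dedicated statement (rather than buried in the giant entry/exit asm block) makes it easy to audit that every register holding guest data is covered.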
limitation and
can be lifted by working on the page table allocation code.
While memory is used for process-local allocations, it is unmapped from
the linear mapping of physical memory.
The code has some limitations that are spelled out in
arch/x86/mm/proclocal.c.
Signed-off-by: Julian Stecklina
The code violated the coding style. Fixed by using tabs instead of
spaces. There are only whitespace changes here.
Signed-off-by: Julian Stecklina
Reviewed-by: Jan H. Schönherr
Reviewed-by: Konrad Jan Miller
---
arch/x86/kvm/vmx.c | 20 ++--
1 file changed, 10 insertions
The VM entry/exit path is a giant inline assembly statement. Simplify it
by doing CR2 context switching in plain C. Move CR2 restore behind IBRS
clearing, so we reduce the amount of code we execute with IBRS on.
Signed-off-by: Julian Stecklina
Reviewed-by: Jan H. Schönherr
Reviewed-by: Konrad
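The CR2 handling this patch moves into C can be sketched as follows. This is a userspace model under stated assumptions: `fake_cr2`, `read_cr2()`, `write_cr2()`, and `struct vcpu` are stand-ins for the hardware register and the kernel's accessors, not the actual KVM API.

```c
#include <stdint.h>

/* Models the hardware CR2 register (the page-fault linear address). */
static uint64_t fake_cr2;

static uint64_t read_cr2(void)        { return fake_cr2; }
static void     write_cr2(uint64_t v) { fake_cr2 = v; }

struct vcpu {
    uint64_t cr2;   /* the guest's view of CR2 */
};

/* Before VM entry: install the guest's CR2. Writing CR2 is costly,
 * so skip the write when the current value already matches. */
static void vcpu_enter(struct vcpu *v)
{
    if (read_cr2() != v->cr2)
        write_cr2(v->cr2);
    /* ... VMLAUNCH/VMRESUME would happen here ... */
}

/* After VM exit: save whatever CR2 the guest left behind, e.g.
 * from a page fault it took while running. */
static void vcpu_exit(struct vcpu *v)
{
    v->cr2 = read_cr2();
}
```

Doing this in plain C, instead of inside the entry/exit asm block, is what lets the restore be reordered behind the IBRS clearing, shrinking the window of code executed with IBRS on.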
The code violated the coding style. Fixed by using tabs instead of
spaces. There are only whitespace changes here.
Signed-off-by: Julian Stecklina
Reviewed-by: Jan H. Schönherr
Reviewed-by: Konrad Jan Miller
---
arch/x86/kvm/vmx.c | 22 +++---
1 file changed, 11 insertions
with the proper
inline assembly. This improves code generation (and source code
readability).
According to the bloat-o-meter this change removes ~1300 bytes from the
text segment.
Signed-off-by: Julian Stecklina
Reviewed-by: Jan H. Schönherr
Reviewed-by: Konrad Jan Miller
Reviewed-by: Razvan-Alin
Juerg Haefliger writes:
>> I've updated my XPFO branch[1] to make some of the debugging optional
>> and also integrated the XPFO bookkeeping with struct page, instead of
>> requiring CONFIG_PAGE_EXTENSION, which removes some checks in the hot
>> path.
>
> FWIW, that was my original design but
Khalid Aziz writes:
> I ran tests with your updated code and gathered lock statistics. Change in
> system time for "make -j60" was in the noise margin (It actually went up by
> about 2%). There is some contention on xpfo_lock. Average wait time does not
> look high compared to other locks. Max
Julian Stecklina writes:
> Linus Torvalds writes:
>
>> On Fri, Aug 31, 2018 at 12:45 AM Julian Stecklina wrote:
>>>
>>> I've been spending some cycles on the XPFO patch set this week. For the
>>> patch set as it was posted for v4.13, the performance o
Andi Kleen writes:
> On Sat, Sep 01, 2018 at 02:38:43PM -0700, Linus Torvalds wrote:
>> On Fri, Aug 31, 2018 at 12:45 AM Julian Stecklina wrote:
>> >
>> > I've been spending some cycles on the XPFO patch set this week. For the
>> > patch set as it was posted
Linus Torvalds writes:
> On Fri, Aug 31, 2018 at 12:45 AM Julian Stecklina wrote:
>>
>> I've been spending some cycles on the XPFO patch set this week. For the
>> patch set as it was posted for v4.13, the performance overhead of
>> compiling a Linux kernel is ~40%
Hey everyone,
On Mon, 20 Aug 2018 15:27 Linus Torvalds wrote:
> On Mon, Aug 20, 2018 at 3:02 PM Woodhouse, David wrote:
>>
>> It's the *kernel* we don't want being able to access those pages,
>> because of the multitude of unfixable cache load gadgets.
>
> Ahh.
>
> I guess the proof is in the
and
(correctly) return EFAULT to the user with a helpful warning message in the
kernel
log.
Signed-off-by: Julian Stecklina
Acked-by: Alex Williamson
---
drivers/iommu/intel-iommu.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu