From: Joerg Roedel
Allocate and map enough stacks for the #VC handler to support sufficient
levels of nesting and the NMI-in-#VC scenario.
Also set up the IST entries for the #VC handler on all CPUs, because #VC
handling needs to work before cpu_init() has set up the per-cpu TSS.
Signed-off-by: Joerg
From: Joerg Roedel
The functions are needed to map the GHCB for SEV-ES guests. The GHCB is
used for communication with the hypervisor, so its content must not be
encrypted. After the GHCB is not needed anymore it must be mapped
encrypted again so that the running kernel image can safely re-use
register. For now, cache the value written to DR7 and return it on
read attempts, but do not touch the real hardware DR7.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: - Adapt to #VC handling framework
- Support early usage ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg
From: Tom Lendacky
Implement a handler for #VC exceptions caused by RDPMC instructions.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: Adapt to #VC handling infrastructure ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/sev-es.c | 22 ++
1
[ jroe...@suse.de: - Adapt to #VC handling infrastructure
- Make it available early ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
---
arch/x86/boot/compressed/sev-es.c | 4
arch/x86/kernel/sev-es-shared.c | 23 +++
arch/x86/kernel/sev-es.c
From: Tom Lendacky
Add a handler for #VC exceptions caused by MMIO intercepts. These
intercepts come along as nested page faults on pages with reserved
bits set.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: Adapt to #VC handling framework ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg
From: Tom Lendacky
Handle #VC exceptions caused by CPUID instructions. These happen in
early boot code when the KASLR code checks for RDTSC.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: Adapt to #VC handling framework ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
---
arch
From: Joerg Roedel
Send SIGBUS to the user-space process that caused the #VC exception
instead of killing the machine. Also ratelimit the error messages so
that user-space can't flood the kernel log, and add a prefix to the
messages printed for SEV-ES.
Signed-off-by: Joerg Roedel
---
arch/x86
From: Tom Lendacky
Implement a handler for #VC exceptions caused by RDMSR/WRMSR
instructions.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: Adapt to #VC handling infrastructure ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/sev-es.c | 28
From: Tom Lendacky
Implement a handler for #VC exceptions caused by INVD instructions.
Since Linux should never use INVD, just mark it as unsupported.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: Adapt to #VC handling infrastructure ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg
From: Tom Lendacky
Implement a handler for #VC exceptions caused by MONITOR and MONITORX
instructions.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: Adapt to #VC handling infrastructure ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/sev-es.c | 19
From: Tom Lendacky
Implement a handler for #VC exceptions caused by MWAIT and MWAITX
instructions.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: Adapt to #VC handling infrastructure ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/sev-es.c | 12
handling infrastructure ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/sev-es.c | 23 +++
1 file changed, 23 insertions(+)
diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index d5d4804d1e17..f807a2adcbe3 100644
--- a/arch/x86
From: Tom Lendacky
Implement the callbacks to copy the processor state required by KVM to
the GHCB.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: - Split out of a larger patch
- Adapt to different callback functions ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg
into vc_handle_cpuid_cached()
- Used lower_32_bits() where applicable
- Moved cache_index out of struct es_em_ctxt ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/sev-es-shared.c | 12 ++--
arch/x86/kernel/sev-es.c| 119
From: Joerg Roedel
Add two new paravirt callbacks to provide hypervisor specific processor
state in the GHCB and to copy state from the hypervisor back to the
processor.
Signed-off-by: Joerg Roedel
---
arch/x86/include/asm/x86_init.h | 16 +++-
arch/x86/kernel/sev-es.c| 12
From: Joerg Roedel
Implement a handler for #VC exceptions caused by #AC exceptions. The #AC
exception is just forwarded to do_alignment_check() and not pushed down
to the hypervisor, as requested by the SEV-ES GHCB Standardization
Specification.
Signed-off-by: Joerg Roedel
---
arch/x86/kernel
From: Doug Covelli
This change adds VMware specific handling for #VC faults caused by
VMMCALL instructions.
Signed-off-by: Doug Covelli
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: - Adapt to different paravirt interface ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
From: Joerg Roedel
For SEV-ES this entry point will be used for restarting APs after they
have been offlined. Remove the '0' from the name to reflect that.
Signed-off-by: Joerg Roedel
---
arch/x86/include/asm/cpu.h | 2 +-
arch/x86/kernel/head_32.S | 4 ++--
arch/x86/kernel/head_64.S | 6
From: Joerg Roedel
The code at the trampoline entry point is executed in real-mode. In
real-mode #VC exceptions can't be handled, so anything that might cause
such an exception must be avoided.
In the standard trampoline entry code this is the WBINVD instruction and
the call to verify_cpu
From: Joerg Roedel
Setup an early handler for #VC exceptions. There is no GHCB mapped
yet, so just re-use the vc_no_ghcb_handler. It can only handle CPUID
exit-codes, but that should be enough to get the kernel through
verify_cpu() and __startup_64() until it runs on virtual addresses.
Signed
From: Joerg Roedel
Add handling for emulating the MOVS instruction on MMIO regions, as done
by the memcpy_toio() and memcpy_fromio() functions.
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/sev-es.c | 78
1 file changed, 78 insertions(+)
diff --git
From: Joerg Roedel
The APs are not ready to handle exceptions when verify_cpu() is called
in secondary_startup_64.
Signed-off-by: Joerg Roedel
---
arch/x86/include/asm/realmode.h | 1 +
arch/x86/kernel/head_64.S | 1 +
arch/x86/realmode/init.c| 6 ++
3 files changed, 8
From: Tom Lendacky
Implement a handler for #VC exceptions caused by WBINVD instructions.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: Adapt to #VC handling framework ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/sev-es.c | 9 +
1 file changed, 9
From: Joerg Roedel
Load the IDT right after switching to virtual addresses in head_64.S
so that the kernel can handle #VC exceptions.
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/head64.c | 15 +++
arch/x86/kernel/head_64.S | 17 +
2 files changed, 32
From: Joerg Roedel
Handle #VC exceptions caused by #DB exceptions in the guest. Do not
forward them to the hypervisor and handle them with do_debug() instead.
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/sev-es.c | 19 +++
1 file changed, 19 insertions(+)
diff --git a/arch
From: Joerg Roedel
Add a play_dead handler when running under SEV-ES. This is needed
because the hypervisor can't deliver a SIPI request to restart the AP.
Instead the kernel has to issue a VMGEXIT to halt the VCPU. When the
hypervisor would deliver a SIPI it wakes up the VCPU instead
From: Joerg Roedel
Re-use the handlers for CPUID and IOIO caused #VC exceptions in the
early boot handler.
Signed-off-by: Joerg Roedel
---
arch/x86/kernel/sev-es-shared.c | 7 +++
arch/x86/kernel/sev-es.c| 6 ++
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git
From: Joerg Roedel
Move the assembly-coded dispatch between page faults and all other
exceptions to C code to make it easier to maintain and extend.
Also change the return-type of early_make_pgtable() to bool and make it
static.
Signed-off-by: Joerg Roedel
---
arch/x86/include/asm/pgtable.h
page faults.
Signed-off-by: Tom Lendacky
[ jroe...@suse.de: Moved GHCB mapping loop to sev-es.c ]
Signed-off-by: Joerg Roedel
---
arch/x86/boot/compressed/sev-es.c | 1 +
arch/x86/include/asm/sev-es.h | 5 +
arch/x86/kernel/sev-es.c | 25 +
arch/x86
code
- Fix sparse warnings ]
Co-developed-by: Joerg Roedel
Signed-off-by: Joerg Roedel
---
arch/x86/include/asm/sev-es.h | 6 +++
arch/x86/include/uapi/asm/svm.h | 3 ++
arch/x86/kernel/sev-es.c| 66 +
arch/x86/realmode/init.c
From: Joerg Roedel
Make sure there is a stack once the kernel runs from virtual addresses.
At this stage any secondary CPU which boots will have lost its stack
because the kernel switched to a new page-table which does not map the
real-mode stack anymore.
This is needed for handling early #VC
From: Joerg Roedel
The #VC exception will trigger very early in head_64.S, when the first
CPUID instruction is executed. When secondary CPUs boot, they already
load the real system IDT, which has the #VC handler configured to be
using an IST stack. IST stacks require a TSS to be loaded, to set
From: Joerg Roedel
When running under SEV-ES the kernel has to tell the hypervisor when to
open the NMI window again after an NMI was injected. This is done with
an NMI-complete message to the hypervisor.
Add code to the kernel's NMI handler to send this message right at the
beginning of do_nmi
From: Tom Lendacky
Extend the vmcb_save_area with SEV-ES fields and add a new
'struct ghcb' which will be used for guest-hypervisor communication.
Signed-off-by: Tom Lendacky
Signed-off-by: Joerg Roedel
---
arch/x86/include/asm/svm.h | 42 ++
1 file
On Mon, Apr 27, 2020 at 10:37:41AM -0700, Andy Lutomirski wrote:
> I have a somewhat serious question: should we use IST for #VC at all?
> As I understand it, Rome and Naples make it mandatory for hypervisors
> to intercept #DB, which means that, due to the MOV SS mess, it's sort
> of mandatory to
On Fri, Oct 18, 2019 at 05:14:53PM +0200, Christoph Hellwig wrote:
> On Fri, Oct 18, 2019 at 11:50:37AM +0200, Joerg Roedel wrote:
> > On Thu, Oct 17, 2019 at 09:08:47AM +0200, Christoph Hellwig wrote:
> > > On Wed, Oct 16, 2019 at 03:15:52PM -0400, Arvind Sankar wrote:
>
oph, will you be taking this through your dma-mapping branch?
>
> Given this is a patch to intel-iommu I expect Joerg to pick it up.
> But if he is fine with that I can also queue it up instead.
Fine with me.
Acked-by: Joerg Roedel
On Thu, Oct 17, 2019 at 10:39:13AM -0400, Qian Cai wrote:
> On Wed, 2019-10-16 at 17:44 +0200, Joerg Roedel wrote:
> > On Wed, Oct 16, 2019 at 10:59:42AM -0400, Qian Cai wrote:
> > > BTW, the previous x86 warning was from only reverted one patch "iommu:
> > > Add
From: Joerg Roedel
After enabling CONFIG_IOMMU_DMA on X86 a new warning appears when
compiling vfio:
drivers/vfio/vfio_iommu_type1.c: In function ‘vfio_iommu_type1_attach_group’:
drivers/vfio/vfio_iommu_type1.c:1827:7: warning: ‘resv_msi_base’ may be used
uninitialized in this function
On Wed, Oct 09, 2019 at 07:59:33PM +0800, Yong Wu wrote:
> In the commit 4f0a1a1ae351 ("memory: mtk-smi: Invoke pm runtime_callback
> to enable clocks"), we use pm_runtime callback to enable/disable the smi
> larb clocks. It will cause the larb's clock may not be disabled when
> suspend. That is
. Feel free to apply
this series to your tree with my:
Reviewed-by: Joerg Roedel
Acked-by: Joerg Roedel
On Sat, Sep 21, 2019 at 03:06:44PM +0800, Lu Baolu wrote:
> Current find_domain() helper checks and does the deferred domain
> attachment and return the domain in use. This isn't always the
> use case for the callers. Some callers only want to retrieve the
> current domain in use.
>
> This
From: Joerg Roedel
Git commit 3f8fd02b1bf1 ("mm/vmalloc: Sync unmappings in
__purge_vmap_area_lazy()") introduced a call to vmalloc_sync_all() in
the vunmap() code-path. While this change was necessary to maintain
correctness on x86-32-pae kernels, it also adds additio
Hi Dave,
thanks for your review!
On Mon, Oct 07, 2019 at 08:30:51AM -0700, Dave Hansen wrote:
> On 10/7/19 8:16 AM, Joerg Roedel wrote:
> > @@ -318,7 +328,7 @@ static void dump_pagetable(unsigned long address)
> >
> > #else /* CONFIG_X86_64: */
> >
> > -void
From: Joerg Roedel
Git commit 3f8fd02b1bf1 ("mm/vmalloc: Sync unmappings in
__purge_vmap_area_lazy()") introduced a call to vmalloc_sync_all() in
the vunmap() code-path. While this change was necessary to maintain
correctness on x86-32-pae kernels, it also adds additio
On Wed, Sep 25, 2019 at 05:27:32PM +0200, Jiri Kosina wrote:
> On Sat, 21 Sep 2019, Kurt Garloff wrote:
> > [12916.740274] mmc0: sdhci:
> > [12916.740337] mmc0: error -5 whilst initialising MMC card
>
> Do you have BAR memory allocation failures in
On Fri, Sep 06, 2019 at 02:14:47PM +0800, Lu Baolu wrote:
> Lu Baolu (5):
> swiotlb: Split size parameter to map/unmap APIs
> iommu/vt-d: Check whether device requires bounce buffer
> iommu/vt-d: Don't switch off swiotlb if bounce page is used
> iommu/vt-d: Add trace events for device dma
hardware.
* Two fixes for AMD IOMMU driver to fix a race condition and to
add a missing IOTLB flush when kernel is booted in kdump mode.
Jacob Pan (1):
iommu/vt-d: Remove global page flush support
Joerg
ice is eventually added to a guest, and the
> > referenced commit below doesn't remove that call.
>
> I have done that for today:
Thanks Stephen and Tom. I queued the attached patch into the iommu tree
to fix the problem.
From 2896ba40d0becdb72b45f096cad70633abc014f6 Mon Sep 17 00:00:00
Hi,
tl;dr: An IOMMU commit introduces a new user for sme_active() in
generic code, and commit
284e21fab2cf x86, s390/mm: Move sme_active() and sme_me_mask to
x86-specific header
breaks the build of drivers/iommu/ for all architectures not
implementing
> /*
>* We need to clone everything (again) that maps parts of the
> * kernel image.
>
Reviewed-by: Joerg Roedel
pmd_none(*pmd)) {
> - addr += PMD_SIZE;
> + WARN_ON_ONCE(addr & ~PMD_MASK);
> + addr = round_up(addr + 1, PMD_SIZE);
> continue;
> }
>
Reviewed-by: Joerg Roedel
On Fri, Aug 23, 2019 at 03:17:29PM +0800, Lu Baolu wrote:
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -4569,9 +4569,6 @@ static int __init platform_optin_force_iommu(void)
> iommu_identity_mapping |= IDENTMAP_ALL;
>
> dmar_disabled = 0;
> -#if
On Wed, Aug 21, 2019 at 01:10:04PM +0800, Kai-Heng Feng wrote:
> drivers/iommu/Makefile | 2 +-
> drivers/iommu/amd_iommu.h| 14 +
> drivers/iommu/amd_iommu_init.c | 5 +-
> drivers/iommu/amd_iommu_quirks.c | 92
> 4 files changed, 111
Hi Jacob,
On Tue, Aug 20, 2019 at 02:21:08PM -0700, Jacob Pan wrote:
> Global pages support is removed from VT-d spec 3.0. Since global pages G
> flag only affects first-level paging structures and because DMA request
> with PASID are only supported by VT-d spec. 3.0 and onward, we can
> safely
On Mon, Aug 19, 2019 at 03:22:45PM +0200, Joerg Roedel wrote:
> Joerg Roedel (11):
> iommu: Remember when default domain type was set on kernel command line
> iommu: Add helpers to set/get default domain type
> iommu: Use Functions to set default domain type in
> iommu_set_
From: Joerg Roedel
Hi,
This patch-set started out small to overwrite the default passthrough
setting (through CONFIG_IOMMU_DEFAULT_PASSTHROUGH=y) when SME is active.
But on the way to that Tom reminded me that the current ways to
configure passthrough/no-passthrough modes for IOMMU on x86
Hey Lu Baolu,
thanks for your review!
On Thu, Aug 15, 2019 at 01:01:57PM +0800, Lu Baolu wrote:
> > +#define IOMMU_CMD_LINE_DMA_API (1 << 0)
>
> Prefer BIT() macro?
Yes, I'll change that.
> > + iommu_set_cmd_line_dma_api();
>
> IOMMU command line is also set in other places,
Hi Greg,
On Tue, Aug 13, 2019 at 08:36:42PM +0200, Greg Kroah-Hartman wrote:
> On Tue, Aug 13, 2019 at 05:28:11PM +0200, Joerg Roedel wrote:
> > From: Joerg Roedel
> >
> > Backport commits from upstream to fix a data corruption
> > issue that gets expos
From: Joerg Roedel
commit 3f8fd02b1bf1d7ba964485a56f2f4b53ae88c167 upstream.
On x86-32 with PTI enabled, parts of the kernel page-tables are not shared
between processes. This can cause mappings in the vmalloc/ioremap area to
persist in some page-tables after the region is unmapped and released
From: Joerg Roedel
Backport commits from upstream to fix a data corruption
issue that gets exposed when using PTI on x86-32.
Please consider them for inclusion into stable-4.19.
Joerg Roedel (3):
x86/mm: Check for pfn instead of page in vmalloc_sync_one()
x86/mm: Sync also unmappings
From: Joerg Roedel
commit 3f8fd02b1bf1d7ba964485a56f2f4b53ae88c167 upstream.
On x86-32 with PTI enabled, parts of the kernel page-tables are not shared
between processes. This can cause mappings in the vmalloc/ioremap area to
persist in some page-tables after the region is unmapped and released
From: Joerg Roedel
commit 8e998fc24de47c55b47a887f6c95ab91acd4a720 upstream.
With huge-page ioremap areas the unmappings also need to be synced between
all page-tables. Otherwise it can cause data corruption when a region is
unmapped and later re-used.
Make the vmalloc_sync_one() function
From: Joerg Roedel
Backport commits from upstream to fix a data corruption
issue that gets exposed when using PTI on x86-32.
Please consider them for inclusion into stable-5.2.
Joerg Roedel (3):
x86/mm: Check for pfn instead of page in vmalloc_sync_one()
x86/mm: Sync also unmappings
From: Joerg Roedel
commit 51b75b5b563a2637f9d8dc5bd02a31b2ff9e5ea0 upstream.
Do not require a struct page for the mapped memory location because it
might not exist. This can happen when an ioremapped region is mapped with
2MB pages.
Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping
From: Joerg Roedel
commit 8e998fc24de47c55b47a887f6c95ab91acd4a720 upstream.
With huge-page ioremap areas the unmappings also need to be synced between
all page-tables. Otherwise it can cause data corruption when a region is
unmapped and later re-used.
Make the vmalloc_sync_one() function
From: Joerg Roedel
commit 51b75b5b563a2637f9d8dc5bd02a31b2ff9e5ea0 upstream.
Do not require a struct page for the mapped memory location because it
might not exist. This can happen when an ioremapped region is mapped with
2MB pages.
Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping
lp maintain kvm/queue
> while I am on vacation. Since not much is going to change, I will let
> him decide whether he wants to keep the maintainer role after he leaves.
>
> Cc: Sean Christopherson
> Cc: Vitaly Kuznetsov
> Cc: Wanpeng Li
> Cc: Jim Mattson
> Cc: Joerg Roedel
On Tue, Jul 16, 2019 at 10:38:05PM +0100, Dmitry Safonov wrote:
> @@ -235,6 +236,11 @@ static inline void init_iova_domain(struct iova_domain
> *iovad,
> {
> }
>
> +bool has_iova_flush_queue(struct iova_domain *iovad)
> +{
> + return false;
> +}
> +
This needs to be 'static inline', I
On Mon, Jul 22, 2019 at 10:19:32AM +0200, Thomas Gleixner wrote:
> On Mon, 22 Jul 2019, Joerg Roedel wrote:
>
> > Srewed up the subject :(, it needs to be
>
> Un-Srewed it :)
Thanks a lot :)
Commit-ID: 3f8fd02b1bf1d7ba964485a56f2f4b53ae88c167
Gitweb: https://git.kernel.org/tip/3f8fd02b1bf1d7ba964485a56f2f4b53ae88c167
Author: Joerg Roedel
AuthorDate: Fri, 19 Jul 2019 20:46:52 +0200
Committer: Thomas Gleixner
CommitDate: Mon, 22 Jul 2019 10:18:30 +0200
mm/vmalloc: Sync
Commit-ID: 8e998fc24de47c55b47a887f6c95ab91acd4a720
Gitweb: https://git.kernel.org/tip/8e998fc24de47c55b47a887f6c95ab91acd4a720
Author: Joerg Roedel
AuthorDate: Fri, 19 Jul 2019 20:46:51 +0200
Committer: Thomas Gleixner
CommitDate: Mon, 22 Jul 2019 10:18:30 +0200
x86/mm: Sync also
Commit-ID: 51b75b5b563a2637f9d8dc5bd02a31b2ff9e5ea0
Gitweb: https://git.kernel.org/tip/51b75b5b563a2637f9d8dc5bd02a31b2ff9e5ea0
Author: Joerg Roedel
AuthorDate: Fri, 19 Jul 2019 20:46:50 +0200
Committer: Thomas Gleixner
CommitDate: Mon, 22 Jul 2019 10:18:30 +0200
x86/mm: Check for pfn
Srewed up the subject :(, it needs to be
"mm/vmalloc: Sync unmappings in __purge_vmap_area_lazy()"
of course.
On Fri, Jul 19, 2019 at 08:46:52PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> On x86-32 with PTI enabled, parts of the kernel page-tables
>
From: Joerg Roedel
With huge-page ioremap areas the unmappings also need to be
synced between all page-tables. Otherwise it can cause data
corruption when a region is unmapped and later re-used.
Make the vmalloc_sync_one() function ready to sync
unmappings and make sure vmalloc_sync_all
From: Joerg Roedel
On x86-32 with PTI enabled, parts of the kernel page-tables
are not shared between processes. This can cause mappings in
the vmalloc/ioremap area to persist in some page-tables
after the region is unmapped and released.
When the region is re-used the processes with the old
From: Joerg Roedel
Do not require a struct page for the mapped memory location
because it might not exist. This can happen when an
ioremapped region is mapped with 2MB pages.
Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F')
Reviewed-by: Dave Hansen
Signed-off-by: Joerg
all()
really iterates over all pgds (pointed out by
Thomas Gleixner)
- Added a couple of comments
Changes v1 -> v2:
- Added correct Fixes-tags to all patches
Joerg Roedel (3):
x86/mm: Check for pfn instead of page in vmalloc_sync_one()
x86/mm: Syn
On Thu, Jul 18, 2019 at 11:04:57AM +0200, Thomas Gleixner wrote:
> Joerg,
>
> On Thu, 18 Jul 2019, Joerg Roedel wrote:
> > On Wed, Jul 17, 2019 at 11:43:43PM +0200, Thomas Gleixner wrote:
> > > On Wed, 17 Jul 2019, Joerg Roedel wrote:
> > > > +
>
On Fri, Jul 19, 2019 at 05:24:03AM -0700, Andy Lutomirski wrote:
> Could you move the vmalloc_sync_all() call to the lazy purge path,
> though? If nothing else, it will cause it to be called fewer times
> under any given workload, and it looks like it could be rather slow on
> x86_32.
Okay, I
On Thu, Jul 18, 2019 at 12:04:49PM -0700, Andy Lutomirski wrote:
> I find it problematic that there is no meaningful documentation as to
> what vmalloc_sync_all() is supposed to do.
Yeah, I found that too, there is no real design around
vmalloc_sync_all(). It looks like it was just added to fit
On Thu, Jul 18, 2019 at 11:04:57AM +0200, Thomas Gleixner wrote:
> On Thu, 18 Jul 2019, Joerg Roedel wrote:
> > No, you are right, I missed that. It is a bug in this patch, the code
> > that breaks out of the loop in vmalloc_sync_all() needs to be removed as
> > well. Wil
Hi Andy,
On Wed, Jul 17, 2019 at 02:24:09PM -0700, Andy Lutomirski wrote:
> On Wed, Jul 17, 2019 at 12:14 AM Joerg Roedel wrote:
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 4fa8d84599b0..322b11a374fd 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
>
Hi Thomas,
On Wed, Jul 17, 2019 at 11:43:43PM +0200, Thomas Gleixner wrote:
> On Wed, 17 Jul 2019, Joerg Roedel wrote:
> > +
> > + if (!pmd_present(*pmd_k))
> > + return NULL;
> > else
> > BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
>
Hi Dave,
On Wed, Jul 17, 2019 at 02:06:01PM -0700, Dave Hansen wrote:
> On 7/17/19 12:14 AM, Joerg Roedel wrote:
> > - if (!pmd_present(*pmd))
> > + if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
> > set_pmd(pmd, *pmd_k);
>
> Wouldn't:
>
From: Joerg Roedel
On x86-32 with PTI enabled, parts of the kernel page-tables
are not shared between processes. This can cause mappings in
the vmalloc/ioremap area to persist in some page-tables
after the region is unmapped and released.
When the region is re-used the processes with the old
From: Joerg Roedel
Do not require a struct page for the mapped memory location
because it might not exist. This can happen when an
ioremapped region is mapped with 2MB pages.
Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F')
Signed-off-by: Joerg Roedel
---
arch/x86/mm
page-tables, causing data corruption and
other undefined behavior.
Please review.
Thanks,
Joerg
Changes since v1:
- Added correct Fixes-tags to all patches
Joerg Roedel (3):
x86/mm: Check for pfn instead of page in vmalloc_sync_one()
x86/mm: Sync also unmappings
From: Joerg Roedel
With huge-page ioremap areas the unmappings also need to be
synced between all page-tables. Otherwise it can cause data
corruption when a region is unmapped and later re-used.
Make the vmalloc_sync_one() function ready to sync
unmappings.
Fixes: 5d72b4fba40ef ('x86, mm
On Mon, Jul 15, 2019 at 03:08:42PM +0200, Thomas Gleixner wrote:
> On Mon, 15 Jul 2019, Joerg Roedel wrote:
>
> > From: Joerg Roedel
> >
> > Do not require a struct page for the mapped memory location
> > because it might not exist. This can happen when an
>
page-tables, causing data corruption and
other undefined behavior.
Please review.
Thanks,
Joerg
Joerg Roedel (3):
x86/mm: Check for pfn instead of page in vmalloc_sync_one()
x86/mm: Sync also unmappings in vmalloc_sync_one()
mm/vmalloc: Sync unmappings in vunmap_page_range()
arch
From: Joerg Roedel
On x86-32 with PTI enabled, parts of the kernel page-tables
are not shared between processes. This can cause mappings in
the vmalloc/ioremap area to persist in some page-tables
after the region is unmapped and released.
When the region is re-used the processes with the old
From: Joerg Roedel
With huge-page ioremap areas the unmappings also need to be
synced between all page-tables. Otherwise it can cause data
corruption when a region is unmapped and later re-used.
Make the vmalloc_sync_one() function ready to sync
unmappings.
Signed-off-by: Joerg Roedel
From: Joerg Roedel
Do not require a struct page for the mapped memory location
because it might not exist. This can happen when an
ioremapped region is mapped with 2MB pages.
Signed-off-by: Joerg Roedel
---
arch/x86/mm/fault.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
On Tue, Jul 02, 2019 at 11:23:34AM -0400, Michael S. Tsirkin wrote:
> I can drop virtio iommu from my tree. Where's yours? I'd like to take a
> last look and send an ack.
It is not in my tree yet, because I was waiting for your ack on the
patches wrt. the spec.
Given that the merge window is
On Tue, Jul 02, 2019 at 03:18:03PM +0100, Jean-Philippe Brucker wrote:
> Nathan, thanks for noticing and fixing this.
>
> Joerg, the virtio-iommu driver build failed in next because of a
> dependency on driver-core changes for v5.3. I'm not sure what the best
> practice is in this case, I guess I
On Tue, Jul 02, 2019 at 01:03:22PM +0100, Will Deacon wrote:
> Joerg -- please can you take this on top of the SMMUv3 patches queued
> for 5.3?
Applied, thanks.
On Mon, Jun 24, 2019 at 01:17:42PM -0700, Jacob Pan wrote:
> drivers/iommu/intel_irq_remapping.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
Applied, thanks.
On Mon, Jun 03, 2019 at 10:05:19AM -0400, Qian Cai wrote:
> The commit "iommu/vt-d: Probe DMA-capable ACPI name space devices"
> introduced a compilation warning due to the "iommu" variable in
> for_each_active_iommu() but never used the for each element, i.e,
> "drhd->iommu".
>
>