Re: [RFC PATCH v4 19/28] swiotlb: Add warnings for use of bounce buffers with SME

2017-02-17 Thread Tom Lendacky

On 2/17/2017 9:59 AM, Konrad Rzeszutek Wilk wrote:

On Thu, Feb 16, 2017 at 09:46:19AM -0600, Tom Lendacky wrote:

Add warnings to let the user know when bounce buffers are being used for
DMA when SME is active.  Since the bounce buffers are not in encrypted
memory, these notifications are to allow the user to determine some
appropriate action - if necessary.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h |   11 +++++++++++
 include/linux/dma-mapping.h        |   11 +++++++++++
 include/linux/mem_encrypt.h        |    6 ++++++
 lib/swiotlb.c                      |    3 +++
 4 files changed, 31 insertions(+)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 87e816f..5a17f1b 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -26,6 +26,11 @@ static inline bool sme_active(void)
return (sme_me_mask) ? true : false;
 }

+static inline u64 sme_dma_mask(void)
+{
+   return ((u64)sme_me_mask << 1) - 1;
+}
+
 void __init sme_early_encrypt(resource_size_t paddr,
  unsigned long size);
 void __init sme_early_decrypt(resource_size_t paddr,
@@ -53,6 +58,12 @@ static inline bool sme_active(void)
 {
return false;
 }
+
+static inline u64 sme_dma_mask(void)
+{
+   return 0ULL;
+}
+
 #endif

 static inline void __init sme_early_encrypt(resource_size_t paddr,
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 10c5a17..130bef7 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -10,6 +10,7 @@
 #include 
 #include 
 #include 
+#include 

 /**
  * List of possible attributes associated with a DMA mapping. The semantics
@@ -557,6 +558,11 @@ static inline int dma_set_mask(struct device *dev, u64 mask)

if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
+
+   if (sme_active() && (mask < sme_dma_mask()))
+   dev_warn(dev,
+"SME is active, device will require DMA bounce buffers\n");


You can make it one line. But I am wondering if you should use
printk_ratelimit as this may fill the console up.


I thought the use of dma_set_mask() was mostly a one-time probe/setup
thing so I didn't think we would get that many of these messages. If
dma_set_mask() is called much more often than that, I can change this
to a printk_ratelimit().  I'll look into it further.




+
*dev->dma_mask = mask;
return 0;
 }
@@ -576,6 +582,11 @@ static inline int dma_set_coherent_mask(struct device *dev, u64 mask)
 {
if (!dma_supported(dev, mask))
return -EIO;
+
+   if (sme_active() && (mask < sme_dma_mask()))
+   dev_warn(dev,
+"SME is active, device will require DMA bounce buffers\n");


Ditto.

+
dev->coherent_dma_mask = mask;
return 0;
 }
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 14a7b9f..6829ff1 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -28,6 +28,12 @@ static inline bool sme_active(void)
 {
return false;
 }
+
+static inline u64 sme_dma_mask(void)
+{
+   return 0ULL;
+}
+
 #endif

 #endif /* CONFIG_AMD_MEM_ENCRYPT */
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index c463067..aff9353 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -509,6 +509,9 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
if (no_iotlb_memory)
panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");

+   WARN_ONCE(sme_active(),
+ "SME is active and system is using DMA bounce buffers\n");


How does that help?

As in what can the user do with this?


It's meant just to notify the user about the condition. The user could
then decide to use an alternative device that supports a greater DMA
range (I can probably change it to a dev_warn_once() so that a device
is identified).  It would be nice if I could issue this message once per
device that experienced this.  I didn't see anything that would do
that, though.

Thanks,
Tom


+
mask = dma_get_seg_boundary(hwdev);

tbl_dma_addr &= mask;


___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC PATCH v4 26/28] x86: Allow kexec to be used with SME

2017-02-17 Thread Tom Lendacky

On 2/17/2017 9:57 AM, Konrad Rzeszutek Wilk wrote:

On Thu, Feb 16, 2017 at 09:47:55AM -0600, Tom Lendacky wrote:

Provide support so that kexec can be used to boot a kernel when SME is
enabled.


Is the point of kexec and kdump to ehh, dump memory ? But if the
rest of the memory is encrypted you won't get much, will you?


Kexec can be used to reboot a system without going back through BIOS.
So you can use kexec without using kdump.

For kdump, just taking a quick look, the option to enable memory
encryption can be provided on the crash kernel command line and then
the crash kernel would be able to copy the memory decrypted if the
pagetable is set up properly. It looks like currently ioremap_cache()
is used to map the old memory page.  That might be able to be changed
to a memremap() so that the encryption bit is set in the mapping. That
would mean that memory that is not marked encrypted (EFI tables, swiotlb
memory, etc.) would not be read correctly.



Would it make sense to include some printk to the user if they
are setting up kdump that they won't get anything out of it?


Probably a good idea to add something like that.

Thanks,
Tom



Thanks.





Re: [RFC PATCH v4 02/28] x86: Set the write-protect cache mode for full PAT support

2017-02-17 Thread Tom Lendacky

On 2/17/2017 5:07 AM, Borislav Petkov wrote:

On Thu, Feb 16, 2017 at 09:42:25AM -0600, Tom Lendacky wrote:

For processors that support PAT, set the write-protect cache mode
(_PAGE_CACHE_MODE_WP) entry to the actual write-protect value (0x05).

Acked-by: Borislav Petkov 
Signed-off-by: Tom Lendacky 


Just a nit:

Subject should have "x86/mm/pat: " prefix but that can be fixed when
applying.


I'll go through the series and verify/fix the prefix for each patch.

Thanks,
Tom






Re: [PATCH v4] arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU

2017-02-17 Thread Robin Murphy
On 07/02/17 15:38, Geert Uytterhoeven wrote:
> Add support for allocating physically contiguous DMA buffers on arm64
> systems with an IOMMU.  This can be useful when two or more devices
> with different memory requirements are involved in buffer sharing.
> 
> Note that as this uses the CMA allocator, setting the
> DMA_ATTR_FORCE_CONTIGUOUS attribute has a runtime-dependency on
> CONFIG_DMA_CMA, just like on arm32.
> 
> For arm64 systems using swiotlb, no changes are needed to support the
> allocation of physically contiguous DMA buffers:
>   - swiotlb always uses physically contiguous buffers (up to
> IO_TLB_SEGSIZE = 128 pages),
>   - arm64's __dma_alloc_coherent() already calls
> dma_alloc_from_contiguous() when CMA is available.

I think this looks about as good as it ever could now :)

Reviewed-by: Robin Murphy 

Thanks,
Robin.

> Signed-off-by: Geert Uytterhoeven 
> Acked-by: Laurent Pinchart 
> ---
> v4:
>   - Replace dma_to_phys()/phys_to_page() by vmalloc_to_page(), to pass
> the correct page pointer to dma_release_from_contiguous().
> Note that the latter doesn't scream when passed a wrong pointer, but
> just returns false.  While this makes life easier for callers who
> may want to call another deallocator, it makes it harder to catch
> bugs.
> 
> v3:
>   - Add Acked-by,
>   - Update comment to "one of _4_ things",
>   - Call dma_alloc_from_contiguous() and iommu_dma_map_page() directly,
> as suggested by Robin Murphy,
> 
> v2:
>   - New, handle dispatching in the arch (arm64) code, as requested by
> Robin Murphy.
> ---
>  arch/arm64/mm/dma-mapping.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++---------------
>  1 file changed, 48 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
> index 351f7595cb3ebdb9..fb76e82c90edd514 100644
> --- a/arch/arm64/mm/dma-mapping.c
> +++ b/arch/arm64/mm/dma-mapping.c
> @@ -584,20 +584,7 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
>*/
>   gfp |= __GFP_ZERO;
>  
> - if (gfpflags_allow_blocking(gfp)) {
> - struct page **pages;
> - pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
> -
> - pages = iommu_dma_alloc(dev, iosize, gfp, attrs, ioprot,
> - handle, flush_page);
> - if (!pages)
> - return NULL;
> -
> - addr = dma_common_pages_remap(pages, size, VM_USERMAP, prot,
> -   __builtin_return_address(0));
> - if (!addr)
> - iommu_dma_free(dev, pages, iosize, handle);
> - } else {
> + if (!gfpflags_allow_blocking(gfp)) {
>   struct page *page;
>   /*
>* In atomic context we can't remap anything, so we'll only
> @@ -621,6 +608,45 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
>   __free_from_pool(addr, size);
>   addr = NULL;
>   }
> + } else if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
> + pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
> + struct page *page;
> +
> + page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
> +  get_order(size));
> + if (!page)
> + return NULL;
> +
> + *handle = iommu_dma_map_page(dev, page, 0, iosize, ioprot);
> + if (iommu_dma_mapping_error(dev, *handle)) {
> + dma_release_from_contiguous(dev, page,
> + size >> PAGE_SHIFT);
> + return NULL;
> + }
> + if (!coherent)
> + __dma_flush_area(page_to_virt(page), iosize);
> +
> + addr = dma_common_contiguous_remap(page, size, VM_USERMAP,
> +prot,
> +__builtin_return_address(0));
> + if (!addr) {
> + iommu_dma_unmap_page(dev, *handle, iosize, 0, attrs);
> + dma_release_from_contiguous(dev, page,
> + size >> PAGE_SHIFT);
> + }
> + } else {
> + pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
> + struct page **pages;
> +
> + pages = iommu_dma_alloc(dev, iosize, gfp, attrs, ioprot,
> + handle, flush_page);
> + if (!pages)
> + return NULL;
> +
> + addr = dma_common_pages_remap(pages, size, VM_USERMAP, prot,
> +   __builtin_return_address(0));
> + if (!addr)
> + iommu_dma_free(dev, pages, iosize, handle);
>   }
>   return addr;
>  }
> 

Re: [RFC PATCH v4 05/28] x86: Add Secure Memory Encryption (SME) support

2017-02-17 Thread Borislav Petkov
On Thu, Feb 16, 2017 at 09:43:07AM -0600, Tom Lendacky wrote:
> Add support for Secure Memory Encryption (SME). This initial support
> provides a Kconfig entry to build the SME support into the kernel and
> defines the memory encryption mask that will be used in subsequent
> patches to mark pages as encrypted.
> 
> Signed-off-by: Tom Lendacky 
> ---
>  arch/x86/Kconfig                   |   22 ++++++++++++++++++++++
>  arch/x86/include/asm/mem_encrypt.h |   42 ++++++++++++++++++++++++++++++++++++++++++++
>  arch/x86/mm/Makefile               |    1 +
>  arch/x86/mm/mem_encrypt.c          |   21 +++++++++++++++++++++
>  include/linux/mem_encrypt.h        |   37 +++++++++++++++++++++++++++++++++++++
>  5 files changed, 123 insertions(+)
>  create mode 100644 arch/x86/include/asm/mem_encrypt.h
>  create mode 100644 arch/x86/mm/mem_encrypt.c
>  create mode 100644 include/linux/mem_encrypt.h
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index f8fbfc5..a3b8c71 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1395,6 +1395,28 @@ config X86_DIRECT_GBPAGES
> supports them), so don't confuse the user by printing
> that we have them enabled.
>  
> +config AMD_MEM_ENCRYPT
> + bool "AMD Secure Memory Encryption (SME) support"
> + depends on X86_64 && CPU_SUP_AMD
> + ---help---
> +   Say yes to enable support for the encryption of system memory.
> +   This requires an AMD processor that supports Secure Memory
> +   Encryption (SME).
> +
> +config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
> + bool "Activate AMD Secure Memory Encryption (SME) by default"
> + default y
> + depends on AMD_MEM_ENCRYPT
> + ---help---
> +   Say yes to have system memory encrypted by default if running on
> +   an AMD processor that supports Secure Memory Encryption (SME).
> +
> +   If set to Y, then the encryption of system memory can be
> +   deactivated with the mem_encrypt=off command line option.
> +
> +   If set to N, then the encryption of system memory can be
> +   activated with the mem_encrypt=on command line option.

Good.

-- 
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.



Re: [RFC PATCH v4 04/28] x86: Handle reduction in physical address size with SME

2017-02-17 Thread Borislav Petkov
On Thu, Feb 16, 2017 at 09:42:54AM -0600, Tom Lendacky wrote:
> When System Memory Encryption (SME) is enabled, the physical address
> space is reduced. Adjust the x86_phys_bits value to reflect this
> reduction.
> 
> Signed-off-by: Tom Lendacky 
> ---
>  arch/x86/kernel/cpu/common.c |   10 +++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index b33bc06..358208d7 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -771,11 +771,15 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
>   u64 msr;
>  
>   /*
> -  * For SME, BIOS support is required. If BIOS has not
> -  * enabled SME don't advertise the feature.
> +  * For SME, BIOS support is required. If BIOS has
> +  * enabled SME adjust x86_phys_bits by the SME
> +  * physical address space reduction value. If BIOS
> +  * has not enabled SME don't advertise the feature.
>*/
>   rdmsrl(MSR_K8_SYSCFG, msr);
> - if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
> + if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT)
> + c->x86_phys_bits -= (ebx >> 6) & 0x3f;
> + else
>   eax &= ~0x01;

Right, as I mentioned yesterday, this should go to arch/x86/kernel/cpu/amd.c

-- 
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.