Re: [PATCH v2 2/4] efi/arm: map UEFI memory map even w/o runtime services enabled

2018-06-28 Thread James Morse
Hi Akashi,

On 19/06/18 07:44, AKASHI Takahiro wrote:
> Under the current implementation, the UEFI memory map is mapped and made
> available via virtual mappings only if runtime services are enabled.

> But in a later patch, we want to use the UEFI memory map in acpi_os_ioremap()
> to create mappings of ACPI tables using the memory attributes described in
> the UEFI memory map.
> 
> So, as a first step, arm_enter_runtime_services() will be modified
> so that the UEFI memory map is always accessible.
> 
> See the relevant commit:
> arm64: acpi: fix alignment fault in accessing ACPI tables
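
(For reference, a minimal sketch of the intent, assuming the drivers/firmware/efi/arm-runtime.c layout of that era, where the init routine is arm_enable_runtime_services(); names and error handling below are illustrative, not the actual patch:)

	static int __init arm_enable_runtime_services(void)
	{
		u64 mapsize;

		if (!efi_enabled(EFI_BOOT)) {
			pr_info("EFI services will not be available.\n");
			return 0;
		}

		mapsize = efi.memmap.desc_size * efi.memmap.nr_map;

		/*
		 * Map the UEFI memory map unconditionally, so that later
		 * consumers such as acpi_os_ioremap() can look up memory
		 * attributes even when runtime services are disabled.
		 */
		if (efi_memmap_init_late(efi.memmap.phys_map, mapsize)) {
			pr_err("Failed to remap EFI memory map\n");
			return 0;
		}

		/* Only now bail out for "efi=noruntime". */
		if (efi_runtime_disabled()) {
			pr_info("EFI runtime services will be disabled.\n");
			return 0;
		}

		/* ... efi_virtmap_init(), runtime service wrappers, etc. ... */
		return 0;
	}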

For what it's worth:
Acked-by: James Morse 


Thanks,

James



Re: [PATCH v2 4/4] arm64: acpi: fix alignment fault in accessing ACPI tables

2018-06-28 Thread James Morse
Hi Akashi,

On 19/06/18 07:44, AKASHI Takahiro wrote:
> This is a fix for an issue where the crash dump kernel may hang
> during boot, which can happen on any ACPI-based system with "ACPI
> Reclaim Memory."
> 
> (kernel messages after panic kicked off kdump)
>  (snip...)
>   Bye!
>  (snip...)
>   ACPI: Core revision 20170728
>   pud=2e7d0003, *pmd=2e7c0003, *pte=00e839710707
>   Internal error: Oops: 9621 [#1] SMP
>   Modules linked in:
>   CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.14.0-rc6 #1
>   task: 08d05180 task.stack: 08cc
>   PC is at acpi_ns_lookup+0x25c/0x3c0
>   LR is at acpi_ds_load1_begin_op+0xa4/0x294
>  (snip...)
>   Process swapper/0 (pid: 0, stack limit = 0x08cc)
>   Call trace:
>  (snip...)
>   [] acpi_ns_lookup+0x25c/0x3c0
>   [] acpi_ds_load1_begin_op+0xa4/0x294
>   [] acpi_ps_build_named_op+0xc4/0x198
>   [] acpi_ps_create_op+0x14c/0x270
>   [] acpi_ps_parse_loop+0x188/0x5c8
>   [] acpi_ps_parse_aml+0xb0/0x2b8
>   [] acpi_ns_one_complete_parse+0x144/0x184
>   [] acpi_ns_parse_table+0x48/0x68
>   [] acpi_ns_load_table+0x4c/0xdc
>   [] acpi_tb_load_namespace+0xe4/0x264
>   [] acpi_load_tables+0x48/0xc0
>   [] acpi_early_init+0x9c/0xd0
>   [] start_kernel+0x3b4/0x43c
>   Code: b9008fb9 2a000318 36380054 32190318 (b94002c0)
>   ---[ end trace c46ed37f9651c58e ]---
>   Kernel panic - not syncing: Fatal exception
>   Rebooting in 10 seconds..
> 
> (diagnosis)
> * This fault is a data abort, alignment fault (ESR=0x9621),
>   taken while reading an ACPI table.
> * Initial ACPI tables are normally stored in system RAM and marked as
>   "ACPI Reclaim memory" by the firmware.
> * After commit f56ab9a5b73c ("efi/arm: Don't mark ACPI reclaim
>   memory as MEMBLOCK_NOMAP"), those regions are handled differently:
>   they are now "memblock-reserved", without the NOMAP bit.
> * So they are now excluded from the device tree's "usable-memory-range",
>   which kexec-tools determines based on its current view of /proc/iomem.
> * When the crash dump kernel boots, it tries to access the ACPI tables
>   by mapping them with ioremap(), not ioremap_cache(), in acpi_os_ioremap(),
>   since they are no longer part of the mapped system RAM.
> * Given that the ACPI accessor/helper functions are compiled without
>   unaligned access support (ACPI_MISALIGNMENT_NOT_SUPPORTED),
>   any unaligned access to the ACPI tables can cause a fatal panic.
> 
> With this patch, acpi_os_ioremap() always honors the memory attribute
> information provided by the firmware (EFI); retaining cacheability
> allows the kernel to access the ACPI tables safely.
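
(A minimal sketch of the resulting behaviour, not the literal patch: acpi_os_ioremap() keeps normal RAM on the cacheable linear map and otherwise derives the mapping attributes from the UEFI memory map; the helper name __acpi_get_mem_attribute() and its exact home in arch/arm64/include/asm/acpi.h are assumptions here:)

	static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
						    acpi_size size)
	{
		/* Normal system RAM is already mapped cacheable. */
		if (memblock_is_map_memory(phys))
			return (void __iomem *)__phys_to_virt(phys);

		/*
		 * Otherwise derive the pgprot from the EFI memory map entry,
		 * so "ACPI Reclaim" regions stay Normal/cacheable and the
		 * unaligned accesses done by ACPICA remain legal.
		 */
		return __ioremap(phys, size, __acpi_get_mem_attribute(phys));
	}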

Reviewed-by: James Morse 


Thanks,

James



Re: [PATCH 5/5 V4] Help to dump the old memory encrypted into vmcore file

2018-06-28 Thread kbuild test robot
Hi Lianbo,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.18-rc2 next-20180628]
[cannot apply to tip/x86/core]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:
https://github.com/0day-ci/linux/commits/Lianbo-Jiang/Add-a-function-ioremap_encrypted-for-kdump-when-AMD-sme-enabled/20180628-173357
config: arm64-defconfig (attached as .config)
compiler: aarch64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.2.0 make.cross ARCH=arm64

All errors (new ones prefixed by >>):

   fs/proc/vmcore.c: In function 'elfcorehdr_read':
>> fs/proc/vmcore.c:180:9: error: implicit declaration of function 'memremap'; did you mean 'memset_p'? [-Werror=implicit-function-declaration]
     kbuf = memremap(offset, count, MEMREMAP_WB);
            ^~~~~~~~
            memset_p
   fs/proc/vmcore.c:180:33: error: 'MEMREMAP_WB' undeclared (first use in this function)
     kbuf = memremap(offset, count, MEMREMAP_WB);
                                    ^~~~~~~~~~~
   fs/proc/vmcore.c:180:33: note: each undeclared identifier is reported only once for each function it appears in
   fs/proc/vmcore.c:185:2: error: implicit declaration of function 'memunmap'; did you mean 'vm_munmap'? [-Werror=implicit-function-declaration]
     memunmap(kbuf);
     ^~~~~~~~
     vm_munmap
   cc1: some warnings being treated as errors
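
(Not part of the robot report: memremap(), memunmap() and MEMREMAP_WB are declared in <linux/io.h>, so the likely fix for the next revision is simply the missing include in fs/proc/vmcore.c — an assumption, to be confirmed by the patch author:)

	#include <linux/io.h>	/* memremap(), memunmap(), MEMREMAP_WB */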

vim +180 fs/proc/vmcore.c

   158  
   159  /*
   160   * Architectures may override this function to read from ELF header.
   161   * The kexec-tools will allocated the memory and build the elf header
   162   * in the first kernel, subsequently, we will copy the data in the
   163   * memory to the reserved crash memory. In kdump mode, we will read the
   164   * elf header from the reserved crash memory, from this point of view,
   165   * which is not an old memory, the original function called may mislead
   166   * and do unnecessary things.
   167   * For SME, it copies the elf header from the memory encrypted(user space)
   168   * to the memory unencrypted(kernel space) when SME is activated in the
   169   * first kernel, this operation just leads to decryption.
   170   */
   171  ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
   172  {
   173  char *kbuf;
   174  resource_size_t offset;
   175  
   176  if (!count)
   177  return 0;
   178  
   179  offset = (resource_size_t)*ppos;
 > 180  kbuf = memremap(offset, count, MEMREMAP_WB);
   181  if (!kbuf)
   182  return 0;
   183  
   184  memcpy(buf, kbuf, count);
   185  memunmap(kbuf);
   186  
   187  return count;
   188  }
   189  

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation




Re: [PATCH 5/5 V4] Help to dump the old memory encrypted into vmcore file

2018-06-28 Thread kbuild test robot
Hi Lianbo,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.18-rc2 next-20180628]
[cannot apply to tip/x86/core]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:
https://github.com/0day-ci/linux/commits/Lianbo-Jiang/Add-a-function-ioremap_encrypted-for-kdump-when-AMD-sme-enabled/20180628-173357
config: s390-allyesconfig (attached as .config)
compiler: s390x-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.2.0 make.cross ARCH=s390

All errors (new ones prefixed by >>):

   fs/proc/vmcore.c: In function 'elfcorehdr_read':
>> fs/proc/vmcore.c:180:9: error: implicit declaration of function 'memremap'; did you mean 'ioremap'? [-Werror=implicit-function-declaration]
     kbuf = memremap(offset, count, MEMREMAP_WB);
            ^~~~~~~~
            ioremap
>> fs/proc/vmcore.c:180:33: error: 'MEMREMAP_WB' undeclared (first use in this function)
     kbuf = memremap(offset, count, MEMREMAP_WB);
                                    ^~~~~~~~~~~
   fs/proc/vmcore.c:180:33: note: each undeclared identifier is reported only once for each function it appears in
>> fs/proc/vmcore.c:185:2: error: implicit declaration of function 'memunmap'; did you mean 'vm_munmap'? [-Werror=implicit-function-declaration]
     memunmap(kbuf);
     ^~~~~~~~
     vm_munmap
   cc1: some warnings being treated as errors

vim +180 fs/proc/vmcore.c

   158  
   159  /*
   160   * Architectures may override this function to read from ELF header.
   161   * The kexec-tools will allocated the memory and build the elf header
   162   * in the first kernel, subsequently, we will copy the data in the
   163   * memory to the reserved crash memory. In kdump mode, we will read the
   164   * elf header from the reserved crash memory, from this point of view,
   165   * which is not an old memory, the original function called may mislead
   166   * and do unnecessary things.
   167   * For SME, it copies the elf header from the memory encrypted(user space)
   168   * to the memory unencrypted(kernel space) when SME is activated in the
   169   * first kernel, this operation just leads to decryption.
   170   */
   171  ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
   172  {
   173  char *kbuf;
   174  resource_size_t offset;
   175  
   176  if (!count)
   177  return 0;
   178  
   179  offset = (resource_size_t)*ppos;
 > 180  kbuf = memremap(offset, count, MEMREMAP_WB);
   181  if (!kbuf)
   182  return 0;
   183  
   184  memcpy(buf, kbuf, count);
 > 185  memunmap(kbuf);
   186  
   187  return count;
   188  }
   189  

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation




[PATCH 5/5 V4] Help to dump the old memory encrypted into vmcore file

2018-06-28 Thread Lianbo Jiang
In kdump mode, we need to dump the old memory into the vmcore file.
If SME is enabled in the first kernel, we must remap the old memory
in an encrypted manner, so that it is automatically decrypted when
read from DRAM. This helps tools parse the vmcore correctly.

Signed-off-by: Lianbo Jiang 
---
Some changes:
1. add a new file and modify Makefile.
2. revert some code about previously using sev_active().
3. modify elfcorehdr_read().

 arch/x86/kernel/Makefile |  1 +
 arch/x86/kernel/crash_dump_encrypt.c | 53 
 fs/proc/vmcore.c | 45 +-
 include/linux/crash_dump.h   | 12 
 4 files changed, 104 insertions(+), 7 deletions(-)
 create mode 100644 arch/x86/kernel/crash_dump_encrypt.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 02d6f5c..afb5bad 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -96,6 +96,7 @@ obj-$(CONFIG_KEXEC_CORE)  += machine_kexec_$(BITS).o
 obj-$(CONFIG_KEXEC_CORE)   += relocate_kernel_$(BITS).o crash.o
 obj-$(CONFIG_KEXEC_FILE)   += kexec-bzimage64.o
 obj-$(CONFIG_CRASH_DUMP)   += crash_dump_$(BITS).o
+obj-$(CONFIG_AMD_MEM_ENCRYPT)  += crash_dump_encrypt.o
 obj-y  += kprobes/
 obj-$(CONFIG_MODULES)  += module.o
 obj-$(CONFIG_DOUBLEFAULT)  += doublefault.o
diff --git a/arch/x86/kernel/crash_dump_encrypt.c b/arch/x86/kernel/crash_dump_encrypt.c
new file mode 100644
index 000..e1b1a57
--- /dev/null
+++ b/arch/x86/kernel/crash_dump_encrypt.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Memory preserving reboot related code.
+ *
+ * Created by: Lianbo Jiang (liji...@redhat.com)
+ * Copyright (C) RedHat Corporation, 2018. All rights reserved
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+/**
+ * copy_oldmem_page_encrypted - copy one page from "oldmem encrypted"
+ * @pfn: page frame number to be copied
+ * @buf: target memory address for the copy; this can be in kernel address
+ * space or user address space (see @userbuf)
+ * @csize: number of bytes to copy
+ * @offset: offset in bytes into the page (based on pfn) to begin the copy
+ * @userbuf: if set, @buf is in user address space, use copy_to_user(),
+ * otherwise @buf is in kernel address space, use memcpy().
+ *
+ * Copy a page from "oldmem encrypted". For this page, there is no pte
+ * mapped in the current kernel. We stitch up a pte, similar to
+ * kmap_atomic.
+ */
+
+ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
+   size_t csize, unsigned long offset, int userbuf)
+{
+   void  *vaddr;
+
+   if (!csize)
+   return 0;
+
+   vaddr = (__force void *)ioremap_encrypted(pfn << PAGE_SHIFT,
+ PAGE_SIZE);
+   if (!vaddr)
+   return -ENOMEM;
+
+   if (userbuf) {
+   if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
+   iounmap((void __iomem *)vaddr);
+   return -EFAULT;
+   }
+   } else
+   memcpy(buf, vaddr + offset, csize);
+
+   set_iounmap_nonlazy();
+   iounmap((void __iomem *)vaddr);
+   return csize;
+}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index cfb6674..5fef489 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -26,6 +26,8 @@
 #include 
 #include 
 #include "internal.h"
+#include 
+#include 
 
 /* List representing chunks of contiguous memory areas and their offsets in
  * vmcore file.
@@ -98,7 +100,8 @@ static int pfn_is_ram(unsigned long pfn)
 
 /* Reads a page from the oldmem device from given offset. */
 static ssize_t read_from_oldmem(char *buf, size_t count,
-   u64 *ppos, int userbuf)
+   u64 *ppos, int userbuf,
+   bool encrypted)
 {
unsigned long pfn, offset;
size_t nr_bytes;
@@ -120,8 +123,11 @@ static ssize_t read_from_oldmem(char *buf, size_t count,
if (pfn_is_ram(pfn) == 0)
memset(buf, 0, nr_bytes);
else {
-   tmp = copy_oldmem_page(pfn, buf, nr_bytes,
-   offset, userbuf);
+   tmp = encrypted ? copy_oldmem_page_encrypted(pfn,
+   buf, nr_bytes, offset, userbuf)
+   : copy_oldmem_page(pfn, buf, nr_bytes,
+  offset, userbuf);
+
if (tmp < 0)
return tmp;
}
@@ -151,11 +157,34 @@ void __weak elfcorehdr_free(unsigned long long addr)
 {}
 
 /*
- * Architectures may override this function to read from ELF header
+ * Architectures may override this function to read from ELF header.
+ * The kexec-tools will al

[PATCH 4/5 V4] Adjust some permanent mappings in unencrypted ways for kdump when SME is enabled.

2018-06-28 Thread Lianbo Jiang
For kdump, the ACPI and DMI tables need to be remapped unencrypted
during early init. They use just a thin wrapper around early_memremap(),
but early_memremap() maps memory encrypted by default when SME is
enabled, so we put some logic into early_memremap_pgprot_adjust(),
which gets an opportunity to adjust the protection.
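
(To make the mechanism concrete, a rough sketch of the call path this relies on, modelled on the generic mm/early_ioremap.c code of that era; every early_memremap() user, including the ACPI and DMI wrappers, passes through the architecture hook before the fixmap mapping is created:)

	void __init *early_memremap(resource_size_t phys_addr, unsigned long size)
	{
		/* Architecture hook: may strip the SME C-bit from the prot. */
		pgprot_t prot = early_memremap_pgprot_adjust(phys_addr, size,
							     FIXMAP_PAGE_NORMAL);

		return (__force void *)__early_ioremap(phys_addr, size, prot);
	}

Since the firmware wrote those tables unencrypted, mapping them without the C-bit in the kdump kernel (the is_kdump_kernel() case below) lets them be read back correctly.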

Signed-off-by: Lianbo Jiang 
---
 arch/x86/mm/ioremap.c | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index e01e6c6..3c1c8c4 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -689,8 +689,17 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
encrypted_prot = true;
 
if (sme_active()) {
+   /*
+* In kdump mode, the acpi table and dmi table will need to
+* be remapped in unencrypted ways during early init when
+* SME is enabled. They have just a simple wrapper around
+* early_memremap(), but the early_memremap() remaps the
+* memory in encrypted ways by default when SME is enabled,
+* so we must adjust it.
+*/
if (early_memremap_is_setup_data(phys_addr, size) ||
-   memremap_is_efi_data(phys_addr, size))
+   memremap_is_efi_data(phys_addr, size) ||
+   is_kdump_kernel())
encrypted_prot = false;
}
 
-- 
2.9.5




[PATCH 0/5 V4] Support kdump for AMD secure memory encryption(SME)

2018-06-28 Thread Lianbo Jiang
When SME is enabled on an AMD server, we also need to support kdump.
Because the memory is encrypted in the first kernel, we remap the old
memory as encrypted in the second (crash) kernel, and SME must also be
enabled in the second kernel; otherwise the encrypted old memory cannot
be decrypted. Simply changing the value of the C-bit on a page does not
automatically encrypt its existing contents, and any data in the page
prior to the C-bit modification becomes unintelligible. A page of memory
that is marked encrypted is automatically decrypted when read from DRAM
and automatically encrypted when written to DRAM.

For kdump, it is necessary to distinguish whether the memory is
encrypted, and to know which parts of the memory are encrypted or
unencrypted, so that we can remap the memory appropriately and tell
the CPU how to handle the data (encrypted or unencrypted). For example,
when SME is enabled and the old memory is encrypted, we remap the old
memory in an encrypted way, which automatically decrypts it when we
read through the remapped address.

 ---------------------------------------------------------------
| first kernel         | second kernel        | kdump support |
| (mem_encrypt=on|off) | (mem_encrypt=on|off) | (yes|no)      |
|----------------------+----------------------+---------------|
| on                   | on                   | yes           |
| off                  | off                  | yes           |
| on                   | off                  | no            |
| off                  | on                   | no            |
|______________________|______________________|_______________|

This patch series only covers SME kdump; it does not support SEV kdump.

For kdump with SME, there are two cases that are not supported:
1. SME is enabled in the first kernel, but disabled in the
second kernel.
Because the old memory is encrypted, we cannot decrypt it
if SME is off in the second kernel.

2. SME is disabled in the first kernel, but enabled in the
second kernel.
It is probably unnecessary to support this case: the old memory
is unencrypted and can be dumped as usual, so we don't need to
enable SME in the second kernel; furthermore, the requirement is
rare in actual deployments. In addition, supporting this scenario
would increase the complexity of the code, because we would have to
consider how to transfer the SME flag from the first kernel to the
second kernel, so that the second kernel knows whether the old
memory is encrypted.
There are two ways to transfer the SME flag to the second kernel. The
first is to modify the assembly code, which touches some common code
and involves a long path. The second is to use the kexec tool: the SME
flag would be exported by the first kernel via "proc" or "sysfs", kexec
would read it when loading the image, and the flag would then be saved
in boot_params, so the old memory could be remapped properly according
to the saved flag. Although we could fix this issue, it is probably too
expensive; we won't address it unless someone thinks it is necessary.

Test tools:
makedumpfile[v1.6.3]: https://github.com/LianboJ/makedumpfile
commit e1de103eca8f (A draft for kdump vmcore about AMD SME)
Author: Lianbo Jiang 
Date:   Mon May 14 17:02:40 2018 +0800
Note: This patch can only dump vmcore in the case of SME enabled.

crash-7.2.1: https://github.com/crash-utility/crash.git
commit 1e1bd9c4c1be (Fix for the "bpf" command display on Linux 4.17-rc1)
Author: Dave Anderson 
Date:   Fri May 11 15:54:32 2018 -0400

Test environment:
HP ProLiant DL385Gen10 AMD EPYC 7251
8-Core Processor
32768 MB memory
600 GB disk space

Linux 4.18-rc2:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
commit 7daf201d7fe8 ("Linux 4.18-rc2")
Author: Linus Torvalds 
Date:   Sun Jun 24 20:54:29 2018 +0800

Reference:
AMD64 Architecture Programmer's Manual
https://support.amd.com/TechDocs/24593.pdf

Some changes:
1. remove the sme_active() check in __ioremap_caller().
2. remove the '#ifdef' stuff throughout this patch.
3. put some logic into the early_memremap_pgprot_adjust() and clean the
previous unnecessary changes, for example: arch/x86/include/asm/dmi.h,
arch/x86/kernel/acpi/boot.c, drivers/acpi/tables.c.
4. add a new file and modify Makefile.
5. clean compile warning in copy_device_table() and some compile error.
6. split the original patch into five patches, it will be better for
review.
7. modify elfcorehdr_read().
8. add some comments.

Some known issues:
1. About SME
The upstream kernel doesn't work when we use kexec with the following
command; the system hangs on 'HP ProLiant DL385Gen10 AMD EPYC 7251'.
But it cannot be reproduced on speedway.
(This issue is unrelated to the kdump patches.)

Reproduce steps:
 # kexec -l /boot/vmlinuz-4.18.0-rc2+ --initrd=/boot/initramfs-4.18.0-rc2+.img --command-line="root=/dev/mapper/rhel_hp--dl385g10--03-root

[PATCH 3/5 V4] Remap the device table of IOMMU in encrypted manner for kdump

2018-06-28 Thread Lianbo Jiang
In kdump mode, the kernel copies the IOMMU device table from the old
device table, which is encrypted when SME was enabled in the first
kernel. So we must remap it in an encrypted manner so that it is
automatically decrypted when read.

Signed-off-by: Lianbo Jiang 
---
Some changes:
1. add some comments
2. clean compile warning.
3. remove unnecessary code when we clear sme mask bit.

 drivers/iommu/amd_iommu_init.c | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index 904c575..4cebb00 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -888,12 +888,22 @@ static bool copy_device_table(void)
}
}
 
-   old_devtb_phys = entry & PAGE_MASK;
+   /*
+* When SME is enabled in the first kernel, the entry includes the
+* memory encryption mask(sme_me_mask), we must remove the memory
+* encryption mask to obtain the true physical address in kdump mode.
+*/
+   old_devtb_phys = __sme_clr(entry) & PAGE_MASK;
+
if (old_devtb_phys >= 0x1ULL) {
pr_err("The address of old device table is above 4G, not 
trustworthy!\n");
return false;
}
-   old_devtb = memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
+   old_devtb = (sme_active() && is_kdump_kernel())
+   ? (__force void *)ioremap_encrypted(old_devtb_phys,
+   dev_table_size)
+   : memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
+
if (!old_devtb)
return false;
 
-- 
2.9.5




[PATCH 2/5 V4] Allocate pages for kdump without encryption when SME is enabled

2018-06-28 Thread Lianbo Jiang
When SME is enabled in the first kernel, we allocate the pages for
kdump without encryption in order to be able to boot the second
kernel in the same manner as kexec, which also helps to keep the
code consistent.
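
(For reference, the arch hooks this change leans on; on x86 with SME they are roughly the following — a simplified sketch, see arch/x86/kernel/machine_kexec_64.c for the real implementation:)

	int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
	{
		/*
		 * Clear the C-bit on the kexec/kdump pages so the second
		 * kernel can read them before it sets up any SME handling.
		 */
		return set_memory_decrypted((unsigned long)vaddr, pages);
	}

	void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
	{
		/* Restore the encrypted mapping before freeing the pages. */
		set_memory_encrypted((unsigned long)vaddr, pages);
	}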

Signed-off-by: Lianbo Jiang 
---
Some changes:
1. remove some redundant codes for crash control pages.
2. add some comments.

 kernel/kexec_core.c | 12 
 1 file changed, 12 insertions(+)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 23a83a4..e7efcd1 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -471,6 +471,16 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
}
}
 
+   if (pages) {
+   /*
+* For kdump, we need to ensure that these pages are
+* unencrypted pages if SME is enabled.
+* By the way, it is unnecessary to call the arch_
+* kexec_pre_free_pages(), which will make the code
+* become more simple.
+*/
+   arch_kexec_post_alloc_pages(page_address(pages), 1 << order, 0);
+   }
return pages;
 }
 
@@ -867,6 +877,7 @@ static int kimage_load_crash_segment(struct kimage *image,
result  = -ENOMEM;
goto out;
}
+   arch_kexec_post_alloc_pages(page_address(page), 1, 0);
ptr = kmap(page);
ptr += maddr & ~PAGE_MASK;
mchunk = min_t(size_t, mbytes,
@@ -884,6 +895,7 @@ static int kimage_load_crash_segment(struct kimage *image,
result = copy_from_user(ptr, buf, uchunk);
kexec_flush_icache_page(page);
kunmap(page);
+   arch_kexec_pre_free_pages(page_address(page), 1);
if (result) {
result = -EFAULT;
goto out;
-- 
2.9.5




[PATCH 1/5 V4] Add a function(ioremap_encrypted) for kdump when AMD sme enabled

2018-06-28 Thread Lianbo Jiang
It is convenient to remap the old memory as encrypted in the second
kernel by calling ioremap_encrypted().
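
(A minimal usage sketch, mirroring how patch 5/5 of this series uses the new helper from the kdump kernel:)

	void *vaddr;

	/*
	 * Map one page of old (SME-encrypted) memory; reads through this
	 * mapping are transparently decrypted by the hardware.
	 */
	vaddr = (__force void *)ioremap_encrypted(pfn << PAGE_SHIFT, PAGE_SIZE);
	if (!vaddr)
		return -ENOMEM;

	memcpy(buf, vaddr + offset, csize);
	iounmap((void __iomem *)vaddr);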

Signed-off-by: Lianbo Jiang 
---
Some changes:
1. remove the sme_active() check in __ioremap_caller().
2. revert some logic in the early_memremap_pgprot_adjust() for
early memremap and make it separate a new patch.

 arch/x86/include/asm/io.h |  3 +++
 arch/x86/mm/ioremap.c | 25 +
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 6de6484..f8795f9 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -192,6 +192,9 @@ extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
 #define ioremap_cache ioremap_cache
extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size, unsigned long prot_val);
 #define ioremap_prot ioremap_prot
+extern void __iomem *ioremap_encrypted(resource_size_t phys_addr,
+   unsigned long size);
+#define ioremap_encrypted ioremap_encrypted
 
 /**
  * ioremap -   map bus memory into CPU space
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index c63a545..e01e6c6 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "physaddr.h"
 
@@ -131,7 +132,8 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
  * caller shouldn't need to know that small detail.
  */
 static void __iomem *__ioremap_caller(resource_size_t phys_addr,
-   unsigned long size, enum page_cache_mode pcm, void *caller)
+   unsigned long size, enum page_cache_mode pcm,
+   void *caller, bool encrypted)
 {
unsigned long offset, vaddr;
resource_size_t last_addr;
@@ -199,7 +201,7 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 * resulting mapping.
 */
prot = PAGE_KERNEL_IO;
-   if (sev_active() && mem_flags.desc_other)
+   if ((sev_active() && mem_flags.desc_other) || encrypted)
prot = pgprot_encrypted(prot);
 
switch (pcm) {
@@ -291,7 +293,7 @@ void __iomem *ioremap_nocache(resource_size_t phys_addr, unsigned long size)
enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC_MINUS;
 
return __ioremap_caller(phys_addr, size, pcm,
-   __builtin_return_address(0));
+   __builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_nocache);
 
@@ -324,7 +326,7 @@ void __iomem *ioremap_uc(resource_size_t phys_addr, unsigned long size)
enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC;
 
return __ioremap_caller(phys_addr, size, pcm,
-   __builtin_return_address(0));
+   __builtin_return_address(0), false);
 }
 EXPORT_SYMBOL_GPL(ioremap_uc);
 
@@ -341,7 +343,7 @@ EXPORT_SYMBOL_GPL(ioremap_uc);
 void __iomem *ioremap_wc(resource_size_t phys_addr, unsigned long size)
 {
return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WC,
-   __builtin_return_address(0));
+   __builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_wc);
 
@@ -358,14 +360,21 @@ EXPORT_SYMBOL(ioremap_wc);
 void __iomem *ioremap_wt(resource_size_t phys_addr, unsigned long size)
 {
return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WT,
-   __builtin_return_address(0));
+   __builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_wt);
 
+void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size)
+{
+   return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
+   __builtin_return_address(0), true);
+}
+EXPORT_SYMBOL(ioremap_encrypted);
+
 void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
 {
return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
-   __builtin_return_address(0));
+   __builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_cache);
 
@@ -374,7 +383,7 @@ void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size,
 {
return __ioremap_caller(phys_addr, size,
pgprot2cachemode(__pgprot(prot_val)),
-   __builtin_return_address(0));
+   __builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_prot);
 
-- 
2.9.5

