Re: [PATCH v6 10/34] x86, x86/mm, x86/xen, olpc: Use __va() against just the physical address in cr3

2017-06-09 Thread Tom Lendacky

On 6/9/2017 1:46 PM, Andy Lutomirski wrote:
> On Thu, Jun 8, 2017 at 3:38 PM, Tom Lendacky  wrote:
>> On 6/8/2017 1:05 AM, Andy Lutomirski wrote:
>>> On Wed, Jun 7, 2017 at 12:14 PM, Tom Lendacky  wrote:
>>>> The cr3 register entry can contain the SME encryption bit that indicates
>>>> the PGD is encrypted.  The encryption bit should not be used when
>>>> creating a virtual address for the PGD table.
>>>>
>>>> Create a new function, read_cr3_pa(), that will extract the physical
>>>> address from the cr3 register. This function is then used where a virtual
>>>> address of the PGD needs to be created/used from the cr3 register.
>>>
>>> This is going to conflict with:
>>>
>>> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/pcid&id=555c81e5d01a62b629ec426a2f50d27e2127c1df
>>>
>>> We're both encountering the fact that CR3 munges the page table PA
>>> with some other stuff, and some readers want to see the actual CR3
>>> value and other readers just want the PA.  The thing I prefer about my
>>> patch is that I get rid of read_cr3() entirely, forcing the patch to
>>> update every single reader, making review and conflict resolution much
>>> safer.
>>>
>>> I'd be willing to send a patch tomorrow that just does the split into
>>> __read_cr3() and read_cr3_pa() (I like your name better) and then we
>>> can both base on top of it.  Would that make sense?
>>
>> That makes sense to me.
>
> Draft patch:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/read_cr3&id=9adebbc1071f066421a27b4f6e040190f1049624

Looks good to me. I'll look at how to best mask off the encryption bit
in CR3_ADDR_MASK for SME support.  I should be able to just do an
__sme_clr() against it.

>>> Also:
>>>
>>>> +static inline unsigned long read_cr3_pa(void)
>>>> +{
>>>> +   return (read_cr3() & PHYSICAL_PAGE_MASK);
>>>> +}
>>>
>>> Is there any guarantee that the magic encryption bit is masked out in
>>> PHYSICAL_PAGE_MASK?  The docs make it sound like it could be any bit.
>>> (But if it's one of the low 12 bits, that would be quite confusing.)
>>
>> Right now it's bit 47 and we're steering away from any of the currently
>> reserved bits so we should be safe.
>
> Should the SME init code check that it's a usable bit (i.e. outside
> our physical address mask and not one of the bottom twelve bits)?  If
> some future CPU daftly picks, say, bit 12, we'll regret it if we
> enable SME.

I think I can safely say that it will never be any of the lower 12 bits,
but let me talk to some of the hardware folks and see about the other
end of the range.

Thanks,
Tom

> --Andy
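
[A minimal sketch of how the two pieces discussed above could combine,
assuming the __read_cr3()/CR3_ADDR_MASK split from Andy's draft patch and
the __sme_clr() masking Tom describes; this is not the final merged code:]

    /* Sketch: return just the page table physical address from CR3,
     * dropping the low control bits and the SME encryption bit. */
    static inline unsigned long read_cr3_pa(void)
    {
            return __sme_clr(__read_cr3() & CR3_ADDR_MASK);
    }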





Re: [Xen-devel] [PATCH v6 10/34] x86, x86/mm, x86/xen, olpc: Use __va() against just the physical address in cr3

2017-06-09 Thread Boris Ostrovsky

>>
>> PV guests don't go through Linux x86 early boot code. They start at
>> xen_start_kernel() (well, xen-head.S:startup_xen(), really) and merge
>> with baremetal path at x86_64_start_reservations() (for 64-bit).
>>
>
> Ok, I don't think anything needs to be done then. The sme_me_mask is set
> in sme_enable() which is only called from head_64.S. If the sme_me_mask
> isn't set then SME won't be active. The feature will just report the
> capability of the processor, but that doesn't mean it is active. If you
> still want the feature to be clobbered we can do that, though.

I'd prefer to explicitly clear to avoid any ambiguity.

-boris
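
[A sketch of the explicit clear Boris is asking for, placed in
xen_init_capabilities() as suggested earlier in the thread; the placement
and surrounding code are illustrative, not the merged patch:]

    static void __init xen_init_capabilities(void)
    {
            /* PV guests never run the native early boot path, so SME can
             * never be enabled for them; clear the bit to avoid ambiguity. */
            setup_clear_cpu_cap(X86_FEATURE_SME);
    }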



Re: [Xen-devel] [PATCH v6 10/34] x86, x86/mm, x86/xen, olpc: Use __va() against just the physical address in cr3

2017-06-09 Thread Tom Lendacky

On 6/9/2017 1:43 PM, Boris Ostrovsky wrote:
> On 06/09/2017 02:36 PM, Tom Lendacky wrote:
>> On 6/8/2017 5:01 PM, Andrew Cooper wrote:
>>> On 08/06/2017 22:17, Boris Ostrovsky wrote:
>>>> On 06/08/2017 05:02 PM, Tom Lendacky wrote:
>>>>> On 6/8/2017 3:51 PM, Boris Ostrovsky wrote:
>>>>>>>> What may be needed is making sure X86_FEATURE_SME is not set for PV
>>>>>>>> guests.
>>>>>>> And that may be something that Xen will need to control through
>>>>>>> either CPUID or MSR support for the PV guests.
>>>>>> Only on newer versions of Xen. On earlier versions (2-3 years old)
>>>>>> leaf 0x80000007 is passed to the guest unchanged. And so is
>>>>>> MSR_K8_SYSCFG.
>>>>> The SME feature is in leaf 0x8000001f, is that leaf passed to the
>>>>> guest unchanged?
>>>> Oh, I misread the patch where X86_FEATURE_SME is defined. Then all
>>>> versions, including the current one, pass it unchanged.
>>>>
>>>> All that's needed is setup_clear_cpu_cap(X86_FEATURE_SME) in
>>>> xen_init_capabilities().
>>> AMD processors still don't support CPUID Faulting (or at least, I
>>> couldn't find any reference to it in the latest docs), so we cannot
>>> actually hide SME from a guest which goes looking at native CPUID.
>>> Furthermore, I'm not aware of any CPUID masking support covering that
>>> leaf.
>>>
>>> However, if Linux is using the paravirtual cpuid hook, things are
>>> slightly better.
>>>
>>> On Xen 4.9 and later, no guests will see the feature.  On earlier
>>> versions of Xen (before I fixed the logic), plain domUs will not see the
>>> feature, while dom0 will.
>>>
>>> For safety, I'd recommend unilaterally clobbering the feature as Boris
>>> suggested.  There is no way SME will be supportable on a per-PV guest
>> That may be too late. Early boot support in head_64.S will make calls to
>> check for the feature (through CPUID and MSR), set the sme_me_mask and
>> encrypt the kernel in place. Is there another way to approach this?
> PV guests don't go through Linux x86 early boot code. They start at
> xen_start_kernel() (well, xen-head.S:startup_xen(), really) and merge
> with baremetal path at x86_64_start_reservations() (for 64-bit).

Ok, I don't think anything needs to be done then. The sme_me_mask is set
in sme_enable() which is only called from head_64.S. If the sme_me_mask
isn't set then SME won't be active. The feature will just report the
capability of the processor, but that doesn't mean it is active. If you
still want the feature to be clobbered we can do that, though.

Thanks,
Tom

> -boris

>>> basis, although (as far as I am aware) Xen as a whole would be able to
>>> encompass itself and all of its PV guests inside one single SME
>>> instance.
>> Yes, that is correct.
>>
>> Thanks,
>> Tom
>>
>>> ~Andrew
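
[Tom's point that the feature flag only reports capability can be captured
in a short sketch, assuming the sme_me_mask/sme_enable() names used above;
the helper name here is illustrative, not necessarily the patch's:]

    /* Sketch: SME is active only if early boot actually set the mask. */
    extern unsigned long sme_me_mask;  /* set by sme_enable(), head_64.S only */

    static inline bool sme_is_active(void)
    {
            /* PV guests never run head_64.S, so for them this stays false. */
            return sme_me_mask != 0;
    }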







Re: [Xen-devel] [PATCH v6 10/34] x86, x86/mm, x86/xen, olpc: Use __va() against just the physical address in cr3

2017-06-09 Thread Andrew Cooper
On 09/06/17 19:43, Boris Ostrovsky wrote:
> On 06/09/2017 02:36 PM, Tom Lendacky wrote:
>>> basis, although (as far as I am aware) Xen as a whole would be able to
>>> encompass itself and all of its PV guests inside one single SME
>>> instance.
>> Yes, that is correct.

Thinking more about this, it would only be possible if all the PV guests
were SME-aware and understood not to choke when they find a frame with a
high address bit set.

I expect the only viable way to implement this (should we wish) is to
have PV guests explicitly signal support (probably via an ELF note),
after which the guest needs to know about the existence of SME and the
meaning of the encrypted bit in PTEs, and to defer all configuration
responsibility to Xen.

~Andrew
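
[For illustration: PV guests already advertise capabilities to Xen via ELF
notes embedded in the kernel image. A hypothetical SME-awareness note,
following the existing pattern in arch/x86/xen/xen-head.S, might look like
this — XEN_ELFNOTE_SME_AWARE is invented here for illustration and is not a
real note type:]

    /* Hypothetical: signal to Xen that this PV kernel understands
     * SME-encrypted frames and defers SME configuration to Xen. */
    #include <linux/elfnote.h>

    ELFNOTE(Xen, XEN_ELFNOTE_SME_AWARE, .long 1)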



Re: [PATCH v6 10/34] x86, x86/mm, x86/xen, olpc: Use __va() against just the physical address in cr3

2017-06-09 Thread Andy Lutomirski
On Thu, Jun 8, 2017 at 3:38 PM, Tom Lendacky  wrote:
> On 6/8/2017 1:05 AM, Andy Lutomirski wrote:
>>
>> On Wed, Jun 7, 2017 at 12:14 PM, Tom Lendacky 
>> wrote:
>>>
>>> The cr3 register entry can contain the SME encryption bit that indicates
>>> the PGD is encrypted.  The encryption bit should not be used when
>>> creating
>>> a virtual address for the PGD table.
>>>
>>> Create a new function, read_cr3_pa(), that will extract the physical
>>> address from the cr3 register. This function is then used where a virtual
>>> address of the PGD needs to be created/used from the cr3 register.
>>
>>
>> This is going to conflict with:
>>
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/pcid&id=555c81e5d01a62b629ec426a2f50d27e2127c1df
>>
>> We're both encountering the fact that CR3 munges the page table PA
>> with some other stuff, and some readers want to see the actual CR3
>> value and other readers just want the PA.  The thing I prefer about my
>> patch is that I get rid of read_cr3() entirely, forcing the patch to
>> update every single reader, making review and conflict resolution much
>> safer.
>>
>> I'd be willing to send a patch tomorrow that just does the split into
>> __read_cr3() and read_cr3_pa() (I like your name better) and then we
>> can both base on top of it.  Would that make sense?
>
>
> That makes sense to me.

Draft patch:

https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/read_cr3&id=9adebbc1071f066421a27b4f6e040190f1049624

>
>>
>> Also:
>>
>>> +static inline unsigned long read_cr3_pa(void)
>>> +{
>>> +   return (read_cr3() & PHYSICAL_PAGE_MASK);
>>> +}
>>
>>
>> Is there any guarantee that the magic encryption bit is masked out in
>> PHYSICAL_PAGE_MASK?  The docs make it sound like it could be any bit.
>> (But if it's one of the low 12 bits, that would be quite confusing.)
>
>
> Right now it's bit 47 and we're steering away from any of the currently
> reserved bits so we should be safe.

Should the SME init code check that it's a usable bit (i.e. outside
our physical address mask and not one of the bottom twelve bits)?  If
some future CPU daftly picks, say, bit 12, we'll regret it if we
enable SME.

--Andy
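
[A sketch of the sanity check Andy proposes, assuming the encryption-bit
position is read from CPUID 0x8000001f EBX[5:0] (the field the SME patches
use) and that x86_phys_bits reflects the usable physical address width:]

    /* Sketch: refuse to enable SME if the encryption bit would collide
     * with the page-offset bits or with real physical address bits. */
    unsigned int me_bit = cpuid_ebx(0x8000001f) & 0x3f;

    if (me_bit < 12 || me_bit < boot_cpu_data.x86_phys_bits)
            return;            /* leave sme_me_mask clear, SME stays off */

    sme_me_mask = 1UL << me_bit;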



Re: [Xen-devel] [PATCH v6 10/34] x86, x86/mm, x86/xen, olpc: Use __va() against just the physical address in cr3

2017-06-09 Thread Boris Ostrovsky
On 06/09/2017 02:36 PM, Tom Lendacky wrote:
> On 6/8/2017 5:01 PM, Andrew Cooper wrote:
>> On 08/06/2017 22:17, Boris Ostrovsky wrote:
>>> On 06/08/2017 05:02 PM, Tom Lendacky wrote:
>>>> On 6/8/2017 3:51 PM, Boris Ostrovsky wrote:
>>>>>>> What may be needed is making sure X86_FEATURE_SME is not set for PV
>>>>>>> guests.
>>>>>> And that may be something that Xen will need to control through
>>>>>> either CPUID or MSR support for the PV guests.
>>>>> Only on newer versions of Xen. On earlier versions (2-3 years old)
>>>>> leaf 0x80000007 is passed to the guest unchanged. And so is
>>>>> MSR_K8_SYSCFG.
>>>> The SME feature is in leaf 0x8000001f, is that leaf passed to the
>>>> guest unchanged?
>>> Oh, I misread the patch where X86_FEATURE_SME is defined. Then all
>>> versions, including the current one, pass it unchanged.
>>>
>>> All that's needed is setup_clear_cpu_cap(X86_FEATURE_SME) in
>>> xen_init_capabilities().
>>
>> AMD processors still don't support CPUID Faulting (or at least, I
>> couldn't find any reference to it in the latest docs), so we cannot
>> actually hide SME from a guest which goes looking at native CPUID.
>> Furthermore, I'm not aware of any CPUID masking support covering that
>> leaf.
>>
>> However, if Linux is using the paravirtual cpuid hook, things are
>> slightly better.
>>
>> On Xen 4.9 and later, no guests will see the feature.  On earlier
>> versions of Xen (before I fixed the logic), plain domUs will not see the
>> feature, while dom0 will.
>>
>> For safety, I'd recommend unilaterally clobbering the feature as Boris
>> suggested.  There is no way SME will be supportable on a per-PV guest
>
> That may be too late. Early boot support in head_64.S will make calls to
> check for the feature (through CPUID and MSR), set the sme_me_mask and
> encrypt the kernel in place. Is there another way to approach this?


PV guests don't go through Linux x86 early boot code. They start at
> xen_start_kernel() (well, xen-head.S:startup_xen(), really) and merge
with baremetal path at x86_64_start_reservations() (for 64-bit).


-boris

>
>> basis, although (as far as I am aware) Xen as a whole would be able to
>> encompass itself and all of its PV guests inside one single SME
>> instance.
>
> Yes, that is correct.
>
> Thanks,
> Tom
>
>>
>> ~Andrew
>>




Re: [Xen-devel] [PATCH v6 10/34] x86, x86/mm, x86/xen, olpc: Use __va() against just the physical address in cr3

2017-06-09 Thread Tom Lendacky

On 6/8/2017 5:01 PM, Andrew Cooper wrote:
> On 08/06/2017 22:17, Boris Ostrovsky wrote:
>> On 06/08/2017 05:02 PM, Tom Lendacky wrote:
>>> On 6/8/2017 3:51 PM, Boris Ostrovsky wrote:
>>>>>> What may be needed is making sure X86_FEATURE_SME is not set for PV
>>>>>> guests.
>>>>> And that may be something that Xen will need to control through either
>>>>> CPUID or MSR support for the PV guests.
>>>> Only on newer versions of Xen. On earlier versions (2-3 years old) leaf
>>>> 0x80000007 is passed to the guest unchanged. And so is MSR_K8_SYSCFG.
>>> The SME feature is in leaf 0x8000001f, is that leaf passed to the guest
>>> unchanged?
>> Oh, I misread the patch where X86_FEATURE_SME is defined. Then all
>> versions, including the current one, pass it unchanged.
>>
>> All that's needed is setup_clear_cpu_cap(X86_FEATURE_SME) in
>> xen_init_capabilities().
> AMD processors still don't support CPUID Faulting (or at least, I
> couldn't find any reference to it in the latest docs), so we cannot
> actually hide SME from a guest which goes looking at native CPUID.
> Furthermore, I'm not aware of any CPUID masking support covering that leaf.
>
> However, if Linux is using the paravirtual cpuid hook, things are
> slightly better.
>
> On Xen 4.9 and later, no guests will see the feature.  On earlier
> versions of Xen (before I fixed the logic), plain domUs will not see the
> feature, while dom0 will.
>
> For safety, I'd recommend unilaterally clobbering the feature as Boris
> suggested.  There is no way SME will be supportable on a per-PV guest

That may be too late. Early boot support in head_64.S will make calls to
check for the feature (through CPUID and MSR), set the sme_me_mask and
encrypt the kernel in place. Is there another way to approach this?

> basis, although (as far as I am aware) Xen as a whole would be able to
> encompass itself and all of its PV guests inside one single SME instance.

Yes, that is correct.

Thanks,
Tom

> ~Andrew





Re: [PATCH v6 06/34] x86/mm: Add Secure Memory Encryption (SME) support

2017-06-09 Thread Borislav Petkov
On Wed, Jun 07, 2017 at 02:14:16PM -0500, Tom Lendacky wrote:
> Add support for Secure Memory Encryption (SME). This initial support
> provides a Kconfig entry to build the SME support into the kernel and
> defines the memory encryption mask that will be used in subsequent
> patches to mark pages as encrypted.
> 
> Signed-off-by: Tom Lendacky 
> ---
>  arch/x86/Kconfig   |   22 ++
>  arch/x86/include/asm/mem_encrypt.h |   35 +++
>  arch/x86/mm/Makefile   |1 +
>  arch/x86/mm/mem_encrypt.c  |   21 +
>  include/asm-generic/mem_encrypt.h  |   27 +++
>  include/linux/mem_encrypt.h|   18 ++
>  6 files changed, 124 insertions(+)
>  create mode 100644 arch/x86/include/asm/mem_encrypt.h
>  create mode 100644 arch/x86/mm/mem_encrypt.c
>  create mode 100644 include/asm-generic/mem_encrypt.h
>  create mode 100644 include/linux/mem_encrypt.h

Reviewed-by: Borislav Petkov 

-- 
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
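
[The patch body is not reproduced in this digest. As a rough sketch of what
the description says it adds — a Kconfig-gated encryption mask; names follow
the patch description and the other threads here, and details may differ
from the actual diff:]

    /* arch/x86/include/asm/mem_encrypt.h (sketch) */
    #ifdef CONFIG_AMD_MEM_ENCRYPT
    extern unsigned long sme_me_mask;  /* OR'd into PTEs to mark pages encrypted */
    #else
    #define sme_me_mask 0UL            /* SME compiled out: mask is always zero */
    #endif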



[ANNOUNCE] kexec-tools 2.0.15-rc1

2017-06-09 Thread Simon Horman
Hi all,

I am happy to announce the release of kexec-tools 2.0.15-rc1.

This is an incremental feature pre-release.

So long as no serious problems arise I intend to release kexec-tools 2.0.15
in a week's time. Testing of 2.0.15-rc1 would be greatly appreciated.

I do not have any outstanding changes for 2.0.15 at this time, and I would
like to accept only bug fixes for now, taking feature patches once 2.0.15
has been released.

The pre-release can be downloaded from kernel.org:


https://kernel.org/pub/linux/utils/kernel/kexec/kexec-tools-2.0.15-rc1.tar.xz
https://kernel.org/pub/linux/utils/kernel/kexec/

I have also tagged it in git:

https://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git

Thanks to everyone who has contributed to kexec.


For reference the changes since v2.0.14 are:

302e1e362753 kexec-tools 2.0.15-rc1
fab91841c717 Handle additional e820 memmap type strings
c504ff5d85aa arm64: kdump: Add support for binary image files
5f955585c7c9 arm64: kdump: add DT properties to crash dump kernel's dtb
defad947feff arm64: kdump: set up other segments
1591926df5a6 arm64: kdump: set up kernel image segment
0bd5219da953 arm64: kdump: add elf core header segment
c0672c93edcb arm64: kdump: identify memory regions
a17234fe94bc arm64: change return values on error to negative
ef26cc33b8d6 arm64: identify PHYS_OFFSET correctly
325804055e99 kexec: generalize and rename get_kernel_stext_sym()
c95df0e099b1 kexec: extend the semantics of kexec_iomem_for_each_line
59d3e5b5ad6f Fix broken Xen support in configure.ac
4a6d67d9e938 x86: Support large number of memory ranges
0516f46adbf3 crashdump: Remove stray get_crashkernel_region() declaration
82a49747e5ad ppc: Fix format warning with die()
1550f81bf188 x86/x86_64: Fix format warning with die()
47cc70157c66 Don't use %L width specifier with integer values
2f6f6d6fef78 vmcore-dmesg: Define _GNU_SOURCE
896fb2aa30c6 arm64: add uImage support
a0c575793b86 uImage: use 'char *' instead of 'unsigned char *' for uImage_probe()
f25146afc5a9 uImage: use 'char *' instead of 'unsigned char *' for uImage_load()
b3d533c1f499 uImage: Add new IH_ARCH_xxx definitions
67234243bb91 uImage: Fix uImage_load() for little-endian machines
0cc1891c4dc8 uImage: fix realloc() pointer confusion
ed15ba1b9977 build_mem_phdrs(): check if p_paddr is invalid
263e45ccf27b Only print debug message when failed to serach for kernel symbol from /proc/kallsyms
7dac152d5b47 gitignore: add two generated files in purgatory
05ae4fb2e354 crashdump/sh: Add get_crash_kernel_load_range() function
796f0ffa134d crashdump/s390: Add get_crash_kernel_load_range() function
e4280e22c8c4 crashdump/ppc64: Add get_crash_kernel_load_range() function
7fc80cfcd913 crashdump/ppc: Add get_crash_kernel_load_range() function
d2caf4c4c43b crashdump/mips: Add get_crash_kernel_load_range() function
14d71e51e5c9 crashdump/m68k: Add get_crash_kernel_load_range() function
5c80bd9be295 crashdump/ia64: Add get_crash_kernel_load_range() function
b6af22826f60 crashdump/cris: Add get_crash_kernel_load_range() function
d43610084164 crashdump/arm64: Add get_crash_kernel_load_range() function
cfcf60c38182 crashdump/arm: Add get_crash_kernel_load_range() function
76b31203222a kexec: Add option to get crash kernel region size
e49623b0787d purgatory: Add purgatory.map and purgatory.ro.sym to clean recipe
ceedb33e6cd3 kexec: Remove redundant space from help message
2cf7cb9a6080 kexec: implemented XEN KEXEC STATUS to determine if an image is loaded
4eaa36cd01a9 alpha: add missing __NR_kexec_load definition
24aa2d93cac3 kexec: Increase the upper limit for RAM segments
f63d8530b9b6 ppc64: Reduce number of ELF LOAD segments
9da19c0a6f49 kexec-tools 2.0.14.git



Re: [PATCH v6 05/34] x86/CPU/AMD: Handle SME reduction in physical address size

2017-06-09 Thread Borislav Petkov
On Wed, Jun 07, 2017 at 02:14:04PM -0500, Tom Lendacky wrote:
> When System Memory Encryption (SME) is enabled, the physical address
> space is reduced. Adjust the x86_phys_bits value to reflect this
> reduction.
> 
> Signed-off-by: Tom Lendacky 
> ---
>  arch/x86/kernel/cpu/amd.c |   10 +++---
>  1 file changed, 7 insertions(+), 3 deletions(-)

Reviewed-by: Borislav Petkov 

-- 
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
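
[A sketch of the adjustment the patch makes, assuming the reduction value is
read from CPUID 0x8000001f EBX[11:6] as documented for SME; the actual
patch's placement in arch/x86/kernel/cpu/amd.c may differ:]

    /* Sketch: when SME is enabled, the encryption bit (and any bits above
     * it) are stolen from the physical address space, so shrink the
     * reported physical address width accordingly. */
    if (sme_me_mask) {
            u32 ebx = cpuid_ebx(0x8000001f);

            c->x86_phys_bits -= (ebx >> 6) & 0x3f;  /* PhysAddrReduction */
    }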



Re: [PATCH v6 04/34] x86/CPU/AMD: Add the Secure Memory Encryption CPU feature

2017-06-09 Thread Borislav Petkov
On Wed, Jun 07, 2017 at 02:13:53PM -0500, Tom Lendacky wrote:
> Update the CPU features to include identifying and reporting on the
> Secure Memory Encryption (SME) feature.  SME is identified by CPUID
> 0x801f, but requires BIOS support to enable it (set bit 23 of
> MSR_K8_SYSCFG).  Only show the SME feature as available if reported by
> CPUID and enabled by BIOS.
> 
> Signed-off-by: Tom Lendacky 
> ---
>  arch/x86/include/asm/cpufeatures.h |1 +
>  arch/x86/include/asm/msr-index.h   |2 ++
>  arch/x86/kernel/cpu/amd.c  |   13 +
>  arch/x86/kernel/cpu/scattered.c|1 +
>  4 files changed, 17 insertions(+)

Reviewed-by: Borislav Petkov 

-- 
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
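
[The detection sequence in the patch description — CPUID capability plus
BIOS enablement through bit 23 of MSR_K8_SYSCFG — corresponds roughly to
this sketch; the actual patch may structure it differently:]

    /* Sketch: report SME only when the CPU supports it *and* the BIOS
     * has enabled it in SYSCFG. */
    if (cpuid_eax(0x8000001f) & BIT(0)) {   /* Fn8000_001F[EAX] bit 0 = SME */
            u64 msr;

            rdmsrl(MSR_K8_SYSCFG, msr);
            if (msr & BIT_ULL(23))          /* BIOS set the enable bit */
                    set_cpu_cap(c, X86_FEATURE_SME);
    }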



Re: [PATCH] s390/crash: Fix KEXEC_NOTE_BYTES definition

2017-06-09 Thread Dave Young
On 06/09/17 at 10:29am, Dave Young wrote:
> On 06/09/17 at 10:17am, Xunlei Pang wrote:
> > S390 KEXEC_NOTE_BYTES is not used by note_buf_t as before, which
> > is now defined as follows:
> > typedef u32 note_buf_t[CRASH_CORE_NOTE_BYTES/4];
> > It was changed by the CONFIG_CRASH_CORE feature.
> > 
> > This patch gets rid of all the old KEXEC_NOTE_BYTES stuff, and
> > renames KEXEC_NOTE_BYTES to CRASH_CORE_NOTE_BYTES for S390.
> > 
> > Fixes: 692f66f26a4c ("crash: move crashkernel parsing and vmcore related code under CONFIG_CRASH_CORE")
> > Cc: Dave Young 
> > Cc: Dave Anderson 
> > Cc: Hari Bathini 
> > Cc: Gustavo Luiz Duarte 
> > Signed-off-by: Xunlei Pang 
> > ---
> >  arch/s390/include/asm/kexec.h |  2 +-
> >  include/linux/crash_core.h|  7 +++
> >  include/linux/kexec.h | 11 +--
> >  3 files changed, 9 insertions(+), 11 deletions(-)
> > 
> > diff --git a/arch/s390/include/asm/kexec.h b/arch/s390/include/asm/kexec.h
> > index 2f924bc..352deb8 100644
> > --- a/arch/s390/include/asm/kexec.h
> > +++ b/arch/s390/include/asm/kexec.h
> > @@ -47,7 +47,7 @@
> >   * Seven notes plus zero note at the end: prstatus, fpregset, timer,
> >   * tod_cmp, tod_reg, control regs, and prefix
> >   */
> > -#define KEXEC_NOTE_BYTES \
> > +#define CRASH_CORE_NOTE_BYTES \
> > (ALIGN(sizeof(struct elf_note), 4) * 8 + \
> >  ALIGN(sizeof("CORE"), 4) * 7 + \
> >  ALIGN(sizeof(struct elf_prstatus), 4) + \

I found that in mainline, since the commit below, the above define should be
useless. A distribution with an older kernel may still need your fix, but in
mainline the right fix should be dropping the s390 usage of these macros.

Anyway, this needs a comment from Michael.

commit 8a07dd02d7615d91d65d6235f7232e3f9b5d347f
Author: Martin Schwidefsky 
Date:   Wed Oct 14 15:53:06 2015 +0200

s390/kdump: remove code to create ELF notes in the crashed system

The s390 architecture can store the CPU registers of the crashed
system after the kdump kernel has been started and this is the
preferred way. Remove the remaining code fragments that deal with
storing CPU registers while the crashed system is still active.

Acked-by: Michael Holzheu 
Signed-off-by: Martin Schwidefsky 


> > diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
> > index e9de6b4..dbc6e5c 100644
> > --- a/include/linux/crash_core.h
> > +++ b/include/linux/crash_core.h
> > @@ -10,9 +10,16 @@
> >  #define CRASH_CORE_NOTE_NAME_BYTES ALIGN(sizeof(CRASH_CORE_NOTE_NAME), 4)
> >  #define CRASH_CORE_NOTE_DESC_BYTES ALIGN(sizeof(struct elf_prstatus), 4)
> >  
> > +/*
> > + * The per-cpu notes area is a list of notes terminated by a "NULL"
> > + * note header.  For kdump, the code in vmcore.c runs in the context
> > + * of the second kernel to combine them into one note.
> > + */
> > +#ifndef CRASH_CORE_NOTE_BYTES
> > +#define CRASH_CORE_NOTE_BYTES ((CRASH_CORE_NOTE_HEAD_BYTES * 2) + \
> >  CRASH_CORE_NOTE_NAME_BYTES +   \
> >  CRASH_CORE_NOTE_DESC_BYTES)
> > +#endif
> >  
> >  #define VMCOREINFO_BYTES  PAGE_SIZE
> >  #define VMCOREINFO_NOTE_NAME  "VMCOREINFO"
> > diff --git a/include/linux/kexec.h b/include/linux/kexec.h
> > index 3ea8275..133df03 100644
> > --- a/include/linux/kexec.h
> > +++ b/include/linux/kexec.h
> > @@ -14,7 +14,6 @@
> >  
> >  #if !defined(__ASSEMBLY__)
> >  
> > -#include 
> >  #include 
> >  
> >  #include 
> > @@ -25,6 +24,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> >  
> >  /* Verify architecture specific macros are defined */
> >  
> > @@ -63,15 +63,6 @@
> >  #define KEXEC_CORE_NOTE_NAME   CRASH_CORE_NOTE_NAME
> >  
> >  /*
> > - * The per-cpu notes area is a list of notes terminated by a "NULL"
> > - * note header.  For kdump, the code in vmcore.c runs in the context
> > - * of the second kernel to combine them into one note.
> > - */
> > -#ifndef KEXEC_NOTE_BYTES
> > -#define KEXEC_NOTE_BYTES   CRASH_CORE_NOTE_BYTES
> > -#endif
> 
> It is still not clear how s390 uses the crash_notes apart from this macro.
> But from code point of view we do need to update this as well after the
> crash_core splitting.
> 
> Acked-by: Dave Young 

Holding off on the ack because of the new findings; let's wait for Michael's
feedback.

Thanks
Dave
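
[For context, the note_buf_t definition the patch description quotes is what
ties these macros together after the CONFIG_CRASH_CORE split:]

    /* From the patch description: the per-cpu crash notes buffer is now
     * sized by the crash_core macro rather than KEXEC_NOTE_BYTES. */
    typedef u32 note_buf_t[CRASH_CORE_NOTE_BYTES/4];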
