Re: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-27 Thread Dan Williams
On Tue, Jun 27, 2017 at 3:04 PM, Luck, Tony  wrote:
>> > > > +	if (set_memory_np(decoy_addr, 1))
>> > > > +		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map \n",
>>
>> Another concept to consider is mapping the page as UC rather than
>> completely unmapping it.
>
> UC would also avoid the speculative prefetch issue.  The SDM, Vol 3,
> Section 11.3, says:
>
> Strong Uncacheable (UC) - System memory locations are not cached. All reads
> and writes appear on the system bus and are executed in program order
> without reordering. No speculative memory accesses, pagetable walks, or
> prefetches of speculated branch targets are made. This type of cache-control
> is useful for memory-mapped I/O devices. When used with normal RAM, it
> greatly reduces processor performance.
>
> But then I went and read the code for set_memory_uc() ... which calls
> reserve_memtype(), which does all kinds of things to avoid issues with MTRRs
> and other stuff.  Which all looks really more complex than we need just here.
>
>> The uncorrectable error scope could be smaller than a page size, like:
>> * memory ECC width (e.g., 8 bytes)
>> * cache line size (e.g., 64 bytes)
>> * block device logical block size (e.g., 512 bytes, for persistent memory)
>>
>> UC preserves the ability to access adjacent data within the page that
>> hasn't gone bad, and is particularly useful for persistent memory.
>
> If you want to dig into the non-poisoned pieces of the page later, it might be
> better to set up a new scratch UC mapping to do that.
>
> My takeaway from Dan's comments on unpoisoning is that this isn't the context
> in which he wants to do that.  He'd rather wait until he has somebody
> overwriting the page with fresh data.
>
> So I think I'd like to keep the patch as-is.

Yes, the persistent-memory poison interactions should be handled
separately and not hold up this patch for the normal system-memory
case. We might dovetail support for this into stray write protection,
where we unmap all of pmem while nothing in the kernel is actively
accessing it.


RE: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-27 Thread Luck, Tony
> > > > +	if (set_memory_np(decoy_addr, 1))
> > > > +		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map \n",
>
> Another concept to consider is mapping the page as UC rather than
> completely unmapping it.

UC would also avoid the speculative prefetch issue.  The SDM, Vol 3,
Section 11.3, says:

Strong Uncacheable (UC) - System memory locations are not cached. All reads
and writes appear on the system bus and are executed in program order without
reordering. No speculative memory accesses, pagetable walks, or prefetches of
speculated branch targets are made. This type of cache-control is useful for
memory-mapped I/O devices. When used with normal RAM, it greatly reduces
processor performance.

But then I went and read the code for set_memory_uc() ... which calls
reserve_memtype(), which does all kinds of things to avoid issues with MTRRs
and other stuff.  Which all looks really more complex than we need just here.
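
For illustration, a minimal sketch of what that simpler UC alternative might
look like if the memtype machinery were acceptable (an editor's sketch, not
part of the patch; the helper name is made up, and it skips the decoy-address
trick, so the live 1:1 virtual address would sit in registers, which the patch
deliberately avoids):

/* Sketch only: keep the poisoned page mapped, but uncacheable.
 * Needs <asm/set_memory.h> for set_memory_uc().
 */
static void mark_kpfn_uc(unsigned long pfn)
{
	unsigned long kaddr = (unsigned long)pfn_to_kaddr(pfn);

	if (set_memory_uc(kaddr, 1))
		pr_warn("Could not mark pfn=%#lx UC in 1:1 map\n", pfn);
}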

> The uncorrectable error scope could be smaller than a page size, like:
> * memory ECC width (e.g., 8 bytes)
> * cache line size (e.g., 64 bytes)
> * block device logical block size (e.g., 512 bytes, for persistent memory)
>
> UC preserves the ability to access adjacent data within the page that
> hasn't gone bad, and is particularly useful for persistent memory.

If you want to dig into the non-poisoned pieces of the page later, it might be
better to set up a new scratch UC mapping to do that.
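
As a concrete sketch of that scratch-mapping idea (again an editor's
illustration, not from the patch: the helper name is made up, and since x86
refuses ioremap() of pages it treats as ordinary System RAM this would mainly
apply to pmem-style ranges; the caller must also steer clear of the poisoned
cache lines):

#include <linux/io.h>
#include <linux/pfn.h>

/* Copy 'len' known-good bytes at 'offset' within a poisoned page. */
static int copy_good_bytes(unsigned long pfn, unsigned long offset,
			   void *dst, size_t len)
{
	void __iomem *va = ioremap_uc(PFN_PHYS(pfn), PAGE_SIZE);

	if (!va)
		return -ENOMEM;
	memcpy_fromio(dst, va + offset, len);
	iounmap(va);
	return 0;
}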

My takeaway from Dan's comments on unpoisoning is that this isn't the context
in which he wants to do that.  He'd rather wait until he has somebody
overwriting the page with fresh data.

So I think I'd like to keep the patch as-is.

-Tony


RE: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-23 Thread Elliott, Robert (Persistent Memory)
> > > +	if (set_memory_np(decoy_addr, 1))
> > > +		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map \n",

Another concept to consider is mapping the page as UC rather than
completely unmapping it.

The uncorrectable error scope could be smaller than a page size, like:
* memory ECC width (e.g., 8 bytes)
* cache line size (e.g., 64 bytes)
* block device logical block size (e.g., 512 bytes, for persistent memory)

UC preserves the ability to access adjacent data within the page that
hasn't gone bad, and is particularly useful for persistent memory.

---
Robert Elliott, HPE Persistent Memory




Re: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-23 Thread Luck, Tony
On Thu, Jun 22, 2017 at 10:07:18PM -0700, Dan Williams wrote:
> On Wed, Jun 21, 2017 at 1:30 PM, Luck, Tony  wrote:
> >> Persistent memory does have unpoisoning and would require this inverse
> >> operation - see drivers/nvdimm/pmem.c pmem_clear_poison() and core.c
> >> nvdimm_clear_poison().
> >
> > Nice.  Well this code will need to cooperate with that ... in particular if
> > the page is in an area that can be unpoisoned ... then we should do that
> > *instead* of marking the page not present (which breaks up huge/large pages
> > and so affects performance).
> >
> > Instead of calling it "arch_unmap_pfn" it could be called something like
> > arch_handle_poison() and do something like:
> >
> > void arch_handle_poison(unsigned long pfn)
> > {
> > 	if this is a pmem page && pmem_clear_poison(pfn)
> > 		return
> > 	if this is a nvdimm page && nvdimm_clear_poison(pfn)
> > 		return
> > 	/* can't clear, map out from 1:1 region */
> > 	... code from my patch ...
> > }
> >
> > I'm just not sure how those first two "if" bits work ... particularly in
> > terms of CONFIG dependencies and system capabilities.  Perhaps each of pmem
> > and nvdimm could register their unpoison functions and this code could just
> > call each in turn?
> 
> We don't unpoison pmem without new data to write in its place. In what
> context is arch_handle_poison() called? Ideally we only "clear" poison
> when we know we are trying to write zeroes over the poisoned range.

Context is that of the process that did the access (but we've moved
off the machine check stack and are now in normal kernel context).
We are about to unmap this page from all applications that are
using it.  But they may still be running ... so now is a bad time to
clear the poison: they might access the page and not get a signal.

If I move this code to after all the users' PTEs have been cleared
and TLBs flushed, then it would be safe to try to unpoison the page
and not invalidate it from the 1:1 mapping.

But I'm not sure what happens next. For a normal DDR4 page I could
put it back on the free list and allow it to be re-used. But for
PMEM you have some other cleanup that you need to do to mark the
block as lost from your file system.

Is this too early for you to be able to do that?

-Tony


Re: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-22 Thread Dan Williams
On Wed, Jun 21, 2017 at 1:30 PM, Luck, Tony  wrote:
>> Persistent memory does have unpoisoning and would require this inverse
>> operation - see drivers/nvdimm/pmem.c pmem_clear_poison() and core.c
>> nvdimm_clear_poison().
>
> Nice.  Well this code will need to cooperate with that ... in particular if
> the page is in an area that can be unpoisoned ... then we should do that
> *instead* of marking the page not present (which breaks up huge/large pages
> and so affects performance).
>
> Instead of calling it "arch_unmap_pfn" it could be called something like
> arch_handle_poison() and do something like:
>
> void arch_handle_poison(unsigned long pfn)
> {
> 	if this is a pmem page && pmem_clear_poison(pfn)
> 		return
> 	if this is a nvdimm page && nvdimm_clear_poison(pfn)
> 		return
> 	/* can't clear, map out from 1:1 region */
> 	... code from my patch ...
> }
>
> I'm just not sure how those first two "if" bits work ... particularly in
> terms of CONFIG dependencies and system capabilities.  Perhaps each of pmem
> and nvdimm could register their unpoison functions and this code could just
> call each in turn?

We don't unpoison pmem without new data to write in its place. In what
context is arch_handle_poison() called? Ideally we only "clear" poison
when we know we are trying to write zeroes over the poisoned range.


Re: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-22 Thread Borislav Petkov
On Wed, Jun 21, 2017 at 10:47:40AM -0700, Luck, Tony wrote:
> I would if I could work out how to use it. From reading the manual
> page there seem to be a few options to this, but none of them appear
> to just drop a specific address (apart from my own). :-(

$ git send-email --to ... --cc ... --cc ... --suppress-cc=all ...

That should send only to the ones you have in --to and --cc and suppress
the rest.

Do a

$ git send-email -v --dry-run --to ... --cc ... --cc ... --suppress-cc=all ...

to see what it is going to do.

> I'd assume that other X86 implementations would face similar issues (unless
> they have extremely cautious pre-fetchers and/or no speculation).
> 
> I'm also assuming that non-X86 architectures that do recovery may want this
> too ... hence hooking the arch_unmap_kpfn() function into the generic
> memory_failure() code.

Which means that you could move the function to generic
mm/memory-failure.c code after making the decoy_addr computation
generic.

I'd still like to hear some sort of confirmation from other
vendors/arches whether it makes sense for them too, though.

I mean, if they don't do speculative accesses, then it probably doesn't
even matter - the page is inaccessible anyway - but still...

-- 
Regards/Gruss,
Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)
-- 


RE: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-21 Thread Luck, Tony
> Persistent memory does have unpoisoning and would require this inverse
> operation - see drivers/nvdimm/pmem.c pmem_clear_poison() and core.c
> nvdimm_clear_poison().

Nice.  Well this code will need to cooperate with that ... in particular if the
page is in an area that can be unpoisoned ... then we should do that *instead*
of marking the page not present (which breaks up huge/large pages and so
affects performance).

Instead of calling it "arch_unmap_pfn" it could be called something like
arch_handle_poison() and do something like:

void arch_handle_poison(unsigned long pfn)
{
	if this is a pmem page && pmem_clear_poison(pfn)
		return
	if this is a nvdimm page && nvdimm_clear_poison(pfn)
		return
	/* can't clear, map out from 1:1 region */
	... code from my patch ...
}

I'm just not sure how those first two "if" bits work ... particularly in terms
of CONFIG dependencies and system capabilities.  Perhaps each of pmem and
nvdimm could register their unpoison functions and this code could just call
each in turn?
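
As a rough sketch of that registration idea (every name below is hypothetical,
just to show the shape; no such API exists in the tree):

/* Drivers that can clear poison (pmem, nvdimm, ...) register a handler. */
typedef bool (*unpoison_fn)(unsigned long pfn);

static unpoison_fn unpoison_handlers[4];
static int nr_unpoison_handlers;

int register_unpoison_handler(unpoison_fn fn)
{
	if (nr_unpoison_handlers >= ARRAY_SIZE(unpoison_handlers))
		return -ENOSPC;
	unpoison_handlers[nr_unpoison_handlers++] = fn;
	return 0;
}

void arch_handle_poison(unsigned long pfn)
{
	int i;

	/* Give pmem/nvdimm a chance to clear the poison first. */
	for (i = 0; i < nr_unpoison_handlers; i++)
		if (unpoison_handlers[i](pfn))
			return;

	/* Nobody could clear it: map the page out of the 1:1 region. */
	arch_unmap_kpfn(pfn);
}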

-Tony




RE: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-21 Thread Luck, Tony
>> +	if (set_memory_np(decoy_addr, 1))
>> +		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map \n", pfn);
>
> Does this patch handle breaking up 512 GiB, 1 GiB or 2 MiB page mappings
> if it's just trying to mark a 4 KiB page as bad?

Yes.  The 1:1 mappings start out using the largest supported page size.  This
call will break up huge/large pages so that only 4KB is mapped out.
[This will affect performance because of the extra levels of TLB walks]

> Although the kernel doesn't use MTRRs itself anymore, what if the system
> BIOS still uses them for some memory regions, and the bad address falls in
> an MTRR region?

This code is called after mm/memory-failure.c:memory_failure() has already
checked that the page is one managed by the kernel.  In general machine checks
from other regions are going to be called out as fatal before we get here.

-Tony






RE: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-21 Thread Elliott, Robert (Persistent Memory)
> +	decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
> +#else
> +#error "no unused virtual bit available"
> +#endif
> +
> +	if (set_memory_np(decoy_addr, 1))
> +		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map \n", pfn);

Does this patch handle breaking up 512 GiB, 1 GiB or 2 MiB page mappings
if it's just trying to mark a 4 KiB page as bad?

Although the kernel doesn't use MTRRs itself anymore, what if the system
BIOS still uses them for some memory regions, and the bad address falls in
an MTRR region?

---
Robert Elliott, HPE Persistent Memory






RE: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-21 Thread Elliott, Robert (Persistent Memory)

> -Original Message-
> From: linux-kernel-ow...@vger.kernel.org [mailto:linux-kernel-ow...@vger.kernel.org] On Behalf Of Luck, Tony
> Sent: Wednesday, June 21, 2017 12:54 PM
> To: Naoya Horiguchi 
> Cc: Borislav Petkov ; Dave Hansen ;
> x...@kernel.org; linux...@kvack.org; linux-kernel@vger.kernel.org

(adding linux-nvdimm list in this reply)

> Subject: Re: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1
> mappings of poison pages
> 
> On Wed, Jun 21, 2017 at 02:12:27AM +, Naoya Horiguchi wrote:
> 
> > Shouldn't we have a reverse operation of this to cancel the unmapping
> > when unpoisoning?
> 
> When we have unpoisoning, we can add something.  We don't seem to have
> an inverse function for "set_memory_np" to just flip the _PRESENT bit
> back on again. But it would be trivial to write a set_memory_pp().
> 
> Since we'd be doing this after the poison has been cleared, we wouldn't
> need to play games with the address.  We'd just use:
> 
>   set_memory_pp((unsigned long)pfn_to_kaddr(pfn), 1);
> 
> -Tony

Persistent memory does have unpoisoning and would require this inverse
operation - see drivers/nvdimm/pmem.c pmem_clear_poison() and core.c
nvdimm_clear_poison().

---
Robert Elliott, HPE Persistent Memory






Re: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-21 Thread Luck, Tony
On Wed, Jun 21, 2017 at 02:12:27AM +, Naoya Horiguchi wrote:

> Shouldn't we have a reverse operation of this to cancel the unmapping
> when unpoisoning?

When we have unpoisoning, we can add something.  We don't seem to have
an inverse function for "set_memory_np" to just flip the _PRESENT bit
back on again. But it would be trivial to write a set_memory_pp().

Since we'd be doing this after the poison has been cleared, we wouldn't
need to play games with the address.  We'd just use:

set_memory_pp((unsigned long)pfn_to_kaddr(pfn), 1);
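
(If it were added, set_memory_pp() would presumably mirror set_memory_np() in
arch/x86/mm/pageattr.c, something like the sketch below; this is not existing
code, and change_page_attr_set() is the static helper the other set_memory_*()
functions there use:)

int set_memory_pp(unsigned long addr, int numpages)
{
	/* Set _PAGE_PRESENT again on the given linear-map range. */
	return change_page_attr_set(&addr, numpages,
				    __pgprot(_PAGE_PRESENT), 0);
}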

-Tony


Re: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-21 Thread Luck, Tony
On Mon, Jun 19, 2017 at 08:01:47PM +0200, Borislav Petkov wrote:
> (drop stable from CC)
> 
> You could use git's --suppress-cc= option when sending.

I would if I could work out how to use it. From reading the manual
page there seem to be a few options to this, but none of them appear
to just drop a specific address (apart from my own). :-(

> > +#ifdef CONFIG_X86_64
> > +
> > +void arch_unmap_kpfn(unsigned long pfn)
> > +{
> 
> I guess you can move the ifdeffery inside the function.

If I do, then the compiler will emit an empty function. It's only
a couple of bytes for the "ret" ... but why?  I may change it
to:

   #if defined(arch_unmap_kpfn) && defined(CONFIG_MEMORY_FAILURE)

to narrow down further when we need this.

> > +#if PGDIR_SHIFT + 9 < 63 /* 9 because cpp doesn't grok ilog2(PTRS_PER_PGD) */
> 
> Please no side comments.

Ok.

> Also, explain why the build-time check. (Sign-extension going away for VA
> space yadda yadda..., 5 2/3 level paging :-))

Will add.
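
(For reference, a worked example of the decoy computation, assuming 4-level
paging and the default non-KASLR PAGE_OFFSET of 0xffff880000000000, with a
made-up pfn:

	pfn                                          = 0x12345
	pfn_to_kaddr(pfn)
	    = PAGE_OFFSET + (pfn << PAGE_SHIFT)      = 0xffff880012345000
	decoy_addr
	    = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63))
	                                             = 0x7fff880012345000

Only bit 63 differs, so the decoy address is non-canonical and never looks
like a live kernel pointer, yet the page-table walk in set_memory_np() indexes
only with bits below PGDIR_SHIFT + 9 = 48 and therefore still reaches the same
PTE.  That is what the build-time check guarantees.)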

> Also, I'm assuming this whole "workaround" of sorts should be Intel-only?

I'd assume that other X86 implementations would face similar issues (unless
they have extremely cautious pre-fetchers and/or no speculation).

I'm also assuming that non-X86 architectures that do recovery may want this
too ... hence hooking the arch_unmap_kpfn() function into the generic
memory_failure() code.

> > +   decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
> > +#else
> > +#error "no unused virtual bit available"
> > +#endif
> > +
> > +   if (set_memory_np(decoy_addr, 1))
> > +   pr_warn("Could not invalidate pfn=0x%lx from 1:1 map \n", pfn);
> 
> WARNING: unnecessary whitespace before a quoted newline
> #107: FILE: arch/x86/kernel/cpu/mcheck/mce.c:1089:
> +   pr_warn("Could not invalidate pfn=0x%lx from 1:1 map \n", pfn);

Oops!  Will fix.

-Tony


Re: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-20 Thread Naoya Horiguchi
(drop stable from CC)

On Fri, Jun 16, 2017 at 12:02:00PM -0700, Luck, Tony wrote:
> From: Tony Luck 
> 
> Speculative processor accesses may reference any memory that has a
> valid page table entry.  While a speculative access won't generate
> a machine check, it will log the error in a machine check bank. That
> could cause escalation of a subsequent error since the overflow bit
> will then be set in the machine check bank status register.
> 
> Code has to be double-plus-tricky to avoid mentioning the 1:1 virtual
> address of the page we want to map out; otherwise we may trigger the
> very problem we are trying to avoid.  We use a non-canonical address
> that passes through the usual Linux table walking code to get to the
> same "pte".
> 
> Cc: Dave Hansen 
> Cc: Naoya Horiguchi 
> Cc: x...@kernel.org
> Cc: linux...@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Cc: sta...@vger.kernel.org
> Signed-off-by: Tony Luck 
> ---
> Thanks to Dave Hansen for reviewing several iterations of this.
> 
>  arch/x86/include/asm/page_64.h   |  4 
>  arch/x86/kernel/cpu/mcheck/mce.c | 35 +++
>  include/linux/mm_inline.h|  6 ++
>  mm/memory-failure.c  |  2 ++
>  4 files changed, 47 insertions(+)
> 
> diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
> index b4a0d43248cf..b50df06ad251 100644
> --- a/arch/x86/include/asm/page_64.h
> +++ b/arch/x86/include/asm/page_64.h
> @@ -51,6 +51,10 @@ static inline void clear_page(void *page)
>  
>  void copy_page(void *to, void *from);
>  
> +#ifdef CONFIG_X86_MCE
> +#define arch_unmap_kpfn arch_unmap_kpfn
> +#endif
> +
>  #endif   /* !__ASSEMBLY__ */
>  
>  #ifdef CONFIG_X86_VSYSCALL_EMULATION
> > diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
> index 5cfbaeb6529a..56563db0b2be 100644
> --- a/arch/x86/kernel/cpu/mcheck/mce.c
> +++ b/arch/x86/kernel/cpu/mcheck/mce.c
> @@ -51,6 +51,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include "mce-internal.h"
>  
> @@ -1056,6 +1057,40 @@ static int do_memory_failure(struct mce *m)
>   return ret;
>  }
>  
> +#ifdef CONFIG_X86_64
> +
> +void arch_unmap_kpfn(unsigned long pfn)
> +{
> > +	unsigned long decoy_addr;
> > +
> > +	/*
> > +	 * Unmap this page from the kernel 1:1 mappings to make sure
> > +	 * we don't log more errors because of speculative access to
> > +	 * the page.
> > +	 * We would like to just call:
> > +	 *	set_memory_np((unsigned long)pfn_to_kaddr(pfn), 1);
> > +	 * but doing that would radically increase the odds of a
> > +	 * speculative access to the poison page because we'd have
> > +	 * the virtual address of the kernel 1:1 mapping sitting
> > +	 * around in registers.
> > +	 * Instead we get tricky.  We create a non-canonical address
> > +	 * that looks just like the one we want, but has bit 63 flipped.
> > +	 * This relies on set_memory_np() not checking whether we passed
> > +	 * a legal address.
> > +	 */
> > +
> > +#if PGDIR_SHIFT + 9 < 63 /* 9 because cpp doesn't grok ilog2(PTRS_PER_PGD) */
> > +	decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
> > +#else
> > +#error "no unused virtual bit available"
> > +#endif
> > +
> > +	if (set_memory_np(decoy_addr, 1))
> > +		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map \n", pfn);
> +
> +}
> +#endif
> +
>  /*
>   * The actual machine check handler. This only handles real
>   * exceptions when something got corrupted coming in through int 18.
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index e030a68ead7e..25438b2b6f22 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -126,4 +126,10 @@ static __always_inline enum lru_list page_lru(struct 
> page *page)
>  
>  #define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
>  
> +#ifdef arch_unmap_kpfn
> +extern void arch_unmap_kpfn(unsigned long pfn);
> +#else
> +static __always_inline void arch_unmap_kpfn(unsigned long pfn) { }
> +#endif
> +
>  #endif
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 342fac9ba89b..9479e190dcbd 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1071,6 +1071,8 @@ int memory_failure(unsigned long pfn, int trapno, int 
> flags)
>   return 0;
>   }
>  
> + arch_unmap_kpfn(pfn);
> +

Shouldn't we have a reverse operation of this to cancel the unmapping
when unpoisoning?

Thanks,
Naoya Horiguchi


Re: [PATCH] mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages

2017-06-19 Thread Borislav Petkov
(drop stable from CC)

You could use git's --suppress-cc= option when sending.

On Fri, Jun 16, 2017 at 12:02:00PM -0700, Luck, Tony wrote:
> From: Tony Luck 
> 
> Speculative processor accesses may reference any memory that has a
> valid page table entry.  While a speculative access won't generate
> a machine check, it will log the error in a machine check bank. That
> could cause escalation of a subsequent error since the overflow bit
> will then be set in the machine check bank status register.

...

> @@ -1056,6 +1057,40 @@ static int do_memory_failure(struct mce *m)
>   return ret;
>  }
>  
> +#ifdef CONFIG_X86_64
> +
> +void arch_unmap_kpfn(unsigned long pfn)
> +{

I guess you can move the ifdeffery inside the function.

> +	unsigned long decoy_addr;
> +
> +	/*
> +	 * Unmap this page from the kernel 1:1 mappings to make sure
> +	 * we don't log more errors because of speculative access to
> +	 * the page.
> +	 * We would like to just call:
> +	 *	set_memory_np((unsigned long)pfn_to_kaddr(pfn), 1);
> +	 * but doing that would radically increase the odds of a
> +	 * speculative access to the poison page because we'd have
> +	 * the virtual address of the kernel 1:1 mapping sitting
> +	 * around in registers.
> +	 * Instead we get tricky.  We create a non-canonical address
> +	 * that looks just like the one we want, but has bit 63 flipped.
> +	 * This relies on set_memory_np() not checking whether we passed
> +	 * a legal address.
> +	 */
> +
> +#if PGDIR_SHIFT + 9 < 63 /* 9 because cpp doesn't grok ilog2(PTRS_PER_PGD) */

Please no side comments.

Also, explain why the build-time check. (Sign-extension going away for VA
space yadda yadda..., 5 2/3 level paging :-))

Also, I'm assuming this whole "workaround" of sorts should be Intel-only?

> +	decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
> +#else
> +#error "no unused virtual bit available"
> +#endif
> +
> +	if (set_memory_np(decoy_addr, 1))
> +		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map \n", pfn);

WARNING: unnecessary whitespace before a quoted newline
#107: FILE: arch/x86/kernel/cpu/mcheck/mce.c:1089:
+   pr_warn("Could not invalidate pfn=0x%lx from 1:1 map \n", pfn);


-- 
Regards/Gruss,
Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)
--