Re: [PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-19 Thread Thomas Gleixner
On Fri, 19 Jul 2019, Joerg Roedel wrote:
> On Thu, Jul 18, 2019 at 11:04:57AM +0200, Thomas Gleixner wrote:
> > Joerg,
> > 
> > On Thu, 18 Jul 2019, Joerg Roedel wrote:
> > > On Wed, Jul 17, 2019 at 11:43:43PM +0200, Thomas Gleixner wrote:
> > > > On Wed, 17 Jul 2019, Joerg Roedel wrote:
> > > > > +
> > > > > +	if (!pmd_present(*pmd_k))
> > > > > +		return NULL;
> > > > >  	else
> > > > >  		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
> > > > 
> > > > So in case of unmap, this updates only the first entry in the pgd_list
> > > > because vmalloc_sync_all() will break out of the iteration over pgd_list
> > > > when NULL is returned from vmalloc_sync_one().
> > > > 
> > > > I'm surely missing something, but how is that supposed to sync _all_ page
> > > > tables on unmap as the changelog claims?
> > > 
> > > No, you are right, I missed that. It is a bug in this patch, the code
> > > that breaks out of the loop in vmalloc_sync_all() needs to be removed as
> > > well. Will do that in the next version.
> > 
> > I assume that p4d/pud do not need the pmd treatment, but a comment
> > explaining why would be appreciated.
> 
> Actually there is already a comment in this function explaining why p4d
> and pud don't need any treatment:
> 
> /*
>  * set_pgd(pgd, *pgd_k); here would be useless on PAE
>  * and redundant with the set_pmd() on non-PAE. As would
>  * set_p4d/set_pud.
>  */ 

Indeed. Why did I think there was none?

> I couldn't say it in fewer words :)

It's perfectly fine.

Thanks,

tglx


Re: [PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-19 Thread Joerg Roedel
On Thu, Jul 18, 2019 at 11:04:57AM +0200, Thomas Gleixner wrote:
> Joerg,
> 
> On Thu, 18 Jul 2019, Joerg Roedel wrote:
> > On Wed, Jul 17, 2019 at 11:43:43PM +0200, Thomas Gleixner wrote:
> > > On Wed, 17 Jul 2019, Joerg Roedel wrote:
> > > > +
> > > > +	if (!pmd_present(*pmd_k))
> > > > +		return NULL;
> > > >  	else
> > > >  		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
> > > 
> > > So in case of unmap, this updates only the first entry in the pgd_list
> > > because vmalloc_sync_all() will break out of the iteration over pgd_list
> > > when NULL is returned from vmalloc_sync_one().
> > > 
> > > I'm surely missing something, but how is that supposed to sync _all_ page
> > > tables on unmap as the changelog claims?
> > 
> > No, you are right, I missed that. It is a bug in this patch, the code
> > that breaks out of the loop in vmalloc_sync_all() needs to be removed as
> > well. Will do that in the next version.
> 
> I assume that p4d/pud do not need the pmd treatment, but a comment
> explaining why would be appreciated.

Actually there is already a comment in this function explaining why p4d
and pud don't need any treatment:

/*
 * set_pgd(pgd, *pgd_k); here would be useless on PAE
 * and redundant with the set_pmd() on non-PAE. As would
 * set_p4d/set_pud.
 */ 

I couldn't say it in fewer words :)


Regards,

Joerg


Re: [PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-18 Thread Joerg Roedel
On Thu, Jul 18, 2019 at 11:04:57AM +0200, Thomas Gleixner wrote:
> On Thu, 18 Jul 2019, Joerg Roedel wrote:
> > No, you are right, I missed that. It is a bug in this patch, the code
> > that breaks out of the loop in vmalloc_sync_all() needs to be removed as
> > well. Will do that in the next version.
> 
> I assume that p4d/pud do not need the pmd treatment, but a comment
> explaining why would be appreciated.

Yes, p4d and pud don't need to be handled here, because the code is
32-bit only, where p4d is folded anyway. Pud is only relevant with PAE
and is already mapped when the page-table is created (for performance
reasons, because pud is the top level with PAE and mapping it later
would require a TLB flush).

The pud with PAE also never changes during the life-time of the
page-table because no huge-page can be mapped at that level. I will put
that into a comment.

Thanks,

Joerg


Re: [PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-18 Thread Thomas Gleixner
Joerg,

On Thu, 18 Jul 2019, Joerg Roedel wrote:
> On Wed, Jul 17, 2019 at 11:43:43PM +0200, Thomas Gleixner wrote:
> > On Wed, 17 Jul 2019, Joerg Roedel wrote:
> > > +
> > > +	if (!pmd_present(*pmd_k))
> > > +		return NULL;
> > >  	else
> > >  		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
> > 
> > So in case of unmap, this updates only the first entry in the pgd_list
> > because vmalloc_sync_all() will break out of the iteration over pgd_list
> > when NULL is returned from vmalloc_sync_one().
> > 
> > I'm surely missing something, but how is that supposed to sync _all_ page
> > tables on unmap as the changelog claims?
> 
> No, you are right, I missed that. It is a bug in this patch, the code
> that breaks out of the loop in vmalloc_sync_all() needs to be removed as
> well. Will do that in the next version.

I assume that p4d/pud do not need the pmd treatment, but a comment
explaining why would be appreciated.

Thanks,

tglx


Re: [PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-18 Thread Joerg Roedel
Hi Thomas,

On Wed, Jul 17, 2019 at 11:43:43PM +0200, Thomas Gleixner wrote:
> On Wed, 17 Jul 2019, Joerg Roedel wrote:
> > +
> > +	if (!pmd_present(*pmd_k))
> > +		return NULL;
> >  	else
> >  		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
> 
> So in case of unmap, this updates only the first entry in the pgd_list
> because vmalloc_sync_all() will break out of the iteration over pgd_list
> when NULL is returned from vmalloc_sync_one().
> 
> I'm surely missing something, but how is that supposed to sync _all_ page
> tables on unmap as the changelog claims?

No, you are right, I missed that. It is a bug in this patch, the code
that breaks out of the loop in vmalloc_sync_all() needs to be removed as
well. Will do that in the next version.


Thanks,

Joerg


Re: [PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-18 Thread Joerg Roedel
Hi Dave,

On Wed, Jul 17, 2019 at 02:06:01PM -0700, Dave Hansen wrote:
> On 7/17/19 12:14 AM, Joerg Roedel wrote:
> > -	if (!pmd_present(*pmd))
> > +	if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
> >  		set_pmd(pmd, *pmd_k);
> 
> Wouldn't:
> 
> 	if (pmd_present(*pmd) != pmd_present(*pmd_k))
> 		set_pmd(pmd, *pmd_k);
> 
> be a bit more intuitive?

Yes, right. That is much better, I changed it in the patch.

> But, either way, these look fine.  For the series:
> 
> Reviewed-by: Dave Hansen 

Thanks!


Joerg


Re: [PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-17 Thread Thomas Gleixner
On Wed, 17 Jul 2019, Joerg Roedel wrote:

> From: Joerg Roedel 
> 
> With huge-page ioremap areas the unmappings also need to be
> synced between all page-tables. Otherwise it can cause data
> corruption when a region is unmapped and later re-used.
> 
> Make the vmalloc_sync_one() function ready to sync
> unmappings.
> 
> Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F')
> Signed-off-by: Joerg Roedel 
> ---
>  arch/x86/mm/fault.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 4a4049f6d458..d71e167662c3 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -194,11 +194,12 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
>  
>  	pmd = pmd_offset(pud, address);
>  	pmd_k = pmd_offset(pud_k, address);
> -	if (!pmd_present(*pmd_k))
> -		return NULL;
>  
> -	if (!pmd_present(*pmd))
> +	if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
>  		set_pmd(pmd, *pmd_k);
> +
> +	if (!pmd_present(*pmd_k))
> +		return NULL;
>  	else
>  		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));

So in case of unmap, this updates only the first entry in the pgd_list
because vmalloc_sync_all() will break out of the iteration over pgd_list
when NULL is returned from vmalloc_sync_one().

I'm surely missing something, but how is that supposed to sync _all_ page
tables on unmap as the changelog claims?

Thanks,

tglx


Re: [PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-17 Thread Dave Hansen
On 7/17/19 12:14 AM, Joerg Roedel wrote:
> 
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 4a4049f6d458..d71e167662c3 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -194,11 +194,12 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
>  
>  	pmd = pmd_offset(pud, address);
>  	pmd_k = pmd_offset(pud_k, address);
> -	if (!pmd_present(*pmd_k))
> -		return NULL;
>  
> -	if (!pmd_present(*pmd))
> +	if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
>  		set_pmd(pmd, *pmd_k);

Wouldn't:

	if (pmd_present(*pmd) != pmd_present(*pmd_k))
		set_pmd(pmd, *pmd_k);

be a bit more intuitive?

But, either way, these look fine.  For the series:

Reviewed-by: Dave Hansen 


[PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-17 Thread Joerg Roedel
From: Joerg Roedel 

With huge-page ioremap areas the unmappings also need to be
synced between all page-tables. Otherwise it can cause data
corruption when a region is unmapped and later re-used.

Make the vmalloc_sync_one() function ready to sync
unmappings.

Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F')
Signed-off-by: Joerg Roedel 
---
 arch/x86/mm/fault.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 4a4049f6d458..d71e167662c3 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -194,11 +194,12 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
 
 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
-	if (!pmd_present(*pmd_k))
-		return NULL;
 
-	if (!pmd_present(*pmd))
+	if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
 		set_pmd(pmd, *pmd_k);
+
+	if (!pmd_present(*pmd_k))
+		return NULL;
 	else
 		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
 
-- 
2.17.1



Re: [PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-15 Thread Thomas Gleixner
On Mon, 15 Jul 2019, Joerg Roedel wrote:

> From: Joerg Roedel 
> 
> With huge-page ioremap areas the unmappings also need to be
> synced between all page-tables. Otherwise it can cause data
> corruption when a region is unmapped and later re-used.
> 
> Make the vmalloc_sync_one() function ready to sync
> unmappings.
> 
> Signed-off-by: Joerg Roedel 

Lacks a Fixes tag methinks.

> ---
>  arch/x86/mm/fault.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 4a4049f6d458..d71e167662c3 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -194,11 +194,12 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
>  
>  	pmd = pmd_offset(pud, address);
>  	pmd_k = pmd_offset(pud_k, address);
> -	if (!pmd_present(*pmd_k))
> -		return NULL;
>  
> -	if (!pmd_present(*pmd))
> +	if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
>  		set_pmd(pmd, *pmd_k);

It took me a while to understand what this is doing. Can we please have a
comment here?

> +
> +	if (!pmd_present(*pmd_k))
> +		return NULL;

Thanks,

tglx


[PATCH 2/3] x86/mm: Sync also unmappings in vmalloc_sync_one()

2019-07-15 Thread Joerg Roedel
From: Joerg Roedel 

With huge-page ioremap areas the unmappings also need to be
synced between all page-tables. Otherwise it can cause data
corruption when a region is unmapped and later re-used.

Make the vmalloc_sync_one() function ready to sync
unmappings.

Signed-off-by: Joerg Roedel 
---
 arch/x86/mm/fault.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 4a4049f6d458..d71e167662c3 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -194,11 +194,12 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
 
 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
-	if (!pmd_present(*pmd_k))
-		return NULL;
 
-	if (!pmd_present(*pmd))
+	if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
 		set_pmd(pmd, *pmd_k);
+
+	if (!pmd_present(*pmd_k))
+		return NULL;
 	else
 		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
 
-- 
2.17.1