Re: [PATCH] x86/PoD: correct ordering of checks in p2m_pod_zero_check()

2020-04-07 Thread Andrew Cooper
On 07/04/2020 12:07, Jan Beulich wrote:
> Commit 0537d246f8db ("mm: add 'is_special_page' inline function...")
> moved the is_special_page() checks first in its respective changes to
> PoD code. While this is fine for p2m_pod_zero_check_superpage(), the
> validity of the MFN is inferred in both cases from the p2m_is_ram()
> check, which therefore also needs to come first in this 2nd instance.
>
> Take the opportunity and address latent UB here as well - transform
> the MFN into struct page_info * only after having established that
> this is a valid page.
>
> Signed-off-by: Jan Beulich 

Acked-by: Andrew Cooper 



Re: [PATCH] x86/PoD: correct ordering of checks in p2m_pod_zero_check()

2020-04-07 Thread Jan Beulich
On 07.04.2020 14:39, Paul Durrant wrote:
>> -Original Message-
>> From: Jan Beulich 
>> Sent: 07 April 2020 12:08
>> To: xen-devel@lists.xenproject.org
>> Cc: Andrew Cooper ; Roger Pau Monné ; Wei Liu ; Paul Durrant 
>> Subject: [PATCH] x86/PoD: correct ordering of checks in p2m_pod_zero_check()
>>
>> Commit 0537d246f8db ("mm: add 'is_special_page' inline function...")
>> moved the is_special_page() checks first in its respective changes to
>> PoD code. While this is fine for p2m_pod_zero_check_superpage(), the
>> validity of the MFN is inferred in both cases from the p2m_is_ram()
>> check, which therefore also needs to come first in this 2nd instance.
>>
>> Take the opportunity and address latent UB here as well - transform
>> the MFN into struct page_info * only after having established that
>> this is a valid page.
>>
>> Signed-off-by: Jan Beulich 
> 
> Reviewed-by: Paul Durrant 

Thanks.

> ...with a suggestion below
> 
>> ---
>> I will admit that this was build tested only. I did observe the crash
>> late yesterday while in the office, but got around to analyzing it only
>> today, where I'm again restricted in what I can reasonably test.
>>
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -877,23 +877,25 @@ p2m_pod_zero_check(struct p2m_domain *p2
>>      for ( i = 0; i < count; i++ )
>>      {
>>          p2m_access_t a;
>> -        struct page_info *pg;
>>
>>          mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a,
>>                                   0, NULL, NULL);
>> -        pg = mfn_to_page(mfns[i]);
>>
>>          /*
>>           * If this is ram, and not a pagetable or a special page, and
>>           * probably not mapped elsewhere, map it; otherwise, skip.
>>           */
>> -        if ( !is_special_page(pg) && p2m_is_ram(types[i]) &&
>> -             (pg->count_info & PGC_allocated) &&
>> -             !(pg->count_info & PGC_page_table) &&
>> -             ((pg->count_info & PGC_count_mask) <= max_ref) )
>> -            map[i] = map_domain_page(mfns[i]);
>> -        else
>> -            map[i] = NULL;
>> +        map[i] = NULL;
>> +        if ( p2m_is_ram(types[i]) )
>> +        {
>> +            const struct page_info *pg = mfn_to_page(mfns[i]);
> 
> Perhaps have a local scope stack variable for count_info too?

I'd view this as useful only if ...

>> +
>> +            if ( !is_special_page(pg) &&

... this could then also make use of it.
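
For illustration only, a rough sketch of the suggested variant; "ci" is a
hypothetical local caching count_info, and is_special_page() would still
need the page pointer itself, which is the limitation noted above:

    const struct page_info *pg = mfn_to_page(mfns[i]);
    unsigned long ci = pg->count_info; /* hypothetical local, not in the patch */

    if ( !is_special_page(pg) &&
         (ci & PGC_allocated) &&
         !(ci & PGC_page_table) &&
         ((ci & PGC_count_mask) <= max_ref) )
        map[i] = map_domain_page(mfns[i]);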

Jan



RE: [PATCH] x86/PoD: correct ordering of checks in p2m_pod_zero_check()

2020-04-07 Thread Paul Durrant
> -Original Message-
> From: Jan Beulich 
> Sent: 07 April 2020 12:08
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper ; Roger Pau Monné ; Wei Liu ; Paul Durrant 
> Subject: [PATCH] x86/PoD: correct ordering of checks in p2m_pod_zero_check()
> 
> Commit 0537d246f8db ("mm: add 'is_special_page' inline function...")
> moved the is_special_page() checks first in its respective changes to
> PoD code. While this is fine for p2m_pod_zero_check_superpage(), the
> validity of the MFN is inferred in both cases from the p2m_is_ram()
> check, which therefore also needs to come first in this 2nd instance.
> 
> Take the opportunity and address latent UB here as well - transform
> the MFN into struct page_info * only after having established that
> this is a valid page.
> 
> Signed-off-by: Jan Beulich 

Reviewed-by: Paul Durrant 

...with a suggestion below

> ---
> I will admit that this was build tested only. I did observe the crash
> late yesterday while in the office, but got around to analyzing it only
> today, where I'm again restricted in what I can reasonably test.
> 
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -877,23 +877,25 @@ p2m_pod_zero_check(struct p2m_domain *p2
>      for ( i = 0; i < count; i++ )
>      {
>          p2m_access_t a;
> -        struct page_info *pg;
>
>          mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a,
>                                   0, NULL, NULL);
> -        pg = mfn_to_page(mfns[i]);
>
>          /*
>           * If this is ram, and not a pagetable or a special page, and
>           * probably not mapped elsewhere, map it; otherwise, skip.
>           */
> -        if ( !is_special_page(pg) && p2m_is_ram(types[i]) &&
> -             (pg->count_info & PGC_allocated) &&
> -             !(pg->count_info & PGC_page_table) &&
> -             ((pg->count_info & PGC_count_mask) <= max_ref) )
> -            map[i] = map_domain_page(mfns[i]);
> -        else
> -            map[i] = NULL;
> +        map[i] = NULL;
> +        if ( p2m_is_ram(types[i]) )
> +        {
> +            const struct page_info *pg = mfn_to_page(mfns[i]);

Perhaps have a local scope stack variable for count_info too?

> +
> +            if ( !is_special_page(pg) &&
> +                 (pg->count_info & PGC_allocated) &&
> +                 !(pg->count_info & PGC_page_table) &&
> +                 ((pg->count_info & PGC_count_mask) <= max_ref) )
> +                map[i] = map_domain_page(mfns[i]);
> +        }
>      }
>
>      /*




[PATCH] x86/PoD: correct ordering of checks in p2m_pod_zero_check()

2020-04-07 Thread Jan Beulich
Commit 0537d246f8db ("mm: add 'is_special_page' inline function...")
moved the is_special_page() checks first in its respective changes to
PoD code. While this is fine for p2m_pod_zero_check_superpage(), the
validity of the MFN is inferred in both cases from the p2m_is_ram()
check, which therefore also needs to come first in this 2nd instance.

Take the opportunity and address latent UB here as well - transform
the MFN into struct page_info * only after having established that
this is a valid page.

Signed-off-by: Jan Beulich 
---
I will admit that this was build tested only. I did observe the crash
late yesterday while in the office, but got around to analyzing it only
today, where I'm again restricted in what I can reasonably test.

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -877,23 +877,25 @@ p2m_pod_zero_check(struct p2m_domain *p2
     for ( i = 0; i < count; i++ )
     {
         p2m_access_t a;
-        struct page_info *pg;
 
         mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a,
                                  0, NULL, NULL);
-        pg = mfn_to_page(mfns[i]);
 
         /*
          * If this is ram, and not a pagetable or a special page, and
          * probably not mapped elsewhere, map it; otherwise, skip.
          */
-        if ( !is_special_page(pg) && p2m_is_ram(types[i]) &&
-             (pg->count_info & PGC_allocated) &&
-             !(pg->count_info & PGC_page_table) &&
-             ((pg->count_info & PGC_count_mask) <= max_ref) )
-            map[i] = map_domain_page(mfns[i]);
-        else
-            map[i] = NULL;
+        map[i] = NULL;
+        if ( p2m_is_ram(types[i]) )
+        {
+            const struct page_info *pg = mfn_to_page(mfns[i]);
+
+            if ( !is_special_page(pg) &&
+                 (pg->count_info & PGC_allocated) &&
+                 !(pg->count_info & PGC_page_table) &&
+                 ((pg->count_info & PGC_count_mask) <= max_ref) )
+                map[i] = map_domain_page(mfns[i]);
+        }
     }
 
     /*
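
For readers following the latent-UB point in the description above, here is a
minimal, self-contained userspace analogue (illustrative only, not Xen code;
all names in it are made up) of why the pointer is now derived only after the
validity check: computing an out-of-range element pointer from an unvalidated
index is already undefined behaviour, before anything is dereferenced.

/* Illustrative analogue: validate the index (standing in for the
 * p2m_is_ram() check) before deriving a pointer from it, as the patch
 * now does with mfn_to_page(). */
#include <stdio.h>

struct page { unsigned long count_info; };

static struct page frame_table[16];

#define NR_FRAMES   (sizeof(frame_table) / sizeof(frame_table[0]))
#define INVALID_IDX (~0UL)

static int page_is_mappable(unsigned long idx)
{
    const struct page *pg;

    if ( idx >= NR_FRAMES )      /* reject invalid entries first */
        return 0;

    pg = &frame_table[idx];      /* safe: idx is known to be valid here */

    return !(pg->count_info & 1);
}

int main(void)
{
    printf("valid index:   %d\n", page_is_mappable(3));
    printf("invalid index: %d\n", page_is_mappable(INVALID_IDX));
    return 0;
}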