Re: [Xen-devel] [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code

2019-02-26 Thread Jan Beulich
>>> On 25.02.19 at 18:42,  wrote:
> On 2/21/19 4:50 PM, Roger Pau Monne wrote:
>> @@ -250,18 +253,37 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>>  {
>>  new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
>>   flags);
>> -p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
>> +rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
>> +if ( rc )
>> +{
>> +ASSERT_UNREACHABLE();
>> +break;
>> +}
>>  }
>>  
>>  unmap_domain_page(l1_entry);
>>  
>> -new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>> -p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
>> -p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
>> +if ( !rc )
>> +{
>> +new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>> +p2m_add_iommu_flags(&new_entry, level,
>> +IOMMUF_readable|IOMMUF_writable);
>> +rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
>> +  level + 1);
>> +if ( rc )
>> +ASSERT_UNREACHABLE();
>> +}
>>  }
>>  else
>>  ASSERT(flags & _PAGE_PRESENT);
>>  
>> +if ( rc )
>> +{
>> +ASSERT(mfn_valid(mfn));
>> +p2m_free_ptp(p2m, mfn_to_page(mfn));
>> +return rc;
>> +}
>> +
> 
> I think the idiomatic way to deal with this would be to have a label at
> the end that everything jumps to something like the attached?  That way
> you don't have to spend mental effort making sure that nothing happens
> between the error and the clean-up call.

Well, as Roger has said already, since this is not a classical
end-of-the-function error path, I did ask to get away without a
label. But you're the maintainer of the code, so if you want it
that way, then I guess I misguided Roger.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code

2019-02-26 Thread Roger Pau Monné
On Mon, Feb 25, 2019 at 05:42:02PM +, George Dunlap wrote:
> On 2/21/19 4:50 PM, Roger Pau Monne wrote:
> > This is in preparation for also changing p2m_entry_modify to return an
> > error code.
> > 
> > No functional change intended.
> > 
> > Signed-off-by: Roger Pau Monné 
> > ---
> [snip]
> > diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> > index 04e9d81cf6..254c5dfd19 100644
> > --- a/xen/arch/x86/mm/p2m-pt.c
> > +++ b/xen/arch/x86/mm/p2m-pt.c
> > @@ -184,6 +184,8 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >  l1_pgentry_t *p2m_entry, new_entry;
> >  void *next;
> >  unsigned int flags;
> > +int rc;
> > +mfn_t mfn = INVALID_MFN;
> >  
> >  if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
> >shift, max)) )
> > @@ -194,7 +196,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >  /* PoD/paging: Not present doesn't imply empty. */
> >  if ( !flags )
> >  {
> > -mfn_t mfn = p2m_alloc_ptp(p2m, level);
> > +mfn = p2m_alloc_ptp(p2m, level);
> >  
> >  if ( mfn_eq(mfn, INVALID_MFN) )
> >  return -ENOMEM;
> > @@ -202,13 +204,14 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >  new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> >  
> >  p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> > -p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> > +rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> > +if ( rc )
> > +ASSERT_UNREACHABLE();
> >  }
> >  else if ( flags & _PAGE_PSE )
> >  {
> >  /* Split superpages into smaller ones. */
> >  unsigned long pfn = l1e_get_pfn(*p2m_entry);
> > -mfn_t mfn;
> >  l1_pgentry_t *l1_entry;
> >  unsigned int i;
> >  
> > @@ -250,18 +253,37 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >  {
> >  new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
> >   flags);
> > -p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> > +rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> > +if ( rc )
> > +{
> > +ASSERT_UNREACHABLE();
> > +break;
> > +}
> >  }
> >  
> >  unmap_domain_page(l1_entry);
> >  
> > -new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> > -p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> > -p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> > +if ( !rc )
> > +{
> > +new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> > +p2m_add_iommu_flags(&new_entry, level,
> > +IOMMUF_readable|IOMMUF_writable);
> > +rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
> > +  level + 1);
> > +if ( rc )
> > +ASSERT_UNREACHABLE();
> > +}
> >  }
> >  else
> >  ASSERT(flags & _PAGE_PRESENT);
> >  
> > +if ( rc )
> > +{
> > +ASSERT(mfn_valid(mfn));
> > +p2m_free_ptp(p2m, mfn_to_page(mfn));
> > +return rc;
> > +}
> > +
> 
> I think the idiomatic way to deal with this would be to have a label at
> the end that everything jumps to something like the attached?  That way
> you don't have to spend mental effort making sure that nothing happens
> between the error and the clean-up call.

Jan explicitly asked to not use a label. I can change it, but I would
like to have consensus first.

Thanks, Roger.


Re: [Xen-devel] [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code

2019-02-25 Thread George Dunlap
On 2/21/19 4:50 PM, Roger Pau Monne wrote:
> This is in preparation for also changing p2m_entry_modify to return an
> error code.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné 
> ---
[snip]
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 04e9d81cf6..254c5dfd19 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -184,6 +184,8 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>  l1_pgentry_t *p2m_entry, new_entry;
>  void *next;
>  unsigned int flags;
> +int rc;
> +mfn_t mfn = INVALID_MFN;
>  
>  if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
>shift, max)) )
> @@ -194,7 +196,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>  /* PoD/paging: Not present doesn't imply empty. */
>  if ( !flags )
>  {
> -mfn_t mfn = p2m_alloc_ptp(p2m, level);
> +mfn = p2m_alloc_ptp(p2m, level);
>  
>  if ( mfn_eq(mfn, INVALID_MFN) )
>  return -ENOMEM;
> @@ -202,13 +204,14 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>  new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>  
>  p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> -p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +if ( rc )
> +ASSERT_UNREACHABLE();
>  }
>  else if ( flags & _PAGE_PSE )
>  {
>  /* Split superpages into smaller ones. */
>  unsigned long pfn = l1e_get_pfn(*p2m_entry);
> -mfn_t mfn;
>  l1_pgentry_t *l1_entry;
>  unsigned int i;
>  
> @@ -250,18 +253,37 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>  {
>  new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
>   flags);
> -p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> +rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> +if ( rc )
> +{
> +ASSERT_UNREACHABLE();
> +break;
> +}
>  }
>  
>  unmap_domain_page(l1_entry);
>  
> -new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> -p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> -p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +if ( !rc )
> +{
> +new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> +p2m_add_iommu_flags(&new_entry, level,
> +IOMMUF_readable|IOMMUF_writable);
> +rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
> +  level + 1);
> +if ( rc )
> +ASSERT_UNREACHABLE();
> +}
>  }
>  else
>  ASSERT(flags & _PAGE_PRESENT);
>  
> +if ( rc )
> +{
> +ASSERT(mfn_valid(mfn));
> +p2m_free_ptp(p2m, mfn_to_page(mfn));
> +return rc;
> +}
> +

I think the idiomatic way to deal with this would be to have a label at
the end that everything jumps to something like the attached?  That way
you don't have to spend mental effort making sure that nothing happens
between the error and the clean-up call.

Everything else looks good to me; thanks for doing this.

 -George
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 254c5dfd19..1eb22bd06e 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -206,7 +206,10 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
 p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
 rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
 if ( rc )
+{
 ASSERT_UNREACHABLE();
+goto free_ptp;
+}
 }
 else if ( flags & _PAGE_PSE )
 {
@@ -257,39 +260,37 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
 if ( rc )
 {
 ASSERT_UNREACHABLE();
-break;
+unmap_domain_page(l1_entry);
+goto free_ptp;
 }
 }
 
 unmap_domain_page(l1_entry);
 
-if ( !rc )
-{
-new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
-p2m_add_iommu_flags(&new_entry, level,
-IOMMUF_readable|IOMMUF_writable);
-rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
-  level + 1);
-if ( rc )
-ASSERT_UNREACHABLE();
+new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
+p2m_add_iommu_flags(&new_entry, level,
+IOMMUF_readable|IOMMUF_writable);
+

Re: [Xen-devel] [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code

2019-02-22 Thread Jan Beulich
>>> On 21.02.19 at 17:50,  wrote:
> @@ -202,13 +204,14 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>  new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>  
>  p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> -p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +if ( rc )
> +ASSERT_UNREACHABLE();

Why not simply ASSERT(!rc)? The same applies further down.

> --- a/xen/arch/x86/mm/shadow/none.c
> +++ b/xen/arch/x86/mm/shadow/none.c
> @@ -60,11 +60,12 @@ static void _update_paging_modes(struct vcpu *v)
>  ASSERT_UNREACHABLE();
>  }
>  
> -static void _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
> - l1_pgentry_t *p, l1_pgentry_t new,
> - unsigned int level)
> +static int _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
> +l1_pgentry_t *p, l1_pgentry_t new,
> +unsigned int level)
>  {
>  ASSERT_UNREACHABLE();
> +return -ENOSYS;

-EOPNOTSUPP or basically anything else, but not -ENOSYS please.

Jan




Re: [Xen-devel] [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code

2019-02-21 Thread Wei Liu
On Thu, Feb 21, 2019 at 05:50:39PM +0100, Roger Pau Monne wrote:
[...]
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 04e9d81cf6..254c5dfd19 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -184,6 +184,8 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>  l1_pgentry_t *p2m_entry, new_entry;
>  void *next;
>  unsigned int flags;
> +int rc;

Shouldn't rc be initialised to 0 here? There seems to be a path which
can leave rc uninitialised without calling write_p2m_entry.

Wei.


[Xen-devel] [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code

2019-02-21 Thread Roger Pau Monne
This is in preparation for also changing p2m_entry_modify to return an
error code.

No functional change intended.

Signed-off-by: Roger Pau Monné 
---
Cc: George Dunlap 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Wei Liu 
Cc: Tim Deegan 
---
Changes since v4:
 - Handle errors in loops to avoid overwriting the variable
   containing the error code in non-debug builds.
 - Change error handling in p2m_next_level so it's done in a single
   place.

Changes since v3:
 - Use asserts instead of bugs to check return code from
   write_p2m_entry from callers that don't support or expect
   write_p2m_entry to fail.

Changes since v2:
 - New in this version.
---
 xen/arch/x86/mm/hap/hap.c|  4 +-
 xen/arch/x86/mm/hap/nested_hap.c |  4 +-
 xen/arch/x86/mm/p2m-pt.c | 83 ++--
 xen/arch/x86/mm/paging.c | 12 +++--
 xen/arch/x86/mm/shadow/common.c  |  4 +-
 xen/arch/x86/mm/shadow/none.c|  7 +--
 xen/arch/x86/mm/shadow/private.h |  6 +--
 xen/include/asm-x86/p2m.h|  4 +-
 xen/include/asm-x86/paging.h |  8 +--
 9 files changed, 97 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 2db7f2c04a..fdf77c59a5 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -708,7 +708,7 @@ static void hap_update_paging_modes(struct vcpu *v)
 put_gfn(d, cr3_gfn);
 }
 
-static void
+static int
 hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
 l1_pgentry_t new, unsigned int level)
 {
@@ -746,6 +746,8 @@ hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
 
 if ( flush_nestedp2m )
 p2m_flush_nestedp2m(d);
+
+return 0;
 }
 
 static unsigned long hap_gva_to_gfn_real_mode(
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index d2a07a5c79..abe5958a52 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -71,7 +71,7 @@
 /*NESTED VIRT P2M FUNCTIONS */
 //
 
-void
+int
 nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
 l1_pgentry_t *p, l1_pgentry_t new, unsigned int level)
 {
@@ -87,6 +87,8 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
 flush_tlb_mask(p2m->dirty_cpumask);
 
 paging_unlock(d);
+
+return 0;
 }
 
 //
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 04e9d81cf6..254c5dfd19 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -184,6 +184,8 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
 l1_pgentry_t *p2m_entry, new_entry;
 void *next;
 unsigned int flags;
+int rc;
+mfn_t mfn = INVALID_MFN;
 
 if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
   shift, max)) )
@@ -194,7 +196,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
 /* PoD/paging: Not present doesn't imply empty. */
 if ( !flags )
 {
-mfn_t mfn = p2m_alloc_ptp(p2m, level);
+mfn = p2m_alloc_ptp(p2m, level);
 
 if ( mfn_eq(mfn, INVALID_MFN) )
 return -ENOMEM;
@@ -202,13 +204,14 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
 new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
 
 p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
-p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+if ( rc )
+ASSERT_UNREACHABLE();
 }
 else if ( flags & _PAGE_PSE )
 {
 /* Split superpages into smaller ones. */
 unsigned long pfn = l1e_get_pfn(*p2m_entry);
-mfn_t mfn;
 l1_pgentry_t *l1_entry;
 unsigned int i;
 
@@ -250,18 +253,37 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
 {
 new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
  flags);
-p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+if ( rc )
+{
+ASSERT_UNREACHABLE();
+break;
+}
 }
 
 unmap_domain_page(l1_entry);
 
-new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
-p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
-p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+if ( !rc )
+{
+new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
+p2m_add_iommu_flags(&new_entry, level,
+IOMMUF_readable|IOMMUF_writable);
+rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry,