Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2018-04-18 Thread Jan Beulich
>>> On 06.12.17 at 08:50,  wrote:
> One 4K-byte page at most contains 128 'ioreq_t'. In order to remove the vcpu
> number constraint imposed by one IOREQ page, bump the number of IOREQ page to
> 4 pages. With this patch, multiple pages can be used as IOREQ page.

In case I didn't say so before - I'm opposed to simply changing the upper limit
here. Please make sure any number of vCPU-s can be supported by the new
code (it looks like it mostly does already, so it may be mainly the description
which needs changing). If we want to impose an upper limit, this shouldn't
affect the ioreq page handling code at all.

> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -64,14 +64,24 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
>  continue; \
>  else
>  
> +/* Iterate over all ioreq pages */
> +#define FOR_EACH_IOREQ_PAGE(s, i, iorp) \
> +for ( (i) = 0, iorp = s->ioreq; (i) < (s)->ioreq_page_nr; (i)++, iorp++ )

You're going too far with parenthesization here: just like iorp, i is required
to be a simple identifier anyway. On the other hand, you parenthesize s only
once.
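
As a concrete illustration (a minimal, self-contained sketch; the structure and
field names are simplified stand-ins, not the patch's actual types): 's' is
parenthesized wherever it is expanded, while the simple identifiers 'i' and
'iorp' are left alone.

#include <stdio.h>

struct iorp { unsigned long gfn; };
struct server { struct iorp ioreq[4]; unsigned int ioreq_page_nr; };

/* 's' may be an arbitrary expression, so it gets parentheses at every
 * expansion; 'i' and 'iorp' must be plain identifiers, so they don't. */
#define FOR_EACH_IOREQ_PAGE(s, i, iorp) \
    for ( i = 0, iorp = (s)->ioreq; i < (s)->ioreq_page_nr; i++, iorp++ )

int main(void)
{
    struct server srv = { .ioreq_page_nr = 4 };
    unsigned int i;
    struct iorp *p;

    FOR_EACH_IOREQ_PAGE(&srv, i, p)
        p->gfn = ~0UL;              /* e.g. mark every page's gfn invalid */

    printf("initialised %u ioreq page slots\n", i);
    return 0;
}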

> +static ioreq_t *get_ioreq_fallible(struct hvm_ioreq_server *s, struct vcpu *v)

What is "fallible"? And why such a separate wrapper anyway? But I guess a lot
of this will need to change anyway with Paul's recent changes. I'm therefore not
going to look in close detail at any of this.

>  static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>  {
> -    hvm_unmap_ioreq_gfn(s, &s->ioreq);
> +    int i;

Taking this as an example - please use unsigned int in all cases where the
value can't go negative.

> @@ -688,8 +741,15 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
>      INIT_LIST_HEAD(&s->ioreq_vcpu_list);
>      spin_lock_init(&s->bufioreq_lock);
>  
> -    s->ioreq.gfn = INVALID_GFN;
> +    FOR_EACH_IOREQ_PAGE(s, i, iorp)
> +        iorp->gfn = INVALID_GFN;
>      s->bufioreq.gfn = INVALID_GFN;
> +    s->ioreq_page_nr = (d->max_vcpus + IOREQ_NUM_PER_PAGE - 1) /
> +                       IOREQ_NUM_PER_PAGE;

DIV_ROUND_UP() - please don't open-code things.
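
For illustration, a quick sketch of what the suggested change amounts to,
assuming the usual round-up-division definition of DIV_ROUND_UP() (the
IOREQ_NUM_PER_PAGE value of 128 is the figure quoted in the patch description):

#include <stdio.h>

/* Same shape as the usual DIV_ROUND_UP() helper: round-up integer division. */
#define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))
#define IOREQ_NUM_PER_PAGE  128   /* one 4K page holds 128 ioreq_t entries */

int main(void)
{
    unsigned int vcpus;

    /* i.e. s->ioreq_page_nr = DIV_ROUND_UP(d->max_vcpus, IOREQ_NUM_PER_PAGE); */
    for ( vcpus = 1; vcpus <= 512; vcpus += 127 )
        printf("%3u vcpus -> %u ioreq page(s)\n",
               vcpus, DIV_ROUND_UP(vcpus, IOREQ_NUM_PER_PAGE));
    return 0;
}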

> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -279,6 +279,12 @@
>  #define XEN_HVM_MCA_CAP_LMCE   (xen_mk_ullong(1) << 0)
>  #define XEN_HVM_MCA_CAP_MASK   XEN_HVM_MCA_CAP_LMCE
>  
> -#define HVM_NR_PARAMS 39
> +/*
> + * Number of pages that are reserved for default IOREQ server. The base PFN
> + * is set via HVM_PARAM_IOREQ_PFN.
> + */
> +#define HVM_PARAM_IOREQ_PAGES 39

Why is this needed? It can be derived from the vCPU count permitted for the
domain.

Jan




Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-15 Thread Paul Durrant
> -Original Message-
> From: Chao Gao [mailto:chao@intel.com]
> Sent: 15 December 2017 00:36
> To: Paul Durrant 
> Cc: Stefano Stabellini ; Wei Liu
> ; Andrew Cooper ; Tim
> (Xen.org) ; George Dunlap ;
> xen-de...@lists.xen.org; Jan Beulich ; Ian Jackson
> 
> Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> pages
> 
> On Thu, Dec 14, 2017 at 02:50:17PM +, Paul Durrant wrote:
> >> -Original Message-
> >> >
> >> > Hmm. That looks like it is because the ioreq server pages are not owned
> by
> >> > the correct domain. The Xen patch series underwent some changes
> later in
> >> > review and I did not re-test my QEMU patch after that so I wonder if
> >> > mapping IOREQ pages has simply become broken. I'll investigate.
> >> >
> >>
> >> I have reproduced the problem locally now. Will try to figure out the bug
> >> tomorrow.
> >>
> >
> >Chao,
> >
> >  Can you try my new branch
> http://xenbits.xen.org/gitweb/?p=people/pauldu/xen.git;a=shortlog;h=refs
> /heads/ioreq24?
> >
> >  The problem was indeed that the ioreq pages were owned by the
> emulating domain rather than the target domain, which is no longer
> compatible with privcmd's use of HYPERVISOR_mmu_update.
> 
> Of course. I tested this branch. It works well.
> 
> But I think your privcmd patch can't set 'err_ptr' to NULL when calling
> xen_remap_domain_mfn_array(). It only works because the ioreq page is
> allocated right before the bufioreq page, so the two happen to be
> contiguous.
> 

I'll have a look at that. The pages should not need to be contiguous MFNs for
things to work. They will, by design, be mapped so that they are virtually
contiguous. That's just a convenient way of getting pointers to the buffered
and synchronous structures in QEMU using only a single IOCTL to privcmd.
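
For what it's worth, a minimal sketch (not QEMU's actual code) of the layout
being described: with the structures mapped virtually contiguously, a single
mapping yields both pointers by offset. A plain allocation stands in for the
single privcmd ioctl here, and the page order (buffered page first, then the
synchronous page(s)) is only an assumption for illustration.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

int main(void)
{
    unsigned int nr_sync_pages = 4;   /* e.g. enough ioreq_t slots for 512 vcpus */
    uint8_t *base = calloc(1 + nr_sync_pages, PAGE_SIZE);  /* stand-in mapping */

    if ( !base )
        return 1;

    void *bufioreq = base;            /* buffered ioreq structures */
    void *ioreq = base + PAGE_SIZE;   /* synchronous ioreq_t array */

    printf("bufioreq at offset 0, sync ioreqs at offset %ld\n",
           (long)((uint8_t *)ioreq - base));
    free(base);
    return 0;
}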

  Paul

> Thanks
> Chao


Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-14 Thread Chao Gao
On Thu, Dec 14, 2017 at 02:50:17PM +, Paul Durrant wrote:
>> -Original Message-
>> >
>> > Hmm. That looks like it is because the ioreq server pages are not owned by
>> > the correct domain. The Xen patch series underwent some changes later in
>> > review and I did not re-test my QEMU patch after that so I wonder if
>> > mapping IOREQ pages has simply become broken. I'll investigate.
>> >
>> 
>> I have reproduced the problem locally now. Will try to figure out the bug
>> tomorrow.
>> 
>
>Chao,
>
>  Can you try my new branch 
> http://xenbits.xen.org/gitweb/?p=people/pauldu/xen.git;a=shortlog;h=refs/heads/ioreq24?
>
>  The problem was indeed that the ioreq pages were owned by the emulating 
> domain rather than the target domain, which is no longer compatible with 
> privcmd's use of HYPERVISOR_mmu_update.

Of course. I tested this branch. It works well.

But I think your privcmd patch can't set 'err_ptr' to NULL when calling
xen_remap_domain_mfn_array(). It only works because the ioreq page is
allocated right before the bufioreq page, so the two happen to be
contiguous.

Thanks
Chao


Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-14 Thread Paul Durrant
> -Original Message-
> >
> > Hmm. That looks like it is because the ioreq server pages are not owned by
> > the correct domain. The Xen patch series underwent some changes later in
> > review and I did not re-test my QEMU patch after that so I wonder if
> > mapping IOREQ pages has simply become broken. I'll investigate.
> >
> 
> I have reproduced the problem locally now. Will try to figure out the bug
> tomorrow.
> 

Chao,

  Can you try my new branch 
http://xenbits.xen.org/gitweb/?p=people/pauldu/xen.git;a=shortlog;h=refs/heads/ioreq24?

  The problem was indeed that the ioreq pages were owned by the emulating 
domain rather than the target domain, which is no longer compatible with 
privcmd's use of HYPERVISOR_mmu_update.

  Cheers,

Paul

Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-13 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf
> Of Paul Durrant
> Sent: 13 December 2017 10:49
> To: 'Chao Gao' 
> Cc: Stefano Stabellini ; Wei Liu
> ; Andrew Cooper ; Tim
> (Xen.org) ; George Dunlap ;
> xen-de...@lists.xen.org; Jan Beulich ; Ian Jackson
> 
> Subject: Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of
> IOREQ page to 4 pages
> 
> > -Original Message-
> > From: Chao Gao [mailto:chao@intel.com]
> > Sent: 12 December 2017 23:39
> > To: Paul Durrant 
> > Cc: Stefano Stabellini ; Wei Liu
> > ; Andrew Cooper ;
> Tim
> > (Xen.org) ; George Dunlap ;
> > xen-de...@lists.xen.org; Jan Beulich ; Ian Jackson
> > 
> > Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> > pages
> >
> > On Tue, Dec 12, 2017 at 09:07:46AM +, Paul Durrant wrote:
> > >> -Original Message-
> > >[snip]
> > >>
> > >> Hi, Paul.
> > >>
> > >> I merged the two qemu patches, the privcmd patch [1] and did some
> > tests.
> > >> I encountered a small issue and report it to you, so you can pay more
> > >> attention to it when doing some tests. The symptom is that using the
> new
> > >> interface to map grant table in xc_dom_gnttab_seed() always fails.
> After
> > >> adding some printk in privcmd, I found it is
> > >> xen_remap_domain_gfn_array() that fails with errcode -16. Mapping
> > ioreq
> > >> server doesn't have such an issue.
> > >>
> > >> [1]
> > >>
> >
> http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=ce5
> > >> 9a05e6712
> > >>
> > >
> > >Chao,
> > >
> > >  That privcmd patch is out of date. I've just pushed a new one:
> > >
> >
> >http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=9f
> > 00199f5f12cef401c6370c94a1140de9b318fc
> > >
> > >  Give that a try. I've been using it for a few weeks now.
> >
> > Mapping ioreq server always fails, while mapping grant table succeeds.
> >
> > QEMU fails with following log:
> > xenforeignmemory: error: ioctl failed: Device or resource busy
> > qemu-system-i386: failed to map ioreq server resources: error 16
> > handle=0x5614a6df5e00
> > qemu-system-i386: xen hardware virtual machine initialisation failed
> >
> > Xen encountered the following error:
> (XEN) [13118.909787] mm.c:1003:d0v109 pg_owner d2 l1e_owner d0, but real_pg_owner d0
> (XEN) [13118.918122] mm.c:1079:d0v109 Error getting mfn 5da5841 (pfn ) from L1 entry 805da5841227 for l1e_owner d0, pg_owner d2
> 
> Hmm. That looks like it is because the ioreq server pages are not owned by
> the correct domain. The Xen patch series underwent some changes later in
> review and I did not re-test my QEMU patch after that so I wonder if
> mapping IOREQ pages has simply become broken. I'll investigate.
> 

I have reproduced the problem locally now. Will try to figure out the bug 
tomorrow.

  Paul

>   Paul
> 
> >
> > I only fixed some obvious issues with a patch to your privcmd patch:
> > --- a/arch/x86/xen/mmu.c
> > +++ b/arch/x86/xen/mmu.c
> > @@ -181,7 +181,7 @@ int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
> >  	if (xen_feature(XENFEAT_auto_translated_physmap))
> >  		return -EOPNOTSUPP;
> >
> > -	return do_remap_gfn(vma, addr, &gfn, nr, NULL, prot, domid, pages);
> > +	return do_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false, pages);
> >  }
> >  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range);
> >
> > @@ -200,8 +200,8 @@ int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
> >  	 * cause of "wrong memory was mapped in".
> >  	 */
> >  	BUG_ON(err_ptr == NULL);
> > -	do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
> > -		     false, pages);
> > +	return do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
> > +			    false, pages);
> >  }
> >  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array);
> >  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array);
> >
> > Thanks
> > Chao
> 

Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-13 Thread Paul Durrant
> -Original Message-
> From: Chao Gao [mailto:chao@intel.com]
> Sent: 12 December 2017 23:39
> To: Paul Durrant 
> Cc: Stefano Stabellini ; Wei Liu
> ; Andrew Cooper ; Tim
> (Xen.org) ; George Dunlap ;
> xen-de...@lists.xen.org; Jan Beulich ; Ian Jackson
> 
> Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> pages
> 
> On Tue, Dec 12, 2017 at 09:07:46AM +, Paul Durrant wrote:
> >> -Original Message-
> >[snip]
> >>
> >> Hi, Paul.
> >>
> >> I merged the two qemu patches, the privcmd patch [1] and did some
> tests.
> >> I encountered a small issue and report it to you, so you can pay more
> >> attention to it when doing some tests. The symptom is that using the new
> >> interface to map grant table in xc_dom_gnttab_seed() always fails. After
> >> adding some printk in privcmd, I found it is
> >> xen_remap_domain_gfn_array() that fails with errcode -16. Mapping
> ioreq
> >> server doesn't have such an issue.
> >>
> >> [1]
> >>
> http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=ce5
> >> 9a05e6712
> >>
> >
> >Chao,
> >
> >  That privcmd patch is out of date. I've just pushed a new one:
> >
> >http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=9f
> 00199f5f12cef401c6370c94a1140de9b318fc
> >
> >  Give that a try. I've been using it for a few weeks now.
> 
> Mapping ioreq server always fails, while mapping grant table succeeds.
> 
> QEMU fails with following log:
> xenforeignmemory: error: ioctl failed: Device or resource busy
> qemu-system-i386: failed to map ioreq server resources: error 16
> handle=0x5614a6df5e00
> qemu-system-i386: xen hardware virtual machine initialisation failed
> 
> Xen encountered the following error:
> (XEN) [13118.909787] mm.c:1003:d0v109 pg_owner d2 l1e_owner d0, but
> real_pg_owner d0
> (XEN) [13118.918122] mm.c:1079:d0v109 Error getting mfn 5da5841 (pfn
> ) from L1 entry 805da5841227 for l1e_owner d0, pg_owner
> d2

Hmm. That looks like it is because the ioreq server pages are not owned by the 
correct domain. The Xen patch series underwent some changes later in review and 
I did not re-test my QEMU patch after that so I wonder if mapping IOREQ pages 
has simply become broken. I'll investigate.

  Paul

> 
> I only fixed some obvious issues with a patch to your privcmd patch:
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -181,7 +181,7 @@ int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return -EOPNOTSUPP;
>  
> -	return do_remap_gfn(vma, addr, &gfn, nr, NULL, prot, domid, pages);
> +	return do_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false, pages);
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range);
>  
> @@ -200,8 +200,8 @@ int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
>  	 * cause of "wrong memory was mapped in".
>  	 */
>  	BUG_ON(err_ptr == NULL);
> -	do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
> -		     false, pages);
> +	return do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
> +			    false, pages);
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array);
> 
> Thanks
> Chao


Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-12 Thread Chao Gao
On Tue, Dec 12, 2017 at 09:07:46AM +, Paul Durrant wrote:
>> -Original Message-
>[snip]
>> 
>> Hi, Paul.
>> 
>> I merged the two qemu patches, the privcmd patch [1] and did some tests.
>> I encountered a small issue and report it to you, so you can pay more
>> attention to it when doing some tests. The symptom is that using the new
>> interface to map grant table in xc_dom_gnttab_seed() always fails. After
>> adding some printk in privcmd, I found it is
>> xen_remap_domain_gfn_array() that fails with errcode -16. Mapping ioreq
>> server doesn't have such an issue.
>> 
>> [1]
>> http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=ce5
>> 9a05e6712
>> 
>
>Chao,
>
>  That privcmd patch is out of date. I've just pushed a new one:
>
>http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=9f00199f5f12cef401c6370c94a1140de9b318fc
>
>  Give that a try. I've been using it for a few weeks now.

Mapping ioreq server always fails, while mapping grant table succeeds.

QEMU fails with following log:
xenforeignmemory: error: ioctl failed: Device or resource busy
qemu-system-i386: failed to map ioreq server resources: error 16
handle=0x5614a6df5e00
qemu-system-i386: xen hardware virtual machine initialisation failed

Xen encountered the following error:
(XEN) [13118.909787] mm.c:1003:d0v109 pg_owner d2 l1e_owner d0, but 
real_pg_owner d0
(XEN) [13118.918122] mm.c:1079:d0v109 Error getting mfn 5da5841 (pfn 
) from L1 entry 805da5841227 for l1e_owner d0, pg_owner d2

I only fixed some obvious issues with a patch to your privcmd patch:
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -181,7 +181,7 @@ int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EOPNOTSUPP;
 
-	return do_remap_gfn(vma, addr, &gfn, nr, NULL, prot, domid, pages);
+	return do_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false, pages);
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range);
 
@@ -200,8 +200,8 @@ int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
 	 * cause of "wrong memory was mapped in".
 	 */
 	BUG_ON(err_ptr == NULL);
-	do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
-		     false, pages);
+	return do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
+			    false, pages);
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array);

Thanks
Chao


Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-12 Thread Paul Durrant
> -Original Message-
[snip]
> 
> Hi, Paul.
> 
> I merged the two qemu patches, the privcmd patch [1] and did some tests.
> I encountered a small issue and report it to you, so you can pay more
> attention to it when doing some tests. The symptom is that using the new
> interface to map grant table in xc_dom_gnttab_seed() always fails. After
> adding some printk in privcmd, I found it is
> xen_remap_domain_gfn_array() that fails with errcode -16. Mapping ioreq
> server doesn't have such an issue.
> 
> [1]
> http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=ce5
> 9a05e6712
> 

Chao,

  That privcmd patch is out of date. I've just pushed a new one:

http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=9f00199f5f12cef401c6370c94a1140de9b318fc

  Give that a try. I've been using it for a few weeks now.

  Cheers,

Paul

> Thanks
> Chao


Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-12 Thread Chao Gao
On Fri, Dec 08, 2017 at 11:06:43AM +, Paul Durrant wrote:
>> -Original Message-
>> From: Chao Gao [mailto:chao@intel.com]
>> Sent: 07 December 2017 06:57
>> To: Paul Durrant 
>> Cc: Stefano Stabellini ; Wei Liu
>> ; Andrew Cooper ; Tim
>> (Xen.org) ; George Dunlap ;
>> xen-de...@lists.xen.org; Jan Beulich ; Ian Jackson
>> 
>> Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
>> pages
>> 
>> On Thu, Dec 07, 2017 at 08:41:14AM +, Paul Durrant wrote:
>> >> -Original Message-
>> >> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On
>> Behalf
>> >> Of Paul Durrant
>> >> Sent: 06 December 2017 16:10
>> >> To: 'Chao Gao' 
>> >> Cc: Stefano Stabellini ; Wei Liu
>> >> ; Andrew Cooper ;
>> Tim
>> >> (Xen.org) ; George Dunlap ;
>> >> xen-de...@lists.xen.org; Jan Beulich ; Ian Jackson
>> >> 
>> >> Subject: Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of
>> >> IOREQ page to 4 pages
>> >>
>> >> > -Original Message-
>> >> > From: Chao Gao [mailto:chao@intel.com]
>> >> > Sent: 06 December 2017 09:02
>> >> > To: Paul Durrant 
>> >> > Cc: xen-de...@lists.xen.org; Tim (Xen.org) ; Stefano
>> >> > Stabellini ; Konrad Rzeszutek Wilk
>> >> > ; Jan Beulich ; George
>> >> > Dunlap ; Andrew Cooper
>> >> > ; Wei Liu ; Ian
>> Jackson
>> >> > 
>> >> > Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page
>> to 4
>> >> > pages
>> >> >
>> >> > On Wed, Dec 06, 2017 at 03:04:11PM +, Paul Durrant wrote:
>> >> > >> -Original Message-
>> >> > >> From: Chao Gao [mailto:chao@intel.com]
>> >> > >> Sent: 06 December 2017 07:50
>> >> > >> To: xen-de...@lists.xen.org
>> >> > >> Cc: Chao Gao ; Paul Durrant
>> >> > >> ; Tim (Xen.org) ; Stefano
>> >> > Stabellini
>> >> > >> ; Konrad Rzeszutek Wilk
>> >> > >> ; Jan Beulich ;
>> George
>> >> > >> Dunlap ; Andrew Cooper
>> >> > >> ; Wei Liu ; Ian
>> >> > Jackson
>> >> > >> 
>> >> > >> Subject: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page
>> to 4
>> >> > >> pages
>> >> > >>
>> >> > >> One 4K-byte page at most contains 128 'ioreq_t'. In order to remove
>> the
>> >> > vcpu
>> >> > >> number constraint imposed by one IOREQ page, bump the number
>> of
>> >> > IOREQ
>> >> > >> page to
>> >> > >> 4 pages. With this patch, multiple pages can be used as IOREQ page.
>> >> > >>
>> >> > >> Basically, this patch extends 'ioreq' field in struct 
>> >> > >> hvm_ioreq_server
>> to
>> >> an
>> >> > >> array. All accesses to 'ioreq' field such as 's->ioreq' are replaced 
>> >> > >> with
>> >> > >> FOR_EACH_IOREQ_PAGE macro.
>> >> > >>
>> >> > >> In order to access an IOREQ page, QEMU should get the gmfn and
>> map
>> >> > this
>> >> > >> gmfn
>> >> > >> to its virtual address space.
>> >> > >
>> >> > >No. There's no need to extend the 'legacy' mechanism of using magic
>> >> page
>> >> > gfns. You should only handle the case where the mfns are allocated on
>> >> > demand (see the call to hvm_ioreq_server_alloc_pages() in
>> >> > hvm_get_ioreq_server_frame()). The number of guest vcpus is known
>> at
>> >> > this point so the correct number of pages can be allocated. If the 
>> >> > creator
>> of
>> >> > the ioreq server attempts to use the legacy
>> hvm_get_ioreq_server_info()
>> >> > and the guest has >128 vcpus then the call should fail.
>> >> >
>> >> > Great suggestion. I will introduce a new dmop, a variant of
>> >> > hvm_get_ioreq_server_frame() for creator to get an array of gfns and
>> the
>> >> > size of array. And the legacy interface will report an error if more
>> >> > than one IOREQ PAGES are needed.
>> >>
>> >> You don't need a new dmop for mapping I think. The mem op to map
>> ioreq
>> >> server frames should work. All you should need to do is update
>> >> hvm_get_ioreq_server_frame() to deal with an index > 1, and provide
>> some
>> >> means for the ioreq server creator to convert the number of guest vcpus
>> into
>> >> the correct number of pages to map. (That might need a new dm op).
>> >
>> >I realise after saying this that an emulator already knows the size of the
>> ioreq structure and so can easily calculate the correct number of pages to
>> map, given the number of guest vcpus.
>> 
>> How about the 

Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-07 Thread Chao Gao
On Thu, Dec 07, 2017 at 08:41:14AM +, Paul Durrant wrote:
>> -Original Message-
>> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf
>> Of Paul Durrant
>> Sent: 06 December 2017 16:10
>> To: 'Chao Gao' 
>> Cc: Stefano Stabellini ; Wei Liu
>> ; Andrew Cooper ; Tim
>> (Xen.org) ; George Dunlap ;
>> xen-de...@lists.xen.org; Jan Beulich ; Ian Jackson
>> 
>> Subject: Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of
>> IOREQ page to 4 pages
>> 
>> > -Original Message-
>> > From: Chao Gao [mailto:chao@intel.com]
>> > Sent: 06 December 2017 09:02
>> > To: Paul Durrant 
>> > Cc: xen-de...@lists.xen.org; Tim (Xen.org) ; Stefano
>> > Stabellini ; Konrad Rzeszutek Wilk
>> > ; Jan Beulich ; George
>> > Dunlap ; Andrew Cooper
>> > ; Wei Liu ; Ian Jackson
>> > 
>> > Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
>> > pages
>> >
>> > On Wed, Dec 06, 2017 at 03:04:11PM +, Paul Durrant wrote:
>> > >> -Original Message-
>> > >> From: Chao Gao [mailto:chao@intel.com]
>> > >> Sent: 06 December 2017 07:50
>> > >> To: xen-de...@lists.xen.org
>> > >> Cc: Chao Gao ; Paul Durrant
>> > >> ; Tim (Xen.org) ; Stefano
>> > Stabellini
>> > >> ; Konrad Rzeszutek Wilk
>> > >> ; Jan Beulich ; George
>> > >> Dunlap ; Andrew Cooper
>> > >> ; Wei Liu ; Ian
>> > Jackson
>> > >> 
>> > >> Subject: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
>> > >> pages
>> > >>
>> > >> One 4K-byte page at most contains 128 'ioreq_t'. In order to remove the
>> > vcpu
>> > >> number constraint imposed by one IOREQ page, bump the number of
>> > IOREQ
>> > >> page to
>> > >> 4 pages. With this patch, multiple pages can be used as IOREQ page.
>> > >>
>> > >> Basically, this patch extends 'ioreq' field in struct hvm_ioreq_server 
>> > >> to
>> an
>> > >> array. All accesses to 'ioreq' field such as 's->ioreq' are replaced 
>> > >> with
>> > >> FOR_EACH_IOREQ_PAGE macro.
>> > >>
>> > >> In order to access an IOREQ page, QEMU should get the gmfn and map
>> > this
>> > >> gmfn
>> > >> to its virtual address space.
>> > >
>> > >No. There's no need to extend the 'legacy' mechanism of using magic
>> page
>> > gfns. You should only handle the case where the mfns are allocated on
>> > demand (see the call to hvm_ioreq_server_alloc_pages() in
>> > hvm_get_ioreq_server_frame()). The number of guest vcpus is known at
>> > this point so the correct number of pages can be allocated. If the creator 
>> > of
>> > the ioreq server attempts to use the legacy hvm_get_ioreq_server_info()
>> > and the guest has >128 vcpus then the call should fail.
>> >
>> > Great suggestion. I will introduce a new dmop, a variant of
>> > hvm_get_ioreq_server_frame() for creator to get an array of gfns and the
>> > size of array. And the legacy interface will report an error if more
>> > than one IOREQ PAGES are needed.
>> 
>> You don't need a new dmop for mapping I think. The mem op to map ioreq
>> server frames should work. All you should need to do is update
>> hvm_get_ioreq_server_frame() to deal with an index > 1, and provide some
>> means for the ioreq server creator to convert the number of guest vcpus into
>> the correct number of pages to map. (That might need a new dm op).
>
>I realise after saying this that an emulator already knows the size of the 
>ioreq structure and so can easily calculate the correct number of pages to 
>map, given the number of guest vcpus.

How about the patch at the bottom? Is it going in the right direction?
Do you have the QEMU patch that replaces the old mapping method with the new
one? I want to integrate that patch and do some tests.

Thanks
Chao

From 44919e1e80f36981d6e213f74302c8c89cc9f828 Mon Sep 17 00:00:00 2001
From: Chao Gao 
Date: Tue, 5 Dec 2017 14:20:24 +0800
Subject: [PATCH] ioreq: add support of multiple ioreq pages

Each vcpu should have a corresponding 'ioreq_t' structure in the ioreq page.
Currently, only one 4K-byte page is used as the ioreq page, which also limits
the number of vcpus to 128 when a device model is in use.

This patch changes the 'ioreq' field to an array; at most 4 pages can be used.
When creating an ioreq server, the actual number of ioreq pages is calculated
from the number of vcpus. All ioreq pages are allocated on demand, and the
creator must provide enough gfns to set up the mapping.

For 

Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-07 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf
> Of Paul Durrant
> Sent: 06 December 2017 16:10
> To: 'Chao Gao' 
> Cc: Stefano Stabellini ; Wei Liu
> ; Andrew Cooper ; Tim
> (Xen.org) ; George Dunlap ;
> xen-de...@lists.xen.org; Jan Beulich ; Ian Jackson
> 
> Subject: Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of
> IOREQ page to 4 pages
> 
> > -Original Message-
> > From: Chao Gao [mailto:chao@intel.com]
> > Sent: 06 December 2017 09:02
> > To: Paul Durrant 
> > Cc: xen-de...@lists.xen.org; Tim (Xen.org) ; Stefano
> > Stabellini ; Konrad Rzeszutek Wilk
> > ; Jan Beulich ; George
> > Dunlap ; Andrew Cooper
> > ; Wei Liu ; Ian Jackson
> > 
> > Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> > pages
> >
> > On Wed, Dec 06, 2017 at 03:04:11PM +, Paul Durrant wrote:
> > >> -Original Message-
> > >> From: Chao Gao [mailto:chao@intel.com]
> > >> Sent: 06 December 2017 07:50
> > >> To: xen-de...@lists.xen.org
> > >> Cc: Chao Gao ; Paul Durrant
> > >> ; Tim (Xen.org) ; Stefano
> > Stabellini
> > >> ; Konrad Rzeszutek Wilk
> > >> ; Jan Beulich ; George
> > >> Dunlap ; Andrew Cooper
> > >> ; Wei Liu ; Ian
> > Jackson
> > >> 
> > >> Subject: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> > >> pages
> > >>
> > >> One 4K-byte page at most contains 128 'ioreq_t'. In order to remove the
> > vcpu
> > >> number constraint imposed by one IOREQ page, bump the number of
> > IOREQ
> > >> page to
> > >> 4 pages. With this patch, multiple pages can be used as IOREQ page.
> > >>
> > >> Basically, this patch extends 'ioreq' field in struct hvm_ioreq_server to
> an
> > >> array. All accesses to 'ioreq' field such as 's->ioreq' are replaced with
> > >> FOR_EACH_IOREQ_PAGE macro.
> > >>
> > >> In order to access an IOREQ page, QEMU should get the gmfn and map
> > this
> > >> gmfn
> > >> to its virtual address space.
> > >
> > >No. There's no need to extend the 'legacy' mechanism of using magic
> page
> > gfns. You should only handle the case where the mfns are allocated on
> > demand (see the call to hvm_ioreq_server_alloc_pages() in
> > hvm_get_ioreq_server_frame()). The number of guest vcpus is known at
> > this point so the correct number of pages can be allocated. If the creator 
> > of
> > the ioreq server attempts to use the legacy hvm_get_ioreq_server_info()
> > and the guest has >128 vcpus then the call should fail.
> >
> > Great suggestion. I will introduce a new dmop, a variant of
> > hvm_get_ioreq_server_frame() for creator to get an array of gfns and the
> > size of array. And the legacy interface will report an error if more
> > than one IOREQ PAGES are needed.
> 
> You don't need a new dmop for mapping I think. The mem op to map ioreq
> server frames should work. All you should need to do is update
> hvm_get_ioreq_server_frame() to deal with an index > 1, and provide some
> means for the ioreq server creator to convert the number of guest vcpus into
> the correct number of pages to map. (That might need a new dm op).

I realise after saying this that an emulator already knows the size of the 
ioreq structure and so can easily calculate the correct number of pages to map, 
given the number of guest vcpus.
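
A sketch of that calculation (illustrative only; the 32-byte ioreq_t size is
simply the figure implied by "128 per 4K page" earlier in the thread):

#include <stdio.h>

#define PAGE_SIZE           4096
#define IOREQ_T_SIZE        32    /* 4096 / 128, as quoted in the thread */
#define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

/* Number of synchronous ioreq pages an emulator needs to map for a guest. */
static unsigned int nr_ioreq_pages(unsigned int nr_vcpus)
{
    return DIV_ROUND_UP(nr_vcpus * IOREQ_T_SIZE, PAGE_SIZE);
}

int main(void)
{
    printf("128 vcpus -> %u page(s), 288 vcpus -> %u page(s)\n",
           nr_ioreq_pages(128), nr_ioreq_pages(288));
    return 0;
}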

  Paul

> 
>   Paul
> 
> >
> > Thanks
> > Chao
> 

Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-06 Thread Paul Durrant
> -Original Message-
> From: Chao Gao [mailto:chao@intel.com]
> Sent: 06 December 2017 09:02
> To: Paul Durrant 
> Cc: xen-de...@lists.xen.org; Tim (Xen.org) ; Stefano
> Stabellini ; Konrad Rzeszutek Wilk
> ; Jan Beulich ; George
> Dunlap ; Andrew Cooper
> ; Wei Liu ; Ian Jackson
> 
> Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> pages
> 
> On Wed, Dec 06, 2017 at 03:04:11PM +, Paul Durrant wrote:
> >> -Original Message-
> >> From: Chao Gao [mailto:chao@intel.com]
> >> Sent: 06 December 2017 07:50
> >> To: xen-de...@lists.xen.org
> >> Cc: Chao Gao ; Paul Durrant
> >> ; Tim (Xen.org) ; Stefano
> Stabellini
> >> ; Konrad Rzeszutek Wilk
> >> ; Jan Beulich ; George
> >> Dunlap ; Andrew Cooper
> >> ; Wei Liu ; Ian
> Jackson
> >> 
> >> Subject: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> >> pages
> >>
> >> One 4K-byte page at most contains 128 'ioreq_t'. In order to remove the
> vcpu
> >> number constraint imposed by one IOREQ page, bump the number of
> IOREQ
> >> page to
> >> 4 pages. With this patch, multiple pages can be used as IOREQ page.
> >>
> >> Basically, this patch extends 'ioreq' field in struct hvm_ioreq_server to 
> >> an
> >> array. All accesses to 'ioreq' field such as 's->ioreq' are replaced with
> >> FOR_EACH_IOREQ_PAGE macro.
> >>
> >> In order to access an IOREQ page, QEMU should get the gmfn and map
> this
> >> gmfn
> >> to its virtual address space.
> >
> >No. There's no need to extend the 'legacy' mechanism of using magic page
> gfns. You should only handle the case where the mfns are allocated on
> demand (see the call to hvm_ioreq_server_alloc_pages() in
> hvm_get_ioreq_server_frame()). The number of guest vcpus is known at
> this point so the correct number of pages can be allocated. If the creator of
> the ioreq server attempts to use the legacy hvm_get_ioreq_server_info()
> and the guest has >128 vcpus then the call should fail.
> 
> Great suggestion. I will introduce a new dmop, a variant of
> hvm_get_ioreq_server_frame() for creator to get an array of gfns and the
> size of array. And the legacy interface will report an error if more
> than one IOREQ PAGES are needed.

You don't need a new dmop for mapping I think. The mem op to map ioreq server 
frames should work. All you should need to do is update 
hvm_get_ioreq_server_frame() to deal with an index > 1, and provide some means 
for the ioreq server creator to convert the number of guest vcpus into the 
correct number of pages to map. (That might need a new dm op).

  Paul

> 
> Thanks
> Chao


Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-06 Thread Chao Gao
On Wed, Dec 06, 2017 at 03:04:11PM +, Paul Durrant wrote:
>> -Original Message-
>> From: Chao Gao [mailto:chao@intel.com]
>> Sent: 06 December 2017 07:50
>> To: xen-de...@lists.xen.org
>> Cc: Chao Gao ; Paul Durrant
>> ; Tim (Xen.org) ; Stefano Stabellini
>> ; Konrad Rzeszutek Wilk
>> ; Jan Beulich ; George
>> Dunlap ; Andrew Cooper
>> ; Wei Liu ; Ian Jackson
>> 
>> Subject: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
>> pages
>> 
>> One 4K-byte page at most contains 128 'ioreq_t'. In order to remove the vcpu
>> number constraint imposed by one IOREQ page, bump the number of IOREQ
>> page to
>> 4 pages. With this patch, multiple pages can be used as IOREQ page.
>> 
>> Basically, this patch extends 'ioreq' field in struct hvm_ioreq_server to an
>> array. All accesses to 'ioreq' field such as 's->ioreq' are replaced with
>> FOR_EACH_IOREQ_PAGE macro.
>> 
>> In order to access an IOREQ page, QEMU should get the gmfn and map this
>> gmfn
>> to its virtual address space.
>
>No. There's no need to extend the 'legacy' mechanism of using magic page gfns. 
>You should only handle the case where the mfns are allocated on demand (see 
>the call to hvm_ioreq_server_alloc_pages() in hvm_get_ioreq_server_frame()). 
>The number of guest vcpus is known at this point so the correct number of 
>pages can be allocated. If the creator of the ioreq server attempts to use the 
>legacy hvm_get_ioreq_server_info() and the guest has >128 vcpus then the call 
>should fail.

Great suggestion. I will introduce a new dmop, a variant of
hvm_get_ioreq_server_frame(), for the creator to get an array of gfns and the
size of that array. The legacy interface will report an error if more than
one IOREQ page is needed.

Thanks
Chao


Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages

2017-12-06 Thread Paul Durrant
> -Original Message-
> From: Chao Gao [mailto:chao@intel.com]
> Sent: 06 December 2017 07:50
> To: xen-de...@lists.xen.org
> Cc: Chao Gao ; Paul Durrant
> ; Tim (Xen.org) ; Stefano Stabellini
> ; Konrad Rzeszutek Wilk
> ; Jan Beulich ; George
> Dunlap ; Andrew Cooper
> ; Wei Liu ; Ian Jackson
> 
> Subject: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> pages
> 
> One 4K-byte page at most contains 128 'ioreq_t'. In order to remove the vcpu
> number constraint imposed by one IOREQ page, bump the number of IOREQ
> page to
> 4 pages. With this patch, multiple pages can be used as IOREQ page.
> 
> Basically, this patch extends 'ioreq' field in struct hvm_ioreq_server to an
> array. All accesses to 'ioreq' field such as 's->ioreq' are replaced with
> FOR_EACH_IOREQ_PAGE macro.
> 
> In order to access an IOREQ page, QEMU should get the gmfn and map this
> gmfn
> to its virtual address space.

No. There's no need to extend the 'legacy' mechanism of using magic page gfns. 
You should only handle the case where the mfns are allocated on demand (see the 
call to hvm_ioreq_server_alloc_pages() in hvm_get_ioreq_server_frame()). The 
number of guest vcpus is known at this point so the correct number of pages can 
be allocated. If the creator of the ioreq server attempts to use the legacy 
hvm_get_ioreq_server_info() and the guest has >128 vcpus then the call should 
fail.
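
As a rough sketch of that last point (hypothetical names, not the actual Xen
code): the legacy path would simply refuse guests that need more than one
synchronous ioreq page, while the on-demand path allocates as many pages as the
vcpu count requires.

#include <errno.h>
#include <stdio.h>

#define IOREQ_NUM_PER_PAGE 128    /* 4K page / 32-byte ioreq_t */

/* Hypothetical stand-in for the check in the legacy info call. */
static int legacy_info_check(unsigned int nr_vcpus)
{
    if ( nr_vcpus > IOREQ_NUM_PER_PAGE )
        return -ENOSPC;           /* too many vcpus for a single magic page */
    return 0;
}

int main(void)
{
    printf("128 vcpus: %d, 288 vcpus: %d\n",
           legacy_info_check(128), legacy_info_check(288));
    return 0;
}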

  Paul

> Now that there are several pages, the interface to get the gmfn is unchanged,
> to stay compatible with existing QEMU. But newer QEMU needs to query the gmfn
> repeatedly until the same gmfn is returned again. To implement this, an
> internal index is introduced: when QEMU queries the gmfn, the gmfn of the
> IOREQ page referenced by the index is returned. After each query the index
> increases by 1 and wraps around when it overflows.
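
A self-contained sketch of the "rotating index" scheme described in the quoted
paragraph above (illustrative names and gfn values only, not the patch's actual
code): each query returns the gfn at the current index and then advances it,
wrapping at the number of pages, so the emulator keeps querying until it sees
the first gfn again.

#include <stdio.h>

#define MAX_IOREQ_PAGE 4

struct ioreq_pages_sketch {
    unsigned long gfn[MAX_IOREQ_PAGE];
    unsigned int nr_pages;
    unsigned int idx;                       /* internal rotating index */
};

static unsigned long query_ioreq_gfn(struct ioreq_pages_sketch *s)
{
    unsigned long gfn = s->gfn[s->idx];

    s->idx = (s->idx + 1) % s->nr_pages;    /* advance and wrap */
    return gfn;
}

int main(void)
{
    struct ioreq_pages_sketch s = {
        .gfn = { 0xfeff0, 0xfeff1, 0xfeff2 }, .nr_pages = 3,
    };
    unsigned long first = query_ioreq_gfn(&s), gfn;

    /* Emulator side: keep querying until the first gfn comes back around. */
    while ( (gfn = query_ioreq_gfn(&s)) != first )
        printf("additional ioreq page gfn %#lx\n", gfn);
    return 0;
}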
> 
> Signed-off-by: Chao Gao 
> ---
> v4:
>  - new
> ---
>  tools/libxc/include/xc_dom.h |   2 +-
>  tools/libxc/xc_dom_x86.c |   6 +-
>  xen/arch/x86/hvm/hvm.c   |   1 +
>  xen/arch/x86/hvm/ioreq.c | 116 ++-
>  xen/include/asm-x86/hvm/domain.h |   6 +-
>  xen/include/public/hvm/ioreq.h   |   2 +
>  xen/include/public/hvm/params.h  |   8 ++-
>  7 files changed, 110 insertions(+), 31 deletions(-)
> 
> diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
> index 45c9d67..2f8b412 100644
> --- a/tools/libxc/include/xc_dom.h
> +++ b/tools/libxc/include/xc_dom.h
> @@ -20,7 +20,7 @@
>  #include 
> 
>  #define INVALID_PFN ((xen_pfn_t)-1)
> -#define X86_HVM_NR_SPECIAL_PAGES    8
> +#define X86_HVM_NR_SPECIAL_PAGES    11
>  #define X86_HVM_END_SPECIAL_REGION  0xff000u
> 
>  /* --- typedefs and structs  */
> diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> index bff68a0..b316ebc 100644
> --- a/tools/libxc/xc_dom_x86.c
> +++ b/tools/libxc/xc_dom_x86.c
> @@ -32,6 +32,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
> 
> @@ -57,8 +58,8 @@
>  #define SPECIALPAGE_BUFIOREQ 3
>  #define SPECIALPAGE_XENSTORE 4
>  #define SPECIALPAGE_IOREQ    5
> -#define SPECIALPAGE_IDENT_PT 6
> -#define SPECIALPAGE_CONSOLE  7
> +#define SPECIALPAGE_IDENT_PT (5 + MAX_IOREQ_PAGE)
> +#define SPECIALPAGE_CONSOLE  (SPECIALPAGE_IDENT_PT + 1)
>  #define special_pfn(x) \
>  (X86_HVM_END_SPECIAL_REGION - X86_HVM_NR_SPECIAL_PAGES + (x))
> 
> @@ -612,6 +613,7 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
> X86_HVM_NR_SPECIAL_PAGES) )
>  goto error_out;
> 
> +    xc_hvm_param_set(xch, domid, HVM_PARAM_IOREQ_PAGES, MAX_IOREQ_PAGE);
>  xc_hvm_param_set(xch, domid, HVM_PARAM_STORE_PFN,
>   special_pfn(SPECIALPAGE_XENSTORE));
>  xc_hvm_param_set(xch, domid, HVM_PARAM_BUFIOREQ_PFN,
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 5d06767..0b3bd04 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4077,6 +4077,7 @@ static int hvm_allow_set_param(struct domain *d,
>  case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
>  case HVM_PARAM_ALTP2M:
>  case HVM_PARAM_MCA_CAP:
> +case HVM_PARAM_IOREQ_PAGES:
>  if ( value != 0 && a->value != value )
>  rc = -EEXIST;
>  break;
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index a879f20..0a36001 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -64,14 +64,24 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
>  continue; \
>  else
> 
> +/* Iterate over all ioreq pages */
> +#define FOR_EACH_IOREQ_PAGE(s, i, iorp) \
> +for ( (i) =