Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-05 Thread Paul Durrant
> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 05 January 2018 10:29
> To: Paul Durrant <paul.durr...@citrix.com>
> Cc: Julien Grall <julien.gr...@arm.com>; Andrew Cooper
> <andrew.coop...@citrix.com>; George Dunlap
> <george.dun...@citrix.com>; Ian Jackson <ian.jack...@citrix.com>; Wei Liu
> <wei.l...@citrix.com>; Stefano Stabellini <sstabell...@kernel.org>; xen-
> de...@lists.xenproject.org; Tim (Xen.org) <t...@xen.org>
> Subject: RE: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new
> mappable resource type...
> 
> >>> On 05.01.18 at 11:16, <paul.durr...@citrix.com> wrote:
> >> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On
> Behalf
> >> Of Paul Durrant
> >> Sent: 03 January 2018 16:48
> >> Ok, thanks. If I change back to having the pages owned by the tools domain
> >> then I guess this will all be avoided anyway.
> >
> > I've run into a problem with this, but it may be easily soluble...
> >
> > If I pass back the domid of the resource page owner and that owner is the
> > tools domain, then when the tools domain attempts the mmu_update hypercall
> > it fails because it has passed its own domid to mmu_update. The failure is
> > caused by a check in get_pg_owner() which errors out if the passed-in domid
> > == curr->domain_id but, strangely, not if domid == DOMID_SELF. Any idea why
> > this check is there? To me it looks like it should be safe to specify
> > curr->domain_id and have get_pg_owner() simply behave as if DOMID_SELF was
> > passed.
> 
> A little while ago there was some discussion on this general topic (sadly
> I don't recall the context), and iirc it was in particular Andrew (or
> George?) who thought it should be the other way around: If
> DOMID_SELF can be used, the actual domain ID should not be
> accepted (which iirc is currently the case in some places, but not
> in others). But ...
> 
> > The alternative would be to have the acquire_resource hypercall do the
> > check and pass back DOMID_SELF if the ioreq server dm domain happens to
> > match currd->domain_id, but that seems a bit icky.
> 
> ... this wasn't the plan anyway. Instead we had talked of the
> hypercall returning just a boolean indicator, to distinguish
> self-owned pages from target-domain-owned ones. The
> caller is supposed to know the domain ID of the target domain,
> after all.

Ah, ok. If it's only necessary to distinguish between self-owned and 
target-owned then that will be fine. My current test series just makes the 
domid an IN/OUT parameter and re-writes it if necessary. I'll switch to using a 
flag to avoid the issue as you suggest.
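
For illustration, the flag-based interface could look something like this
(a sketch only; field and flag names are illustrative, not quoted from any
committed patch):

/* Sketch of the suggested change: instead of passing back a domid, a
 * single OUT flag tells the caller whether the acquired frames are
 * owned by itself (map with DOMID_SELF) or by the target domain. */
struct xen_mem_acquire_resource {
    domid_t domid;       /* IN - domain whose resource is to be mapped */
    uint16_t type;       /* IN - e.g. XENMEM_resource_ioreq_server */
    uint32_t id;         /* IN - type-specific resource identifier */
    uint32_t nr_frames;  /* IN - number of frames to map */
    uint32_t flags;      /* OUT - see below */
#define XENMEM_rsrc_acq_caller_owned (1u << 0)
    uint64_t frame;      /* IN - base frame index within the resource */
};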

  Paul

> 
> Jan



Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-05 Thread Jan Beulich
>>> On 05.01.18 at 11:16,  wrote:
>> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf
>> Of Paul Durrant
>> Sent: 03 January 2018 16:48
>> Ok, thanks. If I change back to having the pages owned by the tools domain
>> then I guess this will all be avoided anyway.
> 
> I've run into a problem with this, but it may be easily soluble...
> 
> If I pass back the domid of the resource page owner and that owner is the 
> tools domain, then when the tools domain attempts the mmu_update hypercall it 
> fails because it has passed its own domid to mmu_update. The failure is 
> caused by a check in get_pg_owner() which errors out if the passed-in domid 
> == curr->domain_id but, strangely, not if domid == DOMID_SELF. Any idea why 
> this check is there? To me it looks like it should be safe to specify 
> curr->domain_id and have get_pg_owner() simply behave as if DOMID_SELF was 
> passed.
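
(For reference, the check under discussion behaves roughly as follows; this
is a paraphrase of get_pg_owner(), not the exact source:)

static struct domain *get_pg_owner(domid_t domid)
{
    struct domain *curr = current->domain;

    if ( likely(domid == DOMID_SELF) )
        return rcu_lock_current_domain();      /* accepted */

    /* The check Paul refers to: the caller's own domid is rejected
     * even though DOMID_SELF would have been accepted. */
    if ( unlikely(domid == curr->domain_id) )
        return NULL;

    /* Otherwise look up and lock the (genuinely) foreign domain. */
    return rcu_lock_domain_by_id(domid);
}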

A little while ago there was some discussion on this general topic (sadly
I don't recall the context), and iirc it was in particular Andrew (or
George?) who thought it should be the other way around: If
DOMID_SELF can be used, the actual domain ID should not be
accepted (which iirc is currently the case in some places, but not
in others). But ...

> The alternative would be to have the acquire_resource hypercall do the check 
> and pass back DOMID_SELF if the ioreq server dm domain happens to match 
> currd->domain_id, but that seems a bit icky.

... this wasn't the plan anyway. Instead we had talked of the
hypercall returning just a boolean indicator, to distinguish
self-owned pages from target-domain-owned ones. The
caller is supposed to know the domain ID of the target domain,
after all.

Jan



Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-05 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf
> Of Paul Durrant
> Sent: 03 January 2018 16:48
> To: 'Jan Beulich' <jbeul...@suse.com>
> Cc: Stefano Stabellini <sstabell...@kernel.org>; Wei Liu
> <wei.l...@citrix.com>; Andrew Cooper <andrew.coop...@citrix.com>; Tim
> (Xen.org) <t...@xen.org>; George Dunlap <george.dun...@citrix.com>;
> Julien Grall <julien.gr...@arm.com>; Ian Jackson <ian.jack...@citrix.com>;
> xen-devel@lists.xenproject.org
> Subject: Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new
> mappable resource type...
> 
> > -Original Message-
> > From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On
> Behalf
> > Of Jan Beulich
> > Sent: 03 January 2018 16:41
> > To: Paul Durrant <paul.durr...@citrix.com>
> > Cc: Stefano Stabellini <sstabell...@kernel.org>; Wei Liu
> > <wei.l...@citrix.com>; Andrew Cooper <andrew.coop...@citrix.com>; Tim
> > (Xen.org) <t...@xen.org>; George Dunlap <george.dun...@citrix.com>;
> > Julien Grall <julien.gr...@arm.com>; xen-devel@lists.xenproject.org; Ian
> > Jackson <ian.jack...@citrix.com>
> > Subject: Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new
> > mappable resource type...
> >
> > >>> On 03.01.18 at 17:06, <paul.durr...@citrix.com> wrote:
> > >>  -Original Message-
> > >> From: Jan Beulich [mailto:jbeul...@suse.com]
> > >> Sent: 03 January 2018 15:48
> > >> To: Paul Durrant <paul.durr...@citrix.com>
> > >> Cc: Julien Grall <julien.gr...@arm.com>; Andrew Cooper
> > >> <andrew.coop...@citrix.com>; Wei Liu <wei.l...@citrix.com>; George
> > >> Dunlap <george.dun...@citrix.com>; Ian Jackson
> > <ian.jack...@citrix.com>;
> > >> Stefano Stabellini <sstabell...@kernel.org>; xen-
> > de...@lists.xenproject.org;
> > >> Konrad Rzeszutek Wilk <konrad.w...@oracle.com>; Tim (Xen.org)
> > >> <t...@xen.org>
> > >> Subject: Re: [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable
> > >> resource type...
> > >>
> > >> >>> On 03.01.18 at 13:19, <paul.durr...@citrix.com> wrote:
> > >> > +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > >> > +{
> > >> > +    struct domain *d = s->domain;
> > >> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > >> > +
> > >> > +    if ( !iorp->page )
> > >> > +        return;
> > >> > +
> > >> > +    page_list_add_tail(iorp->page, &d->arch.hvm_domain.ioreq_server.pages);
> > >>
> > >> Afaict s->domain is the guest, not the domain containing the
> > >> emulator. Hence this new model of freeing the pages is safe only
> > >> when the emulator domain is dead by the time the guest is being
> > >> cleaned up.
> > >
> > > From the investigations done w.r.t. the grant table pages I don't think
> > > this is the case. The emulating domain will have references on the pages
> > > and this keeps the target domain in existence, only completing domain
> > > destruction when the references are finally dropped. I've tested this by
> > > leaving an emulator running whilst I 'xl destroy' the domain; the domain
> > > remains as a zombie until the emulator terminates.
> >
> > Oh, right, I forgot about that aspect.
> >
> > >> What is additionally confusing me is the page ownership: Wasn't
> > >> the (original) intention to make the pages owned by the emulator
> > >> domain rather than the guest? I seem to recall you referring to
> > >> restrictions in do_mmu_update(), but a domain should always be
> > >> able to map pages it owns, shouldn't it?
> > >
> > > I'm sure we had this discussion before. I am trying to make resource
> > > mapping as uniform as possible so, like the grant table pages, the ioreq
> > > server pages are assigned to the target domain. Otherwise the domain
> > > trying to map resources has to know which actual domain they are assigned
> > > to, rather than the domain they relate to... which is pretty ugly.
> >
> > Didn't I suggest a slight change to the interface to actually make
> > this not as ugly?

Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-04 Thread Jan Beulich
>>> On 04.01.18 at 11:49,  wrote:
>>  -Original Message-
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: 04 January 2018 10:47
>> To: Paul Durrant 
>> Cc: Julien Grall ; Andrew Cooper
>> ; George Dunlap
>> ; Ian Jackson ; Wei Liu
>> ; Stefano Stabellini ; xen-
>> de...@lists.xenproject.org; Konrad Rzeszutek Wilk
>> ; Tim (Xen.org) 
>> Subject: RE: [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable
>> resource type...
>> 
>> >>> On 03.01.18 at 17:06,  wrote:
>> >>  -Original Message-
>> >> From: Jan Beulich [mailto:jbeul...@suse.com]
>> >> Sent: 03 January 2018 15:48
>> >> To: Paul Durrant 
>> >> Cc: Julien Grall ; Andrew Cooper
>> >> ; Wei Liu ; George
>> >> Dunlap ; Ian Jackson
>> ;
>> >> Stefano Stabellini ; xen-
>> de...@lists.xenproject.org;
>> >> Konrad Rzeszutek Wilk ; Tim (Xen.org)
>> >> 
>> >> Subject: Re: [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable
>> >> resource type...
>> >>
>> >> >>> On 03.01.18 at 13:19,  wrote:
>> >> > +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
>> >> > +{
>> >> > +    struct domain *d = s->domain;
>> >> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
>> >> > +
>> >> > +    if ( !iorp->page )
>> >> > +        return;
>> >> > +
>> >> > +    page_list_add_tail(iorp->page, &d->arch.hvm_domain.ioreq_server.pages);
>> >>
>> >> Afaict s->domain is the guest, not the domain containing the
>> >> emulator. Hence this new model of freeing the pages is safe only
>> >> when the emulator domain is dead by the time the guest is being
>> >> cleaned up.
>> >
>> > From the investigations done w.r.t. the grant table pages I don't think
>> > this is the case. The emulating domain will have references on the pages
>> > and this keeps the target domain in existence, only completing domain
>> > destruction when the references are finally dropped. I've tested this by
>> > leaving an emulator running whilst I 'xl destroy' the domain; the domain
>> > remains as a zombie until the emulator terminates.
>> 
>> Actually, after further thinking about this, it looks as if such behavior
>> was a problem by itself if the dm domain is unprivileged: It shouldn't
>> be allowed to prevent the guest being fully cleaned up, or else it
>> would be both a meaningful memory leak as well as a domain ID one,
>> eventually leading to the inability to create new domains.
> 
> The same applies to PV backend domains grant-mapping guest pages.

Sure. It is my understanding that this isn't an active problem right
now because such helper domains are being destroyed together
with their client ones; I hope I'm not wrong with this.

Jan



Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-04 Thread Paul Durrant
> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 04 January 2018 10:47
> To: Paul Durrant 
> Cc: Julien Grall ; Andrew Cooper
> ; George Dunlap
> ; Ian Jackson ; Wei Liu
> ; Stefano Stabellini ; xen-
> de...@lists.xenproject.org; Konrad Rzeszutek Wilk
> ; Tim (Xen.org) 
> Subject: RE: [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable
> resource type...
> 
> >>> On 03.01.18 at 17:06,  wrote:
> >>  -Original Message-
> >> From: Jan Beulich [mailto:jbeul...@suse.com]
> >> Sent: 03 January 2018 15:48
> >> To: Paul Durrant 
> >> Cc: Julien Grall ; Andrew Cooper
> >> ; Wei Liu ; George
> >> Dunlap ; Ian Jackson
> ;
> >> Stefano Stabellini ; xen-
> de...@lists.xenproject.org;
> >> Konrad Rzeszutek Wilk ; Tim (Xen.org)
> >> 
> >> Subject: Re: [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable
> >> resource type...
> >>
> >> >>> On 03.01.18 at 13:19,  wrote:
> >> > +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> >> > +{
> >> > +    struct domain *d = s->domain;
> >> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> >> > +
> >> > +    if ( !iorp->page )
> >> > +        return;
> >> > +
> >> > +    page_list_add_tail(iorp->page, &d->arch.hvm_domain.ioreq_server.pages);
> >>
> >> Afaict s->domain is the guest, not the domain containing the
> >> emulator. Hence this new model of freeing the pages is safe only
> >> when the emulator domain is dead by the time the guest is being
> >> cleaned up.
> >
> > From the investigations done w.r.t. the grant table pages I don't think
> > this is the case. The emulating domain will have references on the pages
> > and this keeps the target domain in existence, only completing domain
> > destruction when the references are finally dropped. I've tested this by
> > leaving an emulator running whilst I 'xl destroy' the domain; the domain
> > remains as a zombie until the emulator terminates.
> 
> Actually, after further thinking about this, it looks as if such behavior
> was a problem by itself if the dm domain is unprivileged: It shouldn't
> be allowed to prevent the guest being fully cleaned up, or else it
> would be both a meaningful memory leak as well as a domain ID one,
> eventually leading to the inability to create new domains.
> 

The same applies to PV backend domains grant-mapping guest pages.

  Paul

> Jan



Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-04 Thread Jan Beulich
>>> On 03.01.18 at 17:06,  wrote:
>>  -Original Message-
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: 03 January 2018 15:48
>> To: Paul Durrant 
>> Cc: Julien Grall ; Andrew Cooper
>> ; Wei Liu ; George
>> Dunlap ; Ian Jackson ;
>> Stefano Stabellini ; xen-devel@lists.xenproject.org;
>> Konrad Rzeszutek Wilk ; Tim (Xen.org)
>> 
>> Subject: Re: [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable
>> resource type...
>> 
>> >>> On 03.01.18 at 13:19,  wrote:
>> > +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
>> > +{
>> > +    struct domain *d = s->domain;
>> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
>> > +
>> > +    if ( !iorp->page )
>> > +        return;
>> > +
>> > +    page_list_add_tail(iorp->page, &d->arch.hvm_domain.ioreq_server.pages);
>> 
>> Afaict s->domain is the guest, not the domain containing the
>> emulator. Hence this new model of freeing the pages is safe only
>> when the emulator domain is dead by the time the guest is being
>> cleaned up.
> 
> From the investigations done w.r.t. the grant table pages I don't think this 
> is the case. The emulating domain will have references on the pages and this 
> keeps the target domain in existence, only completing domain destruction when 
> the references are finally dropped. I've tested this by leaving an emulator 
> running whilst I 'xl destroy' the domain; the domain remains as a zombie 
> until the emulator terminates.

Actually, after further thinking about this, it looks as if such behavior
was a problem by itself if the dm domain is unprivileged: It shouldn't
be allowed to prevent the guest being fully cleaned up, or else it
would be both a meaningful memory leak as well as a domain ID one,
eventually leading to the inability to create new domains.

Jan



Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-03 Thread Paul Durrant
> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 03 January 2018 17:05
> To: Paul Durrant <paul.durr...@citrix.com>
> Cc: Julien Grall <julien.gr...@arm.com>; Andrew Cooper
> <andrew.coop...@citrix.com>; George Dunlap
> <george.dun...@citrix.com>; Ian Jackson <ian.jack...@citrix.com>; Wei Liu
> <wei.l...@citrix.com>; Stefano Stabellini <sstabell...@kernel.org>; xen-
> de...@lists.xenproject.org; Tim (Xen.org) <t...@xen.org>
> Subject: RE: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new
> mappable resource type...
> 
> >>> On 03.01.18 at 17:48, <paul.durr...@citrix.com> wrote:
> >>  -Original Message-
> >> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On
> Behalf
> >> Of Jan Beulich
> >> >>> On 03.01.18 at 17:06, <paul.durr...@citrix.com> wrote:
> >> >> From: Jan Beulich [mailto:jbeul...@suse.com]
> >> >> Sent: 03 January 2018 15:48
> >> >> >>> On 03.01.18 at 13:19, <paul.durr...@citrix.com> wrote:
> >> >> What is additionally confusing me is the page ownership: Wasn't
> >> >> the (original) intention to make the pages owned by the emulator
> >> >> domain rather than the guest? I seem to recall you referring to
> >> >> restrictions in do_mmu_update(), but a domain should always be
> >> >> able to map pages it owns, shouldn't it?
> >> >
> >> > I'm sure we had this discussion before. I am trying to make resource
> >> > mapping as uniform as possible so, like the grant table pages, the ioreq
> >> > server pages are assigned to the target domain. Otherwise the domain
> >> > trying to map resources has to know which actual domain they are
> >> > assigned to, rather than the domain they relate to... which is pretty ugly.
> >>
> >> Didn't I suggest a slight change to the interface to actually make
> >> this not as ugly?
> >
> > Yes, you did but I didn't really want to go that way unless I absolutely
> > had to. If you'd really prefer things that way then I'll re-work the
> > hypercall to allow the domain owning the resource pages to be passed back.
> > Maybe it will ultimately end up neater.
> 
> A 3rd opinion wouldn't hurt before you invest much time.
> 

Andrew,

  Do you have any particular preferences on whether ioreq server pages are 
assigned to the tools domain or the target domain (the former requiring a tweak 
to the hypercall to pass back the owner of the resource pages)?

  Paul

> Jan



Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-03 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf
> Of Jan Beulich
> Sent: 03 January 2018 16:41
> To: Paul Durrant <paul.durr...@citrix.com>
> Cc: Stefano Stabellini <sstabell...@kernel.org>; Wei Liu
> <wei.l...@citrix.com>; Andrew Cooper <andrew.coop...@citrix.com>; Tim
> (Xen.org) <t...@xen.org>; George Dunlap <george.dun...@citrix.com>;
> Julien Grall <julien.gr...@arm.com>; xen-devel@lists.xenproject.org; Ian
> Jackson <ian.jack...@citrix.com>
> Subject: Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new
> mappable resource type...
> 
> >>> On 03.01.18 at 17:06, <paul.durr...@citrix.com> wrote:
> >>  -Original Message-
> >> From: Jan Beulich [mailto:jbeul...@suse.com]
> >> Sent: 03 January 2018 15:48
> >> To: Paul Durrant <paul.durr...@citrix.com>
> >> Cc: Julien Grall <julien.gr...@arm.com>; Andrew Cooper
> >> <andrew.coop...@citrix.com>; Wei Liu <wei.l...@citrix.com>; George
> >> Dunlap <george.dun...@citrix.com>; Ian Jackson
> <ian.jack...@citrix.com>;
> >> Stefano Stabellini <sstabell...@kernel.org>; xen-
> de...@lists.xenproject.org;
> >> Konrad Rzeszutek Wilk <konrad.w...@oracle.com>; Tim (Xen.org)
> >> <t...@xen.org>
> >> Subject: Re: [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable
> >> resource type...
> >>
> >> >>> On 03.01.18 at 13:19, <paul.durr...@citrix.com> wrote:
> >> > +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> >> > +{
> >> > +    struct domain *d = s->domain;
> >> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> >> > +
> >> > +    if ( !iorp->page )
> >> > +        return;
> >> > +
> >> > +    page_list_add_tail(iorp->page, &d->arch.hvm_domain.ioreq_server.pages);
> >>
> >> Afaict s->domain is the guest, not the domain containing the
> >> emulator. Hence this new model of freeing the pages is safe only
> >> when the emulator domain is dead by the time the guest is being
> >> cleaned up.
> >
> > From the investigations done w.r.t. the grant table pages I don't think
> > this is the case. The emulating domain will have references on the pages
> > and this keeps the target domain in existence, only completing domain
> > destruction when the references are finally dropped. I've tested this by
> > leaving an emulator running whilst I 'xl destroy' the domain; the domain
> > remains as a zombie until the emulator terminates.
> 
> Oh, right, I forgot about that aspect.
> 
> >> What is additionally confusing me is the page ownership: Wasn't
> >> the (original) intention to make the pages owned by the emulator
> >> domain rather than the guest? I seem to recall you referring to
> >> restrictions in do_mmu_update(), but a domain should always be
> >> able to map pages it owns, shouldn't it?
> >
> > I'm sure we had this discussion before. I am trying to make resource
> > mapping as uniform as possible so, like the grant table pages, the ioreq
> > server pages are assigned to the target domain. Otherwise the domain
> > trying to map resources has to know which actual domain they are assigned
> > to, rather than the domain they relate to... which is pretty ugly.
> 
> Didn't I suggest a slight change to the interface to actually make
> this not as ugly?

Yes, you did but I didn't really want to go that way unless I absolutely had 
to. If you'd really prefer things that way then I'll re-work the hypercall to 
allow the domain owning the resource pages to be passed back. Maybe it will 
ultimately end up neater.

> 
> >> Furthermore you continue to use Xen heap pages rather than
> >> domain heap ones.
> >
> > Yes, this seems reasonable since Xen will always need mappings of the
> > pages and the aforementioned reference counting only works for Xen heap
> > pages AIUI.
> 
> share_xen_page_with_guest() makes any page a Xen heap one.

Oh, that's somewhat counter-intuitive.

> See vmx_alloc_vlapic_mapping() for an example.
> 

Ok, thanks. If I change back to having the pages owned by the tools domain then I 
guess this will all be avoided anyway.

  Paul

> Jan
> 
> 

Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-03 Thread Jan Beulich
>>> On 03.01.18 at 17:06,  wrote:
>>  -Original Message-
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: 03 January 2018 15:48
>> To: Paul Durrant 
>> Cc: Julien Grall ; Andrew Cooper
>> ; Wei Liu ; George
>> Dunlap ; Ian Jackson ;
>> Stefano Stabellini ; xen-devel@lists.xenproject.org;
>> Konrad Rzeszutek Wilk ; Tim (Xen.org)
>> 
>> Subject: Re: [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable
>> resource type...
>> 
>> >>> On 03.01.18 at 13:19,  wrote:
>> > +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
>> > +{
>> > +    struct domain *d = s->domain;
>> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
>> > +
>> > +    if ( !iorp->page )
>> > +        return;
>> > +
>> > +    page_list_add_tail(iorp->page, &d->arch.hvm_domain.ioreq_server.pages);
>> 
>> Afaict s->domain is the guest, not the domain containing the
>> emulator. Hence this new model of freeing the pages is safe only
>> when the emulator domain is dead by the time the guest is being
>> cleaned up.
> 
> From the investigations done w.r.t. the grant table pages I don't think this 
> is the case. The emulating domain will have references on the pages and this 
> keeps the target domain in existence, only completing domain destruction when 
> the references are finally dropped. I've tested this by leaving an emulator 
> running whilst I 'xl destroy' the domain; the domain remains as a zombie 
> until the emulator terminates.

Oh, right, I forgot about that aspect.

>> What is additionally confusing me is the page ownership: Wasn't
>> the (original) intention to make the pages owned by the emulator
>> domain rather than the guest? I seem to recall you referring to
>> restrictions in do_mmu_update(), but a domain should always be
>> able to map pages it owns, shouldn't it?
> 
> I'm sure we had this discussion before. I am trying to make resource mapping 
> as uniform as possible so, like the grant table pages, the ioreq server pages 
> are assigned to the target domain. Otherwise the domain trying to map 
> resources has to know which actual domain they are assigned to, rather than the 
> domain they relate to... which is pretty ugly.

Didn't I suggest a slight change to the interface to actually make
this not as ugly?

>> Furthermore you continue to use Xen heap pages rather than
>> domain heap ones.
> 
> Yes, this seems reasonable since Xen will always need mappings of the pages 
> and the aforementioned reference counting only works for Xen heap pages AIUI.

share_xen_page_with_guest() makes any page a Xen heap one.
See vmx_alloc_vlapic_mapping() for an example.
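
The pattern in question, modelled loosely on vmx_alloc_vlapic_mapping()
(simplified; error handling and actual use of the page omitted):

static struct page_info *alloc_shared_page(struct domain *d)
{
    /* Allocate a domheap page with no initial owner... */
    struct page_info *pg = alloc_domheap_page(d, MEMF_no_owner);

    if ( !pg )
        return NULL;

    /* ... then share it with the guest: this assigns ownership to @d
     * and marks the page as a Xen heap page. */
    share_xen_page_with_guest(pg, d, XENSHARE_writable);

    return pg;
}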

Jan



Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-03 Thread Paul Durrant
> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 03 January 2018 15:48
> To: Paul Durrant 
> Cc: Julien Grall ; Andrew Cooper
> ; Wei Liu ; George
> Dunlap ; Ian Jackson ;
> Stefano Stabellini ; xen-devel@lists.xenproject.org;
> Konrad Rzeszutek Wilk ; Tim (Xen.org)
> 
> Subject: Re: [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable
> resource type...
> 
> >>> On 03.01.18 at 13:19,  wrote:
> > +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > +{
> > +    struct domain *d = s->domain;
> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > +
> > +    if ( !iorp->page )
> > +        return;
> > +
> > +    page_list_add_tail(iorp->page, &d->arch.hvm_domain.ioreq_server.pages);
> 
> Afaict s->domain is the guest, not the domain containing the
> emulator. Hence this new model of freeing the pages is safe only
> when the emulator domain is dead by the time the guest is being
> cleaned up.

From the investigations done w.r.t. the grant table pages I don't think this is 
the case. The emulating domain will have references on the pages and this keeps 
the target domain in existence, only completing domain destruction when the 
references are finally dropped. I've tested this by leaving an emulator running 
whilst I 'xl destroy' the domain; the domain remains as a zombie until the emulator 
terminates.

> I'm not only unaware of this being enforced anywhere,
> but also unconvinced this is a desirable restriction: Why shouldn't
> emulator domains be allowed to be long lived, servicing more than
> a single guest if so desired by the admin?
> 
> What is additionally confusing me is the page ownership: Wasn't
> the (original) intention to make the pages owned by the emulator
> domain rather than the guest? I seem to recall you referring to
> restrictions in do_mmu_update(), but a domain should always be
> able to map pages it owns, shouldn't it?
> 

I'm sure we had this discussion before. I am trying to make resource mapping as 
uniform as possible so, like the grant table pages, the ioreq server pages are 
assigned to the target domain. Otherwise the domain trying to map resources has 
know which actual domain they are assigned to, rather than the domain they 
relate to... which is pretty ugly.

> Furthermore you continue to use Xen heap pages rather than
> domain heap ones.
> 

Yes, this seems reasonable since Xen will always need mappings of the pages and 
the aforementioned reference counting only works for Xen heap pages AIUI.

> And finally I'm still missing the is_hvm_domain() check in
> hvm_get_ioreq_server_frame(). Nor is there a NULL check for
> the return value of get_ioreq_server(), as I notice only now.
> 

Dammit. I forgot about the is_hvm_domain() check over Christmas. I'll add those 
two.

  Paul

> Jan



Re: [Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-03 Thread Jan Beulich
>>> On 03.01.18 at 13:19,  wrote:
> +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +    struct domain *d = s->domain;
> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +
> +    if ( !iorp->page )
> +        return;
> +
> +    page_list_add_tail(iorp->page, &d->arch.hvm_domain.ioreq_server.pages);

Afaict s->domain is the guest, not the domain containing the
emulator. Hence this new model of freeing the pages is safe only
when the emulator domain is dead by the time the guest is being
cleaned up. I'm not only unaware of this being enforced anywhere,
but also unconvinced this is a desirable restriction: Why shouldn't
emulator domains be allowed to be long lived, servicing more than
a single guest if so desired by the admin?

What is additionally confusing me is the page ownership: Wasn't
the (original) intention to make the pages owned by the emulator
domain rather than the guest? I seem to recall you referring to
restrictions in do_mmu_update(), but a domain should always be
able to map pages it owns, shouldn't it?

Furthermore you continue to use Xen heap pages rather than
domain heap ones.

And finally I'm still missing the is_hvm_domain() check in
hvm_get_ioreq_server_frame(). Nor is there a NULL check for
the return value of get_ioreq_server(), as I notice only now.
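
(The two missing pieces would amount to guards along these lines at the top
of hvm_get_ioreq_server_frame(). This is a sketch of what the review asks
for, not the committed code; hvm_ioreq_server_get_frame() is a hypothetical
helper standing in for the body of the function:)

int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                               unsigned long idx, mfn_t *mfn)
{
    struct hvm_ioreq_server *s;
    int rc;

    /* ioreq servers only exist for HVM guests. */
    if ( !is_hvm_domain(d) )
        return -EINVAL;

    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);

    s = get_ioreq_server(d, id);

    rc = -ENOENT;
    if ( !s )  /* the missing NULL check */
        goto out;

    rc = hvm_ioreq_server_get_frame(s, idx, mfn); /* hypothetical helper */

 out:
    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);

    return rc;
}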

Jan



[Xen-devel] [PATCH v17 06/11] x86/hvm/ioreq: add a new mappable resource type...

2018-01-03 Thread Paul Durrant
... XENMEM_resource_ioreq_server

This patch adds support for a new resource type that can be mapped using
the XENMEM_acquire_resource memory op.

If an emulator makes use of this resource type then, instead of mapping
gfns, the IOREQ server will allocate pages from the heap. These pages
will never be present in the P2M of the guest at any point and so are
not vulnerable to any direct attack by the guest. They are only ever
accessible by Xen and any domain that has mapping privilege over the
guest (which may or may not be limited to the domain running the emulator).

Because an emulator may continue to hold references to the pages beyond
initial domain tear-down, it is important that they are not freed during
the normal ioreq server tear-down. Instead a per-domain free-list of pages
is maintained and pages in this list are not freed until final domain
destruction.
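
(Illustration of the scheme described above: a teardown helper, consistent
with the hvm_ioreq_deinit() call added in the first hunk below, that drains
the per-domain free list at final domain destruction. The actual body is not
shown in this excerpt and may differ.)

void hvm_ioreq_deinit(struct domain *d)
{
    struct page_info *page;

    /* Pages parked here at ioreq server teardown are only safe to free
     * now, once no emulator can hold a reference any longer. */
    while ( (page = page_list_remove_head(
                 &d->arch.hvm_domain.ioreq_server.pages)) != NULL )
        free_xenheap_page(page_to_virt(page));
}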

NOTE: Use of the new resource type is not compatible with use of
  XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag is
  set.

Signed-off-by: Paul Durrant 
---
Cc: Jan Beulich 
Cc: George Dunlap 
Cc: Wei Liu 
Cc: Andrew Cooper 
Cc: Ian Jackson 
Cc: Konrad Rzeszutek Wilk 
Cc: Stefano Stabellini 
Cc: Tim Deegan 
Cc: Julien Grall 

v17:
 - The use of xenheap pages means that freeing needs to be deferred until
   domain destruction. Add an explanatory paragraph to the commit comment.

v15:
 - Use xenheap pages rather than domheap pages and assign ownership to
   target domain.

v14:
 - Addressed more comments from Jan.

v13:
 - Introduce an arch_acquire_resource() as suggested by Julien (and have
   the ARM variant simply return -EOPNOTSUPP).
 - Check for ioreq server id truncation as requested by Jan.
 - Not added Jan's R-b due to substantive change from v12.

v12:
 - Addressed more comments from Jan.
 - Dropped George's A-b and Wei's R-b because of material change.

v11:
 - Addressed more comments from Jan.

v10:
 - Addressed comments from Jan.

v8:
 - Re-base on new boilerplate.
 - Adjust function signature of hvm_get_ioreq_server_frame(), and test
   whether the bufioreq page is present.

v5:
 - Use get_ioreq_server() function rather than indexing array directly.
 - Add more explanation into comments to state that mapping guest frames
   and allocation of pages for ioreq servers are not simultaneously
   permitted.
 - Add a comment into asm/ioreq.h stating the meaning of the index
   value passed to hvm_get_ioreq_server_frame().
---
 xen/arch/x86/hvm/hvm.c           |   2 +
 xen/arch/x86/hvm/ioreq.c         | 154 +++
 xen/arch/x86/mm.c                |  41 +++
 xen/common/memory.c              |   3 +-
 xen/include/asm-arm/mm.h         |   7 ++
 xen/include/asm-x86/hvm/domain.h |   1 +
 xen/include/asm-x86/hvm/ioreq.h  |   3 +
 xen/include/asm-x86/mm.h         |   5 ++
 xen/include/public/hvm/dm_op.h   |   4 +
 xen/include/public/memory.h      |   9 +++
 10 files changed, 228 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 28bc7e4252..0d7a2f984b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -726,6 +726,8 @@ void hvm_domain_destroy(struct domain *d)
         list_del(&ioport->list);
         xfree(ioport);
     }
+
+    hvm_ioreq_deinit(d);
 }
 
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1e6f0f41e4..1568224f08 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -259,6 +259,19 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
+    if ( iorp->page )
+    {
+        /*
+         * If a page has already been allocated (which will happen on
+         * demand if hvm_get_ioreq_server_frame() is called), then
+         * mapping a guest frame is not permitted.
+         */
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
     if ( d->is_dying )
         return -EINVAL;
 
@@ -281,6 +294,56 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
+static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct domain *d = s->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( iorp->page )
+    {
+        /*
+         * If a guest frame has already been mapped (which may happen
+         * on demand if hvm_get_ioreq_server_info() is called), then
+         * allocating a page is not permitted.
+         */
+        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
+    iorp->page =
+