Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-29 Thread Paul Durrant
> -Original Message-
> From: George Dunlap
> Sent: 29 August 2017 15:38
> To: Paul Durrant
> Cc: Stefano Stabellini; Wei Liu; Andrew Cooper; Tim (Xen.org);
> Jan Beulich; Ian Jackson; xen-de...@lists.xenproject.org; Roger Pau Monne
> Subject: Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a
> new mappable resource type...
> 
> On Tue, Aug 29, 2017 at 3:31 PM, Paul Durrant wrote:
> >> -Original Message-
> > [snip]
> >> >> I'm not terribly happy with allocating out-of-band pages either.  One
> >> >> of the advantages of the way things are done now (with the page
> >> >> allocated to the guest VM) is that it is more resilient to unexpected
> >> >> events:  If the domain dies before the emulator is done, you have a
> >> >> "zombie" domain until the process exits.  But once the process exits
> >> >> for any reason -- whether crashing or whatever -- the ref is freed and
> >> >> the domain can finish dying.
> >> >>
> >> >> What happens in this case if the dm process in dom0 is killed /
> >> >> segfaults before it can unmap the page?  Will the page be properly
> >> >> freed, or will it just leak?
> >> >
> >> > The page is referenced by the ioreq server in the target domain, so it
> >> > will be freed when the target domain is destroyed.
> >>
> >> I don't understand how you're using the terms... I would have
> >> interpreted 'target domain' to mean the guest VM to which emulated
> >> devices are being provided, and 'ioreq server' means the process
> >> (perhaps in dom0, perhaps in a stubdomain) which is providing the
> >> emulated devices.
> >>
> >> Did you mean that it's referenced by the ioreq_server struct in the
> >> target domain, and so a put_page() will happen when the guest is
> >> destroyed?
> >
> > Terminology issues :-) By 'ioreq server' I mean the infrastructure in Xen,
> centred around struct ioreq_server. I refer to the dom0 process/stub
> domain/xengt module/whatever as the 'emulator'.
> > So, yes, the fact that the page is referenced in the ioreq server of the
> target domain means that a put_page() will happen when that domain is
> destroyed.
> 
> OK; in that case:
> 
> Acked-by: George Dunlap 
> 

Cool, thanks.

> I think that's the only other Ack you need from me.  Thanks for doing this
> work.
> 

No probs. Cheers,

  Paul

>  -George


Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-29 Thread George Dunlap
On Tue, Aug 29, 2017 at 3:31 PM, Paul Durrant wrote:
>> -Original Message-
> [snip]
>> >> I'm not terribly happy with allocating out-of-band pages either.  One
>> >> of the advantages of the way things are done now (with the page
>> >> allocated to the guest VM) is that it is more resilient to unexpected
>> >> events:  If the domain dies before the emulator is done, you have a
>> >> "zombie" domain until the process exits.  But once the process exits
>> >> for any reason -- whether crashing or whatever -- the ref is freed and
>> >> the domain can finish dying.
>> >>
>> >> What happens in this case if the dm process in dom0 is killed /
>> >> segfaults before it can unmap the page?  Will the page be properly
>> >> freed, or will it just leak?
>> >
>> > The page is referenced by the ioreq server in the target domain, so it will
>> > be freed when the target domain is destroyed.
>>
>> I don't understand how you're using the terms... I would have
>> interpreted 'target domain' to mean the guest VM to which emulated
>> devices are being provided, and 'ioreq server' means the process
>> (perhaps in dom0, perhaps in a stubdomain) which is providing the
>> emulated devices.
>>
>> Did you mean that it's referenced by the ioreq_server struct in the
>> target domain, and so a put_page() will happen when the guest is
>> destroyed?
>
> Terminology issues :-) By 'ioreq server' I mean the infrastructure in Xen, 
> centred around struct ioreq_server. I refer to the dom0 process/stub 
> domain/xengt module/whatever as the 'emulator'.
> So, yes, the fact that the page is referenced in the ioreq server of the 
> target domain means that a put_page() will happen when that domain is 
> destroyed.

OK; in that case:

Acked-by: George Dunlap 

I think that's the only other Ack you need from me.  Thanks for doing this work.

 -George



Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-29 Thread Paul Durrant
> -Original Message-
[snip]
> >> I'm not terribly happy with allocating out-of-band pages either.  One
> >> of the advantages of the way things are done now (with the page
> >> allocated to the guest VM) is that it is more resilient to unexpected
> >> events:  If the domain dies before the emulator is done, you have a
> >> "zombie" domain until the process exits.  But once the process exits
> >> for any reason -- whether crashing or whatever -- the ref is freed and
> >> the domain can finish dying.
> >>
> >> What happens in this case if the dm process in dom0 is killed /
> >> segfaults before it can unmap the page?  Will the page be properly
> >> freed, or will it just leak?
> >
> > The page is referenced by the ioreq server in the target domain, so it will
> > be freed when the target domain is destroyed.
> 
> I don't understand how you're using the terms... I would have
> interpreted 'target domain' to mean the guest VM to which emulated
> devices are being provided, and 'ioreq server' means the process
> (perhaps in dom0, perhaps in a stubdomain) which is providing the
> emulated devices.
> 
> Did you mean that it's referenced by the ioreq_server struct in the
> target domain, and so a put_page() will happen when the guest is
> destroyed?

Terminology issues :-) By 'ioreq server' I mean the infrastructure in Xen, 
centred around struct ioreq_server. I refer to the dom0 process/stub 
domain/xengt module/whatever as the 'emulator'.
So, yes, the fact that the page is referenced in the ioreq server of the target 
domain means that a put_page() will happen when that domain is destroyed.
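
To make the lifecycle concrete, the free path looks roughly like this (the inner function is from the patch; the name of the outer teardown helper is paraphrased and may differ in the final version):

static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
{
    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;

    if ( !iorp->page )
        return;

    unmap_domain_page_global(iorp->va);
    iorp->va = NULL;

    put_page(iorp->page);        /* drop the ref taken at allocation */
    iorp->page = NULL;
}

/*
 * Runs from the ioreq server teardown path when the *target* domain is
 * destroyed, so even if the emulator crashed earlier the page is released
 * here rather than leaking.
 */
static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
{
    hvm_free_ioreq_mfn(s, true);     /* buffered ioreq page, if any */
    hvm_free_ioreq_mfn(s, false);    /* synchronous ioreq page */
}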

  Paul

> 
>  -George


Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-29 Thread George Dunlap
On Tue, Aug 29, 2017 at 3:10 PM, Paul Durrant wrote:
>> -Original Message-
>> From: George Dunlap
>> Sent: 29 August 2017 14:40
>> To: Paul Durrant
>> Cc: Roger Pau Monne; Stefano Stabellini; Wei Liu; Andrew Cooper;
>> Tim (Xen.org); Jan Beulich; Ian Jackson; xen-de...@lists.xenproject.org
>> Subject: Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a
>> new mappable resource type...
>>
>> On Fri, Aug 25, 2017 at 10:46 AM, Paul Durrant wrote:

[snip]

>> I don't immediately see an advantage to doing what you're doing here,
>> instaed of just calling hvm_alloc_ioreq_gfn().  The only reason you
>> give is that the domain is usually destroyed before the emulator
>> (meaning a short period of time where you have a 'zombie' domain), but
>> I don't see why that's an issue -- it doesn't seem like that's worth
>> the can of worms that it opens up.
>>
>
> The advantage is that the page is *never* in the guest P2M so it cannot be 
> mapped by the guest. The use of guest pages for communication between Xen and 
> an emulator is a well-known attack surface and IIRC has already been the 
> subject of at least one XSA. Until we have better infrastructure to account
> hypervisor memory to guests, I think using alloc_domheap_page() with
> MEMF_no_refcount is the best way.

...and hvm_alloc_ioreq_gfn() grabs pages which are in the guest's p2m
space but set aside for ioreq servers, so using those would make the
entire series pointless.  Sorry for missing that.
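
(For reference, the legacy allocator is basically a bitmap over a small gfn range that the toolstack reserves inside the guest's p2m -- a sketch from memory, so take the field names as approximate:)

static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
{
    struct domain *d = s->domain;
    unsigned int i;

    /* Hand out a gfn from the per-domain range reserved for ioreq pages. */
    for ( i = 0; i < sizeof(d->arch.hvm_domain.ioreq_gfn.mask) * 8; i++ )
    {
        if ( test_and_clear_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask) )
            return _gfn(d->arch.hvm_domain.ioreq_gfn.base + i);
    }

    return INVALID_GFN;
}

Anything handed out here ends up with a p2m entry in the guest, which is exactly what this series is trying to avoid.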

[snip]

>> >> > +/*
>> >> > + * Allocated IOREQ server pages are assigned to the emulating
>> >> > + * domain, not the target domain. This is because the emulator is
>> >> > + * likely to be destroyed after the target domain has been torn
>> >> > + * down, and we must use MEMF_no_refcount otherwise page
>> >> allocation
>> >> > + * could fail if the emulating domain has already reached its
>> >> > + * maximum allocation.
>> >> > + */
>> >> > +iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
>> >>
>> >> I don't really like the fact that the page is not accounted for any
>> >> domain, but I can see the point in doing it like that (which you
>> >> argue in the comment).
>> >>
>> >> IIRC there were talks about tightening the accounting of memory
>> >> pages, so that ideally everything would be accounted for in the memory
>> >> assigned to the domain.
>> >>
>> >> Just some random thought, but could the toolstack set aside some
>> >> memory pages (ie: not map them into the domain p2m), that could then
>> >> be used by this? (not asking you to do this here)
>> >>
>> >> And how many pages are we expecting to use for each domain? I assume
>> >> the number will be quite low.
>> >>
>> >
>> > Yes, I agree the use of MEMF_no_refcount is not ideal and you do
>> > highlight an issue: I don't think there is currently an upper limit on the
>> > number of ioreq servers so an emulating domain could exhaust memory
>> > using the new scheme. I'll need to introduce a limit to avoid that.
>>
>> I'm not terribly happy with allocating out-of-band pages either.  One
>> of the advantages of the way things are done now (with the page
>> allocated to the guest VM) is that it is more resilient to unexpected
>> events:  If the domain dies before the emulator is done, you have a
>> "zombie" domain until the process exits.  But once the process exits
>> for any reason -- whether crashing or whatever -- the ref is freed and
>> the domain can finish dying.
>>
>> What happens in this case if the dm process in dom0 is killed /
>> segfaults before it can unmap the page?  Will the page be properly
>> freed, or will it just leak?
>
> The page is referenced by the ioreq server in the target domain, so it will 
> be freed when the target domain is destroyed.

I don't understand how you're using the terms... I would have
interpreted 'target domain' to mean the guest VM to which emulated
devices are being provided, and 'ioreq server' means the process
(perhaps in dom0, perhaps in a stubdomain) which is providing the
emulated devices.

Did you mean that it's referenced by the ioreq_server struct in the
target domain, and so a put_page() will happen when the guest is
destroyed?

 -George



Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-29 Thread Paul Durrant
> -Original Message-
> From: George Dunlap
> Sent: 29 August 2017 14:40
> To: Paul Durrant
> Cc: Roger Pau Monne; Stefano Stabellini; Wei Liu; Andrew Cooper;
> Tim (Xen.org); Jan Beulich; Ian Jackson; xen-de...@lists.xenproject.org
> Subject: Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a
> new mappable resource type...
> 
> On Fri, Aug 25, 2017 at 10:46 AM, Paul Durrant wrote:
> >> > +/*
> >> > + * Allocated IOREQ server pages are assigned to the emulating
> >> > + * domain, not the target domain. This is because the emulator is
> >> > + * likely to be destroyed after the target domain has been torn
> >> > + * down, and we must use MEMF_no_refcount otherwise page
> >> allocation
> >> > + * could fail if the emulating domain has already reached its
> >> > + * maximum allocation.
> >> > + */
> >> > +iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
> >>
> >> I don't really like the fact that the page is not accounted for any
> >> domain, but I can see the point in doing it like that (which you
> >> argue in the comment).
> >>
> >> IIRC there were talks about tightening the accounting of memory
> >> pages, so that ideally everything would be accounted for in the memory
> >> assigned to the domain.
> >>
> >> Just some random thought, but could the toolstack set aside some
> >> memory pages (ie: not map them into the domain p2m), that could then
> >> be used by this? (not asking you to do this here)
> >>
> >> And how many pages are we expecting to use for each domain? I assume
> >> the number will be quite low.
> >>
> >
> > Yes, I agree the use of MEMF_no_refcount is not ideal and you do
> > highlight an issue: I don't think there is currently an upper limit on the
> > number of ioreq servers so an emulating domain could exhaust memory
> > using the new scheme. I'll need to introduce a limit to avoid that.
> 
> I'm not terribly happy with allocating out-of-band pages either.  One
> of the advantages of the way things are done now (with the page
> allocated to the guest VM) is that it is more resilient to unexpected
> events:  If the domain dies before the emulator is done, you have a
> "zombie" domain until the process exits.  But once the process exits
> for any reason -- whether crashing or whatever -- the ref is freed and
> the domain can finish dying.
> 
> What happens in this case if the dm process in dom0 is killed /
> segfaults before it can unmap the page?  Will the page be properly
> freed, or will it just leak?

The page is referenced by the ioreq server in the target domain, so it will be 
freed when the target domain is destroyed.

> 
> I don't immediately see an advantage to doing what you're doing here,
> instead of just calling hvm_alloc_ioreq_gfn().  The only reason you
> give is that the domain is usually destroyed before the emulator
> (meaning a short period of time where you have a 'zombie' domain), but
> I don't see why that's an issue -- it doesn't seem like that's worth
> the can of worms that it opens up.
> 

The advantage is that the page is *never* in the guest P2M so it cannot be 
mapped by the guest. The use of guest pages for communication between Xen and 
an emulator is a well-known attack surface and IIRC has already been the 
subject of at least one XSA. Until we have better infrastructure to account
hypervisor memory to guests, I think using alloc_domheap_page() with
MEMF_no_refcount is the best way.
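
For reference, the allocation in the patch boils down to this (condensed from the hunk quoted above):

    /*
     * The page comes from the Xen domheap, nominally assigned to the
     * *emulating* domain (currd).  MEMF_no_refcount means it is not
     * counted against that domain's max_pages, so the allocation cannot
     * fail merely because the emulating domain is already at its limit.
     */
    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
    if ( !iorp->page )
        return -ENOMEM;

    get_page(iorp->page, currd);            /* hold a ref until teardown */

    iorp->va = __map_domain_page_global(iorp->page);
    if ( !iorp->va )
    {
        put_page(iorp->page);
        iorp->page = NULL;
        return -ENOMEM;
    }

    clear_page(iorp->va);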

  Paul

>  -George


Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-29 Thread George Dunlap
On Fri, Aug 25, 2017 at 10:46 AM, Paul Durrant wrote:
>> > +/*
>> > + * Allocated IOREQ server pages are assigned to the emulating
>> > + * domain, not the target domain. This is because the emulator is
>> > + * likely to be destroyed after the target domain has been torn
>> > + * down, and we must use MEMF_no_refcount otherwise page
>> allocation
>> > + * could fail if the emulating domain has already reached its
>> > + * maximum allocation.
>> > + */
>> > +iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
>>
>> I don't really like the fact that the page is not accounted for any
>> domain, but I can see the point in doing it like that (which you
>> argue in the comment).
>>
>> IIRC there were talks about tightening the accounting of memory
>> pages, so that ideally everything would be accounted for in the memory
>> assigned to the domain.
>>
>> Just some random thought, but could the toolstack set aside some
>> memory pages (ie: not map them into the domain p2m), that could then
>> be used by this? (not asking you to do this here)
>>
>> And how many pages are we expecting to use for each domain? I assume
>> the number will be quite low.
>>
>
> Yes, I agree the use of MEMF_no_refcount is not ideal and you do highlight an
> issue: I don't think there is currently an upper limit on the number of ioreq 
> servers so an emulating domain could exhaust memory using the new scheme. 
> I'll need to introduce a limit to avoid that.

I'm not terribly happy with allocating out-of-band pages either.  One
of the advantages of the way things are done now (with the page
allocated to the guest VM) is that it is more resilient to unexpected
events:  If the domain dies before the emulator is done, you have a
"zombie" domain until the process exits.  But once the process exits
for any reason -- whether crashing or whatever -- the ref is freed and
the domain can finish dying.

What happens in this case if the dm process in dom0 is killed /
segfaults before it can unmap the page?  Will the page be properly
freed, or will it just leak?

I don't immediately see an advantage to doing what you're doing here,
instead of just calling hvm_alloc_ioreq_gfn().  The only reason you
give is that the domain is usually destroyed before the emulator
(meaning a short period of time where you have a 'zombie' domain), but
I don't see why that's an issue -- it doesn't seem like that's worth
the can of worms that it opens up.

 -George



Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-29 Thread George Dunlap
On 08/25/2017 10:46 AM, Paul Durrant wrote:
>> -Original Message-
>> From: Roger Pau Monne
>> Sent: 25 August 2017 10:32
>> To: Paul Durrant
>> Cc: xen-de...@lists.xenproject.org; Stefano Stabellini; Wei Liu;
>> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Jan Beulich
>> Subject: Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a
>> new mappable resource type...
>>
>> On Tue, Aug 22, 2017 at 03:51:06PM +0100, Paul Durrant wrote:
>>> ... XENMEM_resource_ioreq_server
>>>
>>> This patch adds support for a new resource type that can be mapped using
>>> the XENMEM_acquire_resource memory op.
>>>
>>> If an emulator makes use of this resource type then, instead of mapping
>>> gfns, the IOREQ server will allocate pages from the heap. These pages
>>> will never be present in the P2M of the guest at any point and so are
>>> not vulnerable to any direct attack by the guest. They are only ever
>>> accessible by Xen and any domain that has mapping privilege over the
>>> guest (which may or may not be limited to the domain running the
>> emulator).
>>>
>>> NOTE: Use of the new resource type is not compatible with use of
>>>   XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns
>> flag is
>>>   set.
>>>
>>> Signed-off-by: Paul Durrant 
>>> ---
>>> Cc: Jan Beulich 
>>> Cc: Andrew Cooper 
>>> Cc: George Dunlap 
>>> Cc: Ian Jackson 
>>> Cc: Konrad Rzeszutek Wilk 
>>> Cc: Stefano Stabellini 
>>> Cc: Tim Deegan 
>>> Cc: Wei Liu 
>>> ---
>>>  xen/arch/x86/hvm/ioreq.c| 136
>> 
>>>  xen/arch/x86/mm.c   |  27 
>>>  xen/include/asm-x86/hvm/ioreq.h |   2 +
>>>  xen/include/public/hvm/dm_op.h  |   4 ++
>>>  xen/include/public/memory.h |   3 +
>>>  5 files changed, 172 insertions(+)
>>>
>>> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
>>> index 795c198f95..9e6838dab6 100644
>>> --- a/xen/arch/x86/hvm/ioreq.c
>>> +++ b/xen/arch/x86/hvm/ioreq.c
>>> @@ -231,6 +231,15 @@ static int hvm_map_ioreq_gfn(struct
>> hvm_ioreq_server *s, bool buf)
>>>  struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
>>>  int rc;
>>>
>>> +if ( iorp->page )
>>> +{
>>> +/* Make sure the page has not been allocated */
>>> +if ( gfn_eq(iorp->gfn, INVALID_GFN) )
>>> +return -EPERM;
>>> +
>>> +return 0;
>>
>> EEXIST? (See comment below, which I think also applies here).
>>
>>> +}
>>> +
>>>  if ( d->is_dying )
>>>  return -EINVAL;
>>>
>>> @@ -253,6 +262,60 @@ static int hvm_map_ioreq_gfn(struct
>> hvm_ioreq_server *s, bool buf)
>>>  return rc;
>>>  }
>>>
>>> +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
>>> +{
>>> +struct domain *currd = current->domain;
>>> +struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
>>> +
>>> +if ( iorp->page )
>>> +{
>>> +/* Make sure the page has not been mapped */
>>> +if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
>>> +return -EPERM;
>>> +
>>> +return 0;
>>
>> Shouldn't this return EEXIST? Page has already been allocated by a
>> previous call AFAICT, and it seems like a possible error/misbehavior
>> to try to do it twice.
>>
> 
> The checks are there to prevent a caller from trying to mix the legacy and
> new methods of mapping ioreq server pages, so EPERM (i.e. 'operation not
> permitted') seems like the correct error. I agree that it's not obvious at
> this inner level, but I do think this is right. I'm open to debate about
> this though.

-EBUSY then?

 -George



Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-25 Thread Paul Durrant
> -Original Message-
> From: Roger Pau Monne
> Sent: 25 August 2017 10:53
> To: Paul Durrant
> Cc: xen-de...@lists.xenproject.org; Stefano Stabellini; Wei Liu;
> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Jan Beulich
> Subject: Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a
> new mappable resource type...
> 
> On Fri, Aug 25, 2017 at 10:46:48AM +0100, Paul Durrant wrote:
> > > -Original Message-
> > > From: Roger Pau Monne
> > > > +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > > > +{
> > > > +struct domain *currd = current->domain;
> > > > +struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > > > +
> > > > +if ( iorp->page )
> > > > +{
> > > > +/* Make sure the page has not been mapped */
> > > > +if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> > > > +return -EPERM;
> > > > +
> > > > +return 0;
> > >
> > > Shouldn't this return EEXIST? Page has already been allocated by a
> > > previous call AFAICT, and it seems like a possible error/misbehavior
> > > to try to do it twice.
> > >
> >
> > The checks are there to prevent a caller from trying to mix the legacy and
> > new methods of mapping ioreq server pages, so EPERM (i.e. 'operation not
> > permitted') seems like the correct error. I agree that it's not obvious at
> > this inner level, but I do think this is right. I'm open to debate about
> > this though.
> 
> Oh, I was referring to the 'return 0;', not the 'return -EPERM;' (that
> looks fine to me).
> 
> The 'return 0;' means that the page is already setup (if I understood
> this correctly), that's why I was wondering whether it should return
> -EEXIST instead.
> 

No, it has to be this way because of the 'on demand' nature of the allocation.
If an emulator uses the new resource mapping scheme then the pages will be
allocated on the first mapping attempt, but if the emulator unmaps and then
re-maps, that should succeed; if I returned -EEXIST here, it would propagate
back up through the second mapping hypercall.
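
In code terms, the check at the top of hvm_alloc_ioreq_mfn() is deliberately idempotent -- roughly (condensed from the patch):

    if ( iorp->page )
    {
        /*
         * A page already exists.  If it has a gfn it came from the legacy
         * mapping path, and mixing the two schemes is refused...
         */
        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
            return -EPERM;

        /* ...otherwise it was allocated by an earlier call: just reuse it. */
        return 0;
    }

    /* First mapping attempt: allocate the page on demand. */
    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
    if ( !iorp->page )
        return -ENOMEM;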

  Paul

> Roger.



Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-25 Thread Roger Pau Monne
On Fri, Aug 25, 2017 at 10:46:48AM +0100, Paul Durrant wrote:
> > -Original Message-
> > From: Roger Pau Monne
> > > +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > > +{
> > > +struct domain *currd = current->domain;
> > > +struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > > +
> > > +if ( iorp->page )
> > > +{
> > > +/* Make sure the page has not been mapped */
> > > +if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> > > +return -EPERM;
> > > +
> > > +return 0;
> > 
> > Shouldn't this return EEXIST? Page has already been allocated by a
> > previous call AFAICT, and it seems like a possible error/misbehavior
> > to try to do it twice.
> > 
> 
> The checks are there to prevent a caller from trying to mix the legacy and
> new methods of mapping ioreq server pages, so EPERM (i.e. 'operation not
> permitted') seems like the correct error. I agree that it's not obvious at
> this inner level, but I do think this is right. I'm open to debate about
> this though.

Oh, I was referring to the 'return 0;', not the 'return -EPERM;' (that
looks fine to me).

The 'return 0;' means that the page is already setup (if I understood
this correctly), that's why I was wondering whether it should return
-EEXIST instead.

Roger.



Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-25 Thread Paul Durrant
> -Original Message-
> From: Roger Pau Monne
> Sent: 25 August 2017 10:32
> To: Paul Durrant
> Cc: xen-de...@lists.xenproject.org; Stefano Stabellini; Wei Liu;
> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Jan Beulich
> Subject: Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a
> new mappable resource type...
> 
> On Tue, Aug 22, 2017 at 03:51:06PM +0100, Paul Durrant wrote:
> > ... XENMEM_resource_ioreq_server
> >
> > This patch adds support for a new resource type that can be mapped using
> > the XENMEM_acquire_resource memory op.
> >
> > If an emulator makes use of this resource type then, instead of mapping
> > gfns, the IOREQ server will allocate pages from the heap. These pages
> > will never be present in the P2M of the guest at any point and so are
> > not vulnerable to any direct attack by the guest. They are only ever
> > accessible by Xen and any domain that has mapping privilege over the
> > guest (which may or may not be limited to the domain running the
> emulator).
> >
> > NOTE: Use of the new resource type is not compatible with use of
> >   XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns
> flag is
> >   set.
> >
> > Signed-off-by: Paul Durrant 
> > ---
> > Cc: Jan Beulich 
> > Cc: Andrew Cooper 
> > Cc: George Dunlap 
> > Cc: Ian Jackson 
> > Cc: Konrad Rzeszutek Wilk 
> > Cc: Stefano Stabellini 
> > Cc: Tim Deegan 
> > Cc: Wei Liu 
> > ---
> >  xen/arch/x86/hvm/ioreq.c| 136
> 
> >  xen/arch/x86/mm.c   |  27 
> >  xen/include/asm-x86/hvm/ioreq.h |   2 +
> >  xen/include/public/hvm/dm_op.h  |   4 ++
> >  xen/include/public/memory.h |   3 +
> >  5 files changed, 172 insertions(+)
> >
> > diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> > index 795c198f95..9e6838dab6 100644
> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -231,6 +231,15 @@ static int hvm_map_ioreq_gfn(struct
> hvm_ioreq_server *s, bool buf)
> >  struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> >  int rc;
> >
> > +if ( iorp->page )
> > +{
> > +/* Make sure the page has not been allocated */
> > +if ( gfn_eq(iorp->gfn, INVALID_GFN) )
> > +return -EPERM;
> > +
> > +return 0;
> 
> EEXIST? (See comment below, which I think also applies here).
> 
> > +}
> > +
> >  if ( d->is_dying )
> >  return -EINVAL;
> >
> > @@ -253,6 +262,60 @@ static int hvm_map_ioreq_gfn(struct
> hvm_ioreq_server *s, bool buf)
> >  return rc;
> >  }
> >
> > +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > +{
> > +struct domain *currd = current->domain;
> > +struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > +
> > +if ( iorp->page )
> > +{
> > +/* Make sure the page has not been mapped */
> > +if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> > +return -EPERM;
> > +
> > +return 0;
> 
> Shouldn't this return EEXIST? Page has already been allocated by a
> previous call AFAICT, and it seems like a possible error/misbehavior
> to try to do it twice.
> 

The checks are there to prevent a caller from trying to mix the legacy and new
methods of mapping ioreq server pages, so EPERM (i.e. 'operation not permitted')
seems like the correct error. I agree that it's not obvious at this inner
level, but I do think this is right. I'm open to debate about this though.

> > +}
> > +
> > +/*
> > + * Allocated IOREQ server pages are assigned to the emulating
> > + * domain, not the target domain. This is because the emulator is
> > + * likely to be destroyed after the target domain has been torn
> > + * down, and we must use MEMF_no_refcount otherwise page
> allocation
> > + * could fail if the emulating domain has already reached its
> > + * maximum allocation.
> > + */
> > +iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
> 
> I don't really like the fact that the page is not accounted for any
> domain, but I can see the point in doing it like that (which you
> argue in the comment).
> 
> IIRC there were talks about tightening the accounting of memory
> pages, so that ideally everything would be accounted for in 

Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...

2017-08-25 Thread Roger Pau Monné
On Tue, Aug 22, 2017 at 03:51:06PM +0100, Paul Durrant wrote:
> ... XENMEM_resource_ioreq_server
> 
> This patch adds support for a new resource type that can be mapped using
> the XENMEM_acquire_resource memory op.
> 
> If an emulator makes use of this resource type then, instead of mapping
> gfns, the IOREQ server will allocate pages from the heap. These pages
> will never be present in the P2M of the guest at any point and so are
> not vulnerable to any direct attack by the guest. They are only ever
> accessible by Xen and any domain that has mapping privilege over the
> guest (which may or may not be limited to the domain running the emulator).
> 
> NOTE: Use of the new resource type is not compatible with use of
>   XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag is
>   set.
> 
> Signed-off-by: Paul Durrant 
> ---
> Cc: Jan Beulich 
> Cc: Andrew Cooper 
> Cc: George Dunlap 
> Cc: Ian Jackson 
> Cc: Konrad Rzeszutek Wilk 
> Cc: Stefano Stabellini 
> Cc: Tim Deegan 
> Cc: Wei Liu 
> ---
>  xen/arch/x86/hvm/ioreq.c| 136 
> 
>  xen/arch/x86/mm.c   |  27 
>  xen/include/asm-x86/hvm/ioreq.h |   2 +
>  xen/include/public/hvm/dm_op.h  |   4 ++
>  xen/include/public/memory.h |   3 +
>  5 files changed, 172 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 795c198f95..9e6838dab6 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -231,6 +231,15 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, 
> bool buf)
>  struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
>  int rc;
>  
> +if ( iorp->page )
> +{
> +/* Make sure the page has not been allocated */
> +if ( gfn_eq(iorp->gfn, INVALID_GFN) )
> +return -EPERM;
> +
> +return 0;

EEXIST? (See comment below, which I think also applies here).

> +}
> +
>  if ( d->is_dying )
>  return -EINVAL;
>  
> @@ -253,6 +262,60 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, 
> bool buf)
>  return rc;
>  }
>  
> +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +struct domain *currd = current->domain;
> +struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +
> +if ( iorp->page )
> +{
> +/* Make sure the page has not been mapped */
> +if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> +return -EPERM;
> +
> +return 0;

Shouldn't this return EEXIST? Page has already been allocated by a
previous call AFAICT, and it seems like a possible error/misbehavior
to try to do it twice.

> +}
> +
> +/*
> + * Allocated IOREQ server pages are assigned to the emulating
> + * domain, not the target domain. This is because the emulator is
> + * likely to be destroyed after the target domain has been torn
> + * down, and we must use MEMF_no_refcount otherwise page allocation
> + * could fail if the emulating domain has already reached its
> + * maximum allocation.
> + */
> +iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);

I don't really like the fact that the page is not accounted for any
domain, but I can see the point in doing it like that (which you
argue in the comment).

IIRC there were talks about tightening the accounting of memory
pages, so that ideally everything would be accounted for in the memory
assigned to the domain.

Just some random thought, but could the toolstack set aside some
memory pages (ie: not map them into the domain p2m), that could then
be used by this? (not asking you to do this here)

And how many pages are we expecting to use for each domain? I assume
the number will be quite low.

> +if ( !iorp->page )
> +return -ENOMEM;
> +
> +get_page(iorp->page, currd);

Do you really need this get_page? (page is already assigned to currd).

> +
> +iorp->va = __map_domain_page_global(iorp->page);
> +if ( !iorp->va )
> +{
> +put_page(iorp->page);
> +iorp->page = NULL;
> +return -ENOMEM;
> +}
> +
> +clear_page(iorp->va);
> +return 0;
> +}
> +
> +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +
> +if ( !iorp->page )
> +return;
> +
> +unmap_domain_page_global(iorp->va);
> +iorp->va = NULL;
> +
> +put_page(iorp->page);
> +iorp->page = NULL;
> +}
> +
>  bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
>  {
>  const struct hvm_ioreq_server *s;
> @@ -457,6 +520,27 @@ static void hvm_ioreq_server_unmap_pages(struct 
> hvm_ioreq_server *s)
>  hvm_unmap_ioreq_gfn(s, false);
>  }
>  
> +static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
> +{
> +int rc = -ENOMEM;
> +
> +rc = hvm_alloc_ioreq_mfn(s, false);
> +
> +if ( !rc && (s->bufioreq_handling != HVM