Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2019-04-02 Thread Andrew Cooper
On 10/01/2019 15:46, Paul Durrant wrote:
>> -Original Message-
>> From: Petre Ovidiu PIRCALABU [mailto:ppircal...@bitdefender.com]
>> Sent: 10 January 2019 15:31
>> To: Paul Durrant; xen-devel@lists.xenproject.org
>> Cc: Stefano Stabellini; Wei Liu; Razvan Cojocaru; Konrad Rzeszutek Wilk;
>> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Julien Grall;
>> Tamas K Lengyel; Jan Beulich; Roger Pau Monne
>> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
>> for sync requests.
>>
>> On Thu, 2018-12-20 at 12:05 +, Paul Durrant wrote:
>>>> -Original Message-
>>>>
>>>> The memory for the asynchronous ring and the synchronous channels
>>>> will
>>>> be allocated from domheap and mapped to the controlling domain
>>>> using the
>>>> foreignmemory_map_resource interface. Unlike the current
>>>> implementation,
>>>> the allocated pages are not part of the target DomU, so they will
>>>> not be
>>>> reclaimed when the vm_event domain is disabled.
>>> Why re-invent the wheel here? The ioreq infrastructure already does
>>> pretty much everything you need AFAICT.
>>>
>>>   Paul
>>>
>> Hi Paul,
>>
>> I'm still struggling to understand how the vm_event subsystem could be
>> integrated with an IOREQ server.
>>
>> An IOREQ server shares two pages with the emulator, one for ioreqs and
>> one for buffered_ioreqs. For vm_event we also need to share one or more
>> pages for the async ring and a few pages for the slotted synchronous
>> vm_events.
>> So, to my understanding, your idea to use the ioreq infrastructure for
>> vm_events is basically to replace the custom signalling (event channels
>> + ring / custom states) with ioreqs. Since the
>> vm_event_request/response structures are larger than 8 bytes, the
>> "data_is_ptr" flag should be used in conjunction with the addresses
>> (indexes) from the shared vm_event buffers.
>>
>> Is this the mechanism you had in mind?
>>
> Yes, that's roughly what I hoped might be possible. If that is too cumbersome 
> though then it should at least be feasible to mimic the ioreq code's page 
> allocation functions and code up vm_event buffers as another type of mappable 
> resource.

So, I've finally realised what has been subtly nagging at me for a while
about the suggestion to use ioreqs.  vm_event and ioreq have completely
different operations and semantics as far as the code in Xen is concerned.

The semantics for ioreq servers are "given a specific MMIO/PIO/CFG
action, which one of $N emulators should handle it".

vm_event on the other hand behaves just like the VT-x/SVM vmexit
intercepts.  It is "tell me when the guest does $X".  There isn't a
sensible case for having multiple vm_event consumers for a domain.

There is no overlap in the format of data used, or the cases where an
event would be sent.  Therefore, I think trying to implement vm_event in
terms of the ioreq server infrastructure is a short-sighted move.

Beyond that, the only similarity is the slotted ring setup, which can be
entirely abstracted away behind resource mapping.  This actually comes
with a bonus in that vm_event will no longer strictly be tied to HVM
guests by virtue of its ring living in an HVMPARAM.

~Andrew


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2019-01-10 Thread Paul Durrant
> -Original Message-
> From: Petre Ovidiu PIRCALABU [mailto:ppircal...@bitdefender.com]
> Sent: 10 January 2019 15:31
> To: Paul Durrant; xen-devel@lists.xenproject.org
> Cc: Stefano Stabellini; Wei Liu; Razvan Cojocaru; Konrad Rzeszutek Wilk;
> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Julien Grall;
> Tamas K Lengyel; Jan Beulich; Roger Pau Monne
> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
> for sync requests.
> 
> On Thu, 2018-12-20 at 12:05 +, Paul Durrant wrote:
> > > -Original Message-
> > >
> > > The memory for the asynchronous ring and the synchronous channels
> > > will
> > > be allocated from domheap and mapped to the controlling domain
> > > using the
> > > foreignmemory_map_resource interface. Unlike the current
> > > implementation,
> > > the allocated pages are not part of the target DomU, so they will
> > > not be
> > > reclaimed when the vm_event domain is disabled.
> >
> > Why re-invent the wheel here? The ioreq infrastructure already does
> > pretty much everything you need AFAICT.
> >
> >   Paul
> >
> 
> Hi Paul,
> 
> I'm still struggling to understand how the vm_event subsystem could be
> integrated with an IOREQ server.
> 
> An IOREQ server shares two pages with the emulator, one for ioreqs and
> one for buffered_ioreqs. For vm_event we also need to share one or more
> pages for the async ring and a few pages for the slotted synchronous
> vm_events.
> So, to my understanding, your idea to use the ioreq infrastructure for
> vm_events is basically to replace the custom signalling (event channels
> + ring / custom states) with ioreqs. Since the
> vm_event_request/response structures are larger than 8 bytes, the
> "data_is_ptr" flag should be used in conjunction with the addresses
> (indexes) from the shared vm_event buffers.
> 
> Is this the mechanism you had in mind?
> 

Yes, that's roughly what I hoped might be possible. If that is too cumbersome 
though then it should at least be feasible to mimic the ioreq code's page 
allocation functions and code up vm_event buffers as another type of mappable 
resource.

  Paul

> Many thanks,
> Petre


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2019-01-10 Thread Petre Ovidiu PIRCALABU
On Thu, 2018-12-20 at 12:05 +, Paul Durrant wrote:
> > -Original Message-
> > 
> > The memory for the asynchronous ring and the synchronous channels
> > will
> > be allocated from domheap and mapped to the controlling domain
> > using the
> > foreignmemory_map_resource interface. Unlike the current
> > implementation,
> > the allocated pages are not part of the target DomU, so they will
> > not be
> > reclaimed when the vm_event domain is disabled.
> 
> Why re-invent the wheel here? The ioreq infrastructure already does
> pretty much everything you need AFAICT.
> 
>   Paul
> 

Hi Paul,

I'm still struggling to understand how the vm_event subsystem could be
integrated with an IOREQ server.

An IOREQ server shares two pages with the emulator, one for ioreqs and
one for buffered_ioreqs. For vm_event we also need to share one or more
pages for the async ring and a few pages for the slotted synchronous
vm_events.
So, to my understanding, your idea to use the ioreq infrastructure for
vm_events is basically to replace the custom signalling (event channels
+ ring / custom states) with ioreqs. Since the
vm_event_request/response structures are larger than 8 bytes, the
"data_is_ptr" flag should be used in conjunction with the addresses
(indexes) from the shared vm_event buffers. 

Is this the mechanism you had in mind?

Many thanks,
Petre
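(Illustrative sketch only, not code from this series: the idea floated above is that, since a vm_event_request_t does not fit in the 64-bit ioreq data field, an ioreq-style record would carry just a per-vcpu slot index with a data_is_ptr-style flag set, while the payload lives in the shared slotted buffer. Both structs below are simplified stand-ins, not the real ioreq_t or vm_event layouts.)

/* Simplified stand-ins for ioreq_t and the proposed per-vcpu slot. */
#include <stdint.h>
#include <stdio.h>

struct fake_ioreq {                /* stand-in for ioreq_t */
    uint64_t data;                 /* here: index of the vcpu's slot */
    uint8_t  data_is_ptr;          /* payload is indirect, not inline */
    uint8_t  state;                /* request/response handshake state */
};

struct fake_vm_event_slot {        /* hypothetical per-vcpu slot */
    uint32_t state;                /* free / request pending / response ready */
    uint8_t  payload[248];         /* would hold a vm_event request/response */
};

int main(void)
{
    struct fake_vm_event_slot slots[4] = {{ 0 }};   /* one slot per vcpu */
    struct fake_ioreq req = { .data = 2, .data_is_ptr = 1, .state = 1 };

    /* The monitor resolves the index to the slot in the shared buffer. */
    struct fake_vm_event_slot *slot = &slots[req.data];
    printf("vm_event request signalled for vcpu %u, slot %p\n",
           (unsigned int)req.data, (void *)slot);
    return 0;
}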


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2019-01-10 Thread Razvan Cojocaru

On 1/10/19 11:58 AM, Paul Durrant wrote:
>> -Original Message-
>>>>> Why re-invent the wheel here? The ioreq infrastructure already does
>>>>> pretty much everything you need AFAICT.
>>>>>
>>>>>    Paul
>>>>
>>>> I wanted to preserve as much as possible of the existing vm_event DOMCTL
>>>> interface and add only the necessary code to allocate and map the
>>>> vm_event_pages.
>>>
>>> That means we have two subsystems duplicating a lot of functionality
>>> though. It would be much better to use ioreq server if possible than
>>> provide a compatibility interface via DOMCTL.
>>
>> Just to clarify the compatibility issue: there's a third element between
>> Xen and the introspection application, the Linux kernel, which needs to
>> be fairly recent for the whole ioreq machinery to work. The QEMU code
>> also seems to fall back to the old way of working if that's the case.
>
> That's correct. For the IOREQ server there is a fall-back mechanism for when
> privcmd doesn't support resource mapping.
>
>> This means that there's a choice to be made here: either we keep
>> backwards compatibility with the old vm_event interface (in which case
>> we can't drop the waitqueue code), or we switch to the new one and leave
>> older setups in the dust (but there's less code duplication and we can
>> get rid of the waitqueue code).
>
> I don't know what your compatibility model is. QEMU needs to maintain
> compatibility across various different versions of Xen and Linux so there
> are many shims and much compat code. You may not need this.


Our current model is: deploy a special guest (that we call an SVA, short
for security virtual appliance), with its own kernel and applications,
that for all intents and purposes will act dom0-like.


In that scenario we control the guest kernel, so backwards
compatibility for the case where the kernel does not support the proper
ioctl is not a priority. That said, it might very well be an issue for
someone, and we'd like to be well-behaved citizens and not inconvenience 
other vm_event consumers. Tamas, is this something you'd be concerned about?


What we do care about is being able to fall back in the case where the
host hypervisor does not know anything about the new ioreq 
infrastructure. IOW, nobody can stop a client from running a Xen 
4.7-based XenServer on top of which our introspection guest will not be 
able to use the new ioreq code even if it's using the latest kernel. But 
that can be done at application level and would not require 
hypervisor-level backwards compatibility support (whereas in the first 
case - an old kernel - it would).


On top of all of this there's Andrew's concern about being able to get rid
of the current vm_event waitqueue code that's making migration brittle.


So, if I understand the situation correctly, we need to negotiate the 
following:


1. Should we try to switch to the ioreq infrastructure for vm_event or 
use our custom one? If I'm remembering things correctly, Paul and Jan 
are for it, Andrew is somewhat against it, Tamas has not expressed a 
preference.


2. However we approach the new code, should we or should we not also 
provide a backwards compatibility layer in the hypervisor? We don't need 
it, but somebody might and it's probably not a good idea to design based 
entirely on the needs of one use-case. Tamas may have different needs 
here, and maybe other members of the xen-devel community as well. Andrew 
prefers that we don't since that removes the waitqueue code.


To reiterate how this got started: we want to move the ring buffer 
memory from the guest to the hypervisor (we've had cases of OSes 
reclaiming that page after the first introspection application exit), 
and we want to make that memory bigger (so that more events will fit 
into it, carrying more information (bigger events)). That's essentially 
all we're after.



Thanks,
Razvan


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2019-01-10 Thread Paul Durrant
> -Original Message-

> >>>
> >>> Why re-invent the wheel here? The ioreq infrastructure already does
> >>> pretty much everything you need AFAICT.
> >>>
> >>>Paul
> >>
> >> I wanted to preserve as much as possible of the existing vm_event DOMCTL
> >> interface and add only the necessary code to allocate and map the
> >> vm_event_pages.
> >
> > That means we have two subsystems duplicating a lot of functionality
> > though. It would be much better to use ioreq server if possible than
> > provide a compatibility interface via DOMCTL.
> 
> Just to clarify the compatibility issue: there's a third element between
> Xen and the introspection application, the Linux kernel, which needs to
> be fairly recent for the whole ioreq machinery to work. The QEMU code
> also seems to fall back to the old way of working if that's the case.
> 

That's correct. For the IOREQ server there is a fall-back mechanism for when
privcmd doesn't support resource mapping.

> This means that there's a choice to be made here: either we keep
> backwards compatibility with the old vm_event interface (in which case
> we can't drop the waitqueue code), or we switch to the new one and leave
> older setups in the dust (but there's less code duplication and we can
> get rid of the waitqueue code).
> 

I don't know what your compatibility model is. QEMU needs to maintain 
compatibility across various different versions of Xen and Linux so there are 
many shims and much compat code. You may not need this.

> In any event, it's not very clear (to me, at least) how the envisioned
> ioreq replacement should work. I assume we're meant to use the whole
> infrastructure (as opposed to what we're now doing, which is to only use
> the map-hypervisor-memory part), i.e. both mapping and signaling. Could
> we discuss this in more detail? Are there any docs on this or ioreq
> minimal clients (like xen-access.c is for vm_event) we might use?
> 

I don't know how much of the infrastructure will be re-usable for you. Resource 
mapping itself is supposed to be generic, not specific to IOREQ server. Indeed 
it already supports grant table mapping too. So IMO you should at least expose 
your shared pages using this mechanism.
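
(Rough sketch of that suggestion from the toolstack side. The xenforeignmemory_map_resource() call is the existing libxenforeignmemory API; XENMEM_resource_vm_event is a hypothetical resource type used here purely for illustration, and error handling/cleanup via xenforeignmemory_unmap_resource() is omitted.)

#include <sys/mman.h>
#include <xenforeignmemory.h>

#define XENMEM_resource_vm_event 3   /* hypothetical type, illustration only */

static void *map_vm_event_pages(xenforeignmemory_handle *fmem, domid_t domid,
                                unsigned int nr_frames,
                                xenforeignmemory_resource_handle **fres)
{
    void *addr = NULL;

    /* Map nr_frames of the (hypothetical) vm_event resource, id 0,
     * starting at frame 0, read/write. */
    *fres = xenforeignmemory_map_resource(fmem, domid,
                                          XENMEM_resource_vm_event, 0,
                                          0, nr_frames, &addr,
                                          PROT_READ | PROT_WRITE, 0);
    return *fres ? addr : NULL;
}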

It would be nice if you could also re-use ioreqs (and bufioreqs) for sending
your data, but they may well be a poor fit... Even so, you could probably
cut'n'paste some of the init and teardown code to set up your shared pages.

  Paul

> 
> Thanks,
> Razvan

Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2019-01-09 Thread Razvan Cojocaru

On 12/20/18 4:28 PM, Paul Durrant wrote:
>> -Original Message-
>> From: Petre Ovidiu PIRCALABU [mailto:ppircal...@bitdefender.com]
>> Sent: 20 December 2018 14:26
>> To: Paul Durrant; xen-devel@lists.xenproject.org
>> Cc: Stefano Stabellini; Wei Liu; Razvan Cojocaru; Konrad Rzeszutek Wilk;
>> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Julien Grall;
>> Tamas K Lengyel; Jan Beulich; Roger Pau Monne
>> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
>> for sync requests.
>>
>> On Thu, 2018-12-20 at 12:05 +, Paul Durrant wrote:
>>>> The memory for the asynchronous ring and the synchronous channels will
>>>> be allocated from domheap and mapped to the controlling domain using the
>>>> foreignmemory_map_resource interface. Unlike the current implementation,
>>>> the allocated pages are not part of the target DomU, so they will not be
>>>> reclaimed when the vm_event domain is disabled.
>>>
>>> Why re-invent the wheel here? The ioreq infrastructure already does
>>> pretty much everything you need AFAICT.
>>>
>>>    Paul
>>
>> I wanted to preserve as much as possible of the existing vm_event DOMCTL
>> interface and add only the necessary code to allocate and map the
>> vm_event_pages.
>
> That means we have two subsystems duplicating a lot of functionality though.
> It would be much better to use ioreq server if possible than provide a
> compatibility interface via DOMCTL.


Just to clarify the compatibility issue: there's a third element between
Xen and the introspection application, the Linux kernel, which needs to
be fairly recent for the whole ioreq machinery to work. The QEMU code
also seems to fall back to the old way of working if that's the case.


This means that there's a choice to be made here: either we keep 
backwards compatibility with the old vm_event interface (in which case 
we can't drop the waitqueue code), or we switch to the new one and leave 
older setups in the dust (but there's less code duplication and we can 
get rid of the waitqueue code).


In any event, it's not very clear (to me, at least) how the envisioned 
ioreq replacement should work. I assume we're meant to use the whole 
infrastructure (as opposed to what we're now doing, which is to only use 
the map-hypervisor-memory part), i.e. both mapping and signaling. Could 
we discuss this in more detail? Are there any docs on this or ioreq 
minimal clients (like xen-access.c is for vm_event) we might use?



Thanks,
Razvan


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2019-01-08 Thread Paul Durrant
> -Original Message-
> From: Petre Ovidiu PIRCALABU [mailto:ppircal...@bitdefender.com]
> Sent: 08 January 2019 16:14
> To: Paul Durrant; xen-devel@lists.xenproject.org
> Cc: Stefano Stabellini; Wei Liu; Razvan Cojocaru; Konrad Rzeszutek Wilk;
> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Julien Grall;
> Tamas K Lengyel; Jan Beulich; Roger Pau Monne
> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
> for sync requests.
> 
> On Tue, 2019-01-08 at 15:08 +, Paul Durrant wrote:
> > >
> > >
> > > Also, for the current vm_event implementation, other than using the
> > > hvm_params to specify the ring page gfn, I couldn't see any reason
> > > why
> > > it should be limited to HVM guests only. Is it feasible to assume
> > > the
> > > vm_event mechanism will not ever be extended to PV guests?
> > >
> >
> > Unless you limit things to HVM (and PVH) guests then I guess you'll
> > run into the same page ownership problems that ioreq server ran into
> > (due to a PV guest being allowed to map any page assigned to it...
> > including those that may be 'resources' it should not be able to see
> > directly). Any particular reason why you'd definitely want to support
> > pure PV guests?
> >
> >   Paul
> 
> No, but at this point I just want to make sure I'm not limiting the
> vm_events usage.

Ok, but given that a framework (i.e. ioreq) exists for HVM/PVH guests then IMO 
it makes sense to target those guests first and then figure out how to make 
things work for PV later if need be.

  Paul

> 
> Many thanks,
> Petre


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2019-01-08 Thread Petre Ovidiu PIRCALABU
On Tue, 2019-01-08 at 15:08 +, Paul Durrant wrote:
> > 
> > 
> > Also, for the current vm_event implementation, other than using the
> > hvm_params to specify the ring page gfn, I couldn't see any reason
> > why
> > it should be limited to HVM guests only. Is it feasible to assume
> > the
> > vm_event mechanism will not ever be extended to PV guests?
> > 
> 
> Unless you limit things to HVM (and PVH) guests then I guess you'll
> run into the same page ownership problems that ioreq server ran into
> (due to a PV guest being allowed to map any page assigned to it...
> including those that may be 'resources' it should not be able to see
> directly). Any particular reason why you'd definitely want to support
> pure PV guests?
> 
>   Paul

No, but at this point I just want to make sure I'm not limiting the
vm_events usage.

Many thanks,
Petre


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2019-01-08 Thread Paul Durrant
> -Original Message-
> From: Petre Ovidiu PIRCALABU [mailto:ppircal...@bitdefender.com]
> Sent: 08 January 2019 14:50
> To: Paul Durrant; xen-devel@lists.xenproject.org
> Cc: Stefano Stabellini; Wei Liu; Razvan Cojocaru; Konrad Rzeszutek Wilk;
> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Julien Grall;
> Tamas K Lengyel; Jan Beulich; Roger Pau Monne
> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
> for sync requests.
> 
> On Thu, 2018-12-20 at 12:05 +, Paul Durrant wrote:
> > > -Original Message-
> > >
> > > The memory for the asynchronous ring and the synchronous channels
> > > will
> > > be allocated from domheap and mapped to the controlling domain
> > > using the
> > > foreignmemory_map_resource interface. Unlike the current
> > > implementation,
> > > the allocated pages are not part of the target DomU, so they will
> > > not be
> > > reclaimed when the vm_event domain is disabled.
> >
> > Why re-invent the wheel here? The ioreq infrastructure already does
> > pretty much everything you need AFAICT.
> >
> >   Paul
> >
> 
> To my understanding, the current implementation of the ioreq server is
> limited to just 2 allocated pages (ioreq and bufioreq)

The current implementation is, but the direct resource mapping hypercall 
removed any limit from the API. It should be feasible to extend to as many 
pages as are needed, hence:

#define XENMEM_resource_ioreq_server_frame_ioreq(n) (1 + (n))

...in the public header.

> The main goal of the new vm_event implementation proposal is to be more
> flexible with respect to the number of pages needed for the
> request/response buffers (the slotted structure which holds one
> request/response per vcpu, or the ring spanning multiple pages in the
> previous proposal).
> Is it feasible to extend the current ioreq server implementation to
> dynamically allocate a specific number of pages?

Yes, absolutely. At the moment the single page for synchronous emulation 
requests limits HVM guests to 128 vcpus. When we want to go past this limit 
then multiple pages will be necessary... which is why the hypercall was 
designed the way it is.
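
(For reference, the 128 figure follows directly from the page arithmetic; a quick illustrative check, assuming sizeof(ioreq_t) is 32 bytes as in the public hvm/ioreq.h header.)

#include <assert.h>
#include <stdio.h>

int main(void)
{
    const unsigned int page_size  = 4096;  /* x86 page size */
    const unsigned int ioreq_size = 32;    /* assumed sizeof(ioreq_t) */

    /* One 4 KiB page holds 4096 / 32 = 128 synchronous slots; going past
     * that needs extra frames, reachable via
     * XENMEM_resource_ioreq_server_frame_ioreq(n). */
    assert(page_size / ioreq_size == 128);
    printf("synchronous ioreq slots per page: %u\n", page_size / ioreq_size);
    return 0;
}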

> 
> Also, for the current vm_event implementation, other than using the
> hvm_params to specify the ring page gfn, I couldn't see any reason why
> it should be limited to HVM guests only. Is it feasible to assume the
> vm_event mechanism will not ever be extended to PV guests?
> 

Unless you limit things to HVM (and PVH) guests then I guess you'll run into 
the same page ownership problems that ioreq server ran into (due to a PV guest 
being allowed to map any page assigned to it... including those that may be 
'resources' it should not be able to see directly). Any particular reason why 
you'd definitely want to support pure PV guests?

  Paul

> Many thanks,
> Petre
> 


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2019-01-08 Thread Petre Ovidiu PIRCALABU
On Thu, 2018-12-20 at 12:05 +, Paul Durrant wrote:
> > -Original Message-
> > 
> > The memory for the asynchronous ring and the synchronous channels
> > will
> > be allocated from domheap and mapped to the controlling domain
> > using the
> > foreignmemory_map_resource interface. Unlike the current
> > implementation,
> > the allocated pages are not part of the target DomU, so they will
> > not be
> > reclaimed when the vm_event domain is disabled.
> 
> Why re-invent the wheel here? The ioreq infrastructure already does
> pretty much everything you need AFAICT.
> 
>   Paul
> 

To my understanding, the current implementation of the ioreq server is
limited to just 2 allocated pages (ioreq and bufioreq). 
The main goal of the new vm_event implementation proposal is to be more
flexible with respect to the number of pages needed for the
request/response buffers (the slotted structure which holds one
request/response per vcpu, or the ring spanning multiple pages in the
previous proposal).
Is it feasible to extend the current ioreq server implementation to
dynamically allocate a specific number of pages?

Also, for the current vm_event implementation, other than using the
hvm_params to specify the ring page gfn, I couldn't see any reason why
it should be limited to HVM guests only. Is it feasible to assume the
vm_event mechanism will not ever be extended to PV guests?

Many thanks,
Petre



Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2018-12-24 Thread Julien Grall

Hi Paul,

On 12/20/18 2:28 PM, Paul Durrant wrote:
>> -Original Message-
>> From: Petre Ovidiu PIRCALABU [mailto:ppircal...@bitdefender.com]
>> Sent: 20 December 2018 14:26
>> To: Paul Durrant; xen-devel@lists.xenproject.org
>> Cc: Stefano Stabellini; Wei Liu; Razvan Cojocaru; Konrad Rzeszutek Wilk;
>> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Julien Grall;
>> Tamas K Lengyel; Jan Beulich; Roger Pau Monne
>> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
>> for sync requests.
>>
>> On Thu, 2018-12-20 at 12:05 +, Paul Durrant wrote:
>>>> The memory for the asynchronous ring and the synchronous channels will
>>>> be allocated from domheap and mapped to the controlling domain using the
>>>> foreignmemory_map_resource interface. Unlike the current implementation,
>>>> the allocated pages are not part of the target DomU, so they will not be
>>>> reclaimed when the vm_event domain is disabled.
>>>
>>> Why re-invent the wheel here? The ioreq infrastructure already does
>>> pretty much everything you need AFAICT.
>>>
>>>    Paul
>>
>> I wanted to preserve as much as possible of the existing vm_event DOMCTL
>> interface and add only the necessary code to allocate and map the
>> vm_event_pages.
>
> That means we have two subsystems duplicating a lot of functionality though.
> It would be much better to use ioreq server if possible than provide a
> compatibility interface via DOMCTL.
>
>> Also, to my knowledge, the ioreq server is only supported for x86 hvm
>> targets. I didn't want to add an extra limitation to the vm_event
>> system.
>
> I believe Julien is already porting it to ARM.

FWIW, yes I have a port of ioreq for Arm. Still cleaning up the code.

Cheers,

--
Julien Grall


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2018-12-20 Thread Jan Beulich
>>> On 20.12.18 at 15:28, Paul Durrant wrote:
>>  -Original Message-
>> From: Petre Ovidiu PIRCALABU [mailto:ppircal...@bitdefender.com]
>> Sent: 20 December 2018 14:26
>> To: Paul Durrant; xen-devel@lists.xenproject.org
>> Cc: Stefano Stabellini; Wei Liu; Razvan Cojocaru; Konrad Rzeszutek Wilk;
>> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Julien Grall;
>> Tamas K Lengyel; Jan Beulich; Roger Pau Monne
>> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
>> for sync requests.
>> 
>> On Thu, 2018-12-20 at 12:05 +, Paul Durrant wrote:
>> > > The memory for the asynchronous ring and the synchronous channels
>> > > will
>> > > be allocated from domheap and mapped to the controlling domain
>> > > using the
>> > > foreignmemory_map_resource interface. Unlike the current
>> > > implementation,
>> > > the allocated pages are not part of the target DomU, so they will
>> > > not be
>> > > reclaimed when the vm_event domain is disabled.
>> >
>> > Why re-invent the wheel here? The ioreq infrastructure already does
>> > pretty much everything you need AFAICT.
>> >
>> >   Paul
>> 
>> I wanted to preserve as much as possible of the existing vm_event DOMCTL
>> interface and add only the necessary code to allocate and map the
>> vm_event_pages.
> 
> That means we have two subsystems duplicating a lot of functionality though. 
> It would be much better to use ioreq server if possible than provide a 
> compatibility interface via DOMCTL.

+1 from me, fwiw.

Jan




Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2018-12-20 Thread Paul Durrant
> -Original Message-
> From: Petre Ovidiu PIRCALABU [mailto:ppircal...@bitdefender.com]
> Sent: 20 December 2018 14:26
> To: Paul Durrant; xen-devel@lists.xenproject.org
> Cc: Stefano Stabellini; Wei Liu; Razvan Cojocaru; Konrad Rzeszutek Wilk;
> George Dunlap; Andrew Cooper; Ian Jackson; Tim (Xen.org); Julien Grall;
> Tamas K Lengyel; Jan Beulich; Roger Pau Monne
> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
> for sync requests.
> 
> On Thu, 2018-12-20 at 12:05 +, Paul Durrant wrote:
> > > The memory for the asynchronous ring and the synchronous channels
> > > will
> > > be allocated from domheap and mapped to the controlling domain
> > > using the
> > > foreignmemory_map_resource interface. Unlike the current
> > > implementation,
> > > the allocated pages are not part of the target DomU, so they will
> > > not be
> > > reclaimed when the vm_event domain is disabled.
> >
> > Why re-invent the wheel here? The ioreq infrastructure already does
> > pretty much everything you need AFAICT.
> >
> >   Paul
> 
> I wanted to preserve as much as possible of the existing vm_event DOMCTL
> interface and add only the necessary code to allocate and map the
> vm_event_pages.

That means we have two subsystems duplicating a lot of functionality though. It 
would be much better to use ioreq server if possible than provide a 
compatibility interface via DOMCTL.

> Also, to my knowledge, the ioreq server is only supported for x86 hvm
> targets. I didn't want to add an extra limitation to the vm_event
> system.

I believe Julien is already porting it to ARM.

  Paul

> //Petre


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2018-12-20 Thread Petre Ovidiu PIRCALABU
On Thu, 2018-12-20 at 12:05 +, Paul Durrant wrote:
> > The memory for the asynchronous ring and the synchronous channels
> > will
> > be allocated from domheap and mapped to the controlling domain
> > using the
> > foreignmemory_map_resource interface. Unlike the current
> > implementation,
> > the allocated pages are not part of the target DomU, so they will
> > not be
> > reclaimed when the vm_event domain is disabled.
> 
> Why re-invent the wheel here? The ioreq infrastructure already does
> pretty much everything you need AFAICT.
> 
>   Paul

I wanted to preserve as much as possible of the existing vm_event DOMCTL
interface and add only the necessary code to allocate and map the
vm_event_pages.
Also, to my knowledge, the ioreq server is only supported for x86 hvm
targets. I didn't want to add an extra limitation to the vm_event
system.
//Petre


Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2018-12-20 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf
> Of Petre Pircalabu
> Sent: 19 December 2018 18:52
> To: xen-devel@lists.xenproject.org
> Cc: Petre Pircalabu; Stefano Stabellini; Wei Liu; Razvan Cojocaru;
> Konrad Rzeszutek Wilk; George Dunlap; Andrew Cooper; Ian Jackson;
> Tim (Xen.org); Julien Grall; Tamas K Lengyel; Jan Beulich; Roger Pau Monne
> Subject: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for
> sync requests.
> 
> In high throughput introspection scenarios where lots of monitor
> vm_events are generated, the ring buffer can fill up before the monitor
> application gets a chance to handle all the requests thus blocking
> other vcpus which will have to wait for a slot to become available.
> 
> This patch adds support for a different mechanism to handle synchronous
> vm_event requests / responses. As each synchronous request pauses the
> vcpu until the corresponding response is handled, it can be stored in
> a slotted memory buffer (one per vcpu) shared between the hypervisor and
> the controlling domain. The asynchronous vm_event requests will be sent
> to the controlling domain using a ring buffer, but without blocking the
> vcpu as no response is required.
> 
> The memory for the asynchronous ring and the synchronous channels will
> be allocated from domheap and mapped to the controlling domain using the
> foreignmemory_map_resource interface. Unlike the current implementation,
> the allocated pages are not part of the target DomU, so they will not be
> reclaimed when the vm_event domain is disabled.

Why re-invent the wheel here? The ioreq infrastructure already does pretty much 
everything you need AFAICT.

  Paul

> 
> Signed-off-by: Petre Pircalabu 
> ---
>  tools/libxc/include/xenctrl.h |  11 +
>  tools/libxc/xc_monitor.c  |  36 +++
>  tools/libxc/xc_private.h  |  14 ++
>  tools/libxc/xc_vm_event.c |  74 +-
>  xen/arch/x86/mm.c |   7 +
>  xen/common/vm_event.c | 515 ++
>  xen/include/public/domctl.h   |  25 +-
>  xen/include/public/memory.h   |   2 +
>  xen/include/public/vm_event.h |  15 ++
>  xen/include/xen/vm_event.h|   4 +
>  10 files changed, 660 insertions(+), 43 deletions(-)
> 
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index de0b990..fad8bc4 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -2012,6 +2012,17 @@ int xc_get_mem_access(xc_interface *xch, uint32_t
> domain_id,
>   * Caller has to unmap this page when done.
>   */
>  void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t
> *port);
> +
> +struct xenforeignmemory_resource_handle *xc_monitor_enable_ex(
> +xc_interface *xch,
> +uint32_t domain_id,
> +void **_ring_buffer,
> +uint32_t ring_frames,
> +uint32_t *ring_port,
> +void **_sync_buffer,
> +uint32_t *sync_ports,
> +uint32_t nr_sync_channels);
> +
>  int xc_monitor_disable(xc_interface *xch, uint32_t domain_id);
>  int xc_monitor_resume(xc_interface *xch, uint32_t domain_id);
>  /*
> diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
> index 718fe8b..4ceb528 100644
> --- a/tools/libxc/xc_monitor.c
> +++ b/tools/libxc/xc_monitor.c
> @@ -49,6 +49,42 @@ void *xc_monitor_enable(xc_interface *xch, uint32_t
> domain_id, uint32_t *port)
>  return buffer;
>  }
> 
> +struct xenforeignmemory_resource_handle *xc_monitor_enable_ex(
> +xc_interface *xch,
> +uint32_t domain_id,
> +void **_ring_buffer,
> +uint32_t ring_frames,
> +uint32_t *ring_port,
> +void **_sync_buffer,
> +uint32_t *sync_ports,
> +uint32_t nr_sync_channels)
> +{
> +xenforeignmemory_resource_handle *fres;
> +int saved_errno;
> +
> +/* Pause the domain for ring page setup */
> +if ( xc_domain_pause(xch, domain_id) )
> +{
> +PERROR("Unable to pause domain\n");
> +return NULL;
> +}
> +
> +fres = xc_vm_event_enable_ex(xch, domain_id,
> XEN_VM_EVENT_TYPE_MONITOR,
> +_ring_buffer, ring_frames, ring_port,
> +_sync_buffer, sync_ports,
> nr_sync_channels);
> +
> +saved_errno = errno;
> +if ( xc_domain_unpause(xch, domain_id) )
> +{
> +if ( fres )
> +saved_errno = errno;
> +PERROR("Unable to unpause domain");
> +}
> +
> +errno = saved_errno;
> +return fres;
> +}
> +
>  int xc_monitor_disable(xc_interface *xch, uint32_t domain_id)
>  {

[Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.

2018-12-19 Thread Petre Pircalabu
In high throughput introspection scenarios where lots of monitor
vm_events are generated, the ring buffer can fill up before the monitor
application gets a chance to handle all the requests thus blocking
other vcpus which will have to wait for a slot to become available.

This patch adds support for a different mechanism to handle synchronous
vm_event requests / responses. As each synchronous request pauses the
vcpu until the corresponding response is handled, it can be stored in
a slotted memory buffer (one per vcpu) shared between the hypervisor and
the controlling domain. The asynchronous vm_event requests will be sent
to the controlling domain using a ring buffer, but without blocking the
vcpu as no response is required.

The memory for the asynchronous ring and the synchronous channels will
be allocated from domheap and mapped to the controlling domain using the
foreignmemory_map_resource interface. Unlike the current implementation,
the allocated pages are not part of the target DomU, so they will not be
reclaimed when the vm_event domain is disabled.
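
(Purely illustrative sketch of the consumer side of such a slotted synchronous channel. The slot layout and state values below are hypothetical stand-ins, not the structures introduced by this patch, and a real implementation would need proper memory barriers.)

#include <stdint.h>

enum slot_state { SLOT_FREE = 0, SLOT_REQUEST = 1, SLOT_RESPONSE = 2 };

struct sync_slot {                  /* one per vcpu, in the shared buffer */
    volatile uint32_t state;
    uint8_t payload[252];           /* vm_event request/response area */
};

/* Called when the per-vcpu notification fires: process the request in
 * place, then flip the state so Xen can pick up the response and unpause
 * the waiting vcpu. */
static void handle_sync_slot(struct sync_slot *slot,
                             void (*process)(uint8_t *payload))
{
    if ( slot->state != SLOT_REQUEST )
        return;

    process(slot->payload);         /* writes the response into the slot */
    slot->state = SLOT_RESPONSE;
}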

Signed-off-by: Petre Pircalabu 
---
 tools/libxc/include/xenctrl.h |  11 +
 tools/libxc/xc_monitor.c  |  36 +++
 tools/libxc/xc_private.h  |  14 ++
 tools/libxc/xc_vm_event.c |  74 +-
 xen/arch/x86/mm.c |   7 +
 xen/common/vm_event.c | 515 ++
 xen/include/public/domctl.h   |  25 +-
 xen/include/public/memory.h   |   2 +
 xen/include/public/vm_event.h |  15 ++
 xen/include/xen/vm_event.h|   4 +
 10 files changed, 660 insertions(+), 43 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index de0b990..fad8bc4 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2012,6 +2012,17 @@ int xc_get_mem_access(xc_interface *xch, uint32_t 
domain_id,
  * Caller has to unmap this page when done.
  */
 void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *port);
+
+struct xenforeignmemory_resource_handle *xc_monitor_enable_ex(
+xc_interface *xch,
+uint32_t domain_id,
+void **_ring_buffer,
+uint32_t ring_frames,
+uint32_t *ring_port,
+void **_sync_buffer,
+uint32_t *sync_ports,
+uint32_t nr_sync_channels);
+
 int xc_monitor_disable(xc_interface *xch, uint32_t domain_id);
 int xc_monitor_resume(xc_interface *xch, uint32_t domain_id);
 /*
diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
index 718fe8b..4ceb528 100644
--- a/tools/libxc/xc_monitor.c
+++ b/tools/libxc/xc_monitor.c
@@ -49,6 +49,42 @@ void *xc_monitor_enable(xc_interface *xch, uint32_t 
domain_id, uint32_t *port)
 return buffer;
 }
 
+struct xenforeignmemory_resource_handle *xc_monitor_enable_ex(
+xc_interface *xch,
+uint32_t domain_id,
+void **_ring_buffer,
+uint32_t ring_frames,
+uint32_t *ring_port,
+void **_sync_buffer,
+uint32_t *sync_ports,
+uint32_t nr_sync_channels)
+{
+xenforeignmemory_resource_handle *fres;
+int saved_errno;
+
+/* Pause the domain for ring page setup */
+if ( xc_domain_pause(xch, domain_id) )
+{
+PERROR("Unable to pause domain\n");
+return NULL;
+}
+
+fres = xc_vm_event_enable_ex(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR,
+_ring_buffer, ring_frames, ring_port,
+_sync_buffer, sync_ports, nr_sync_channels);
+
+saved_errno = errno;
+if ( xc_domain_unpause(xch, domain_id) )
+{
+if ( fres )
+saved_errno = errno;
+PERROR("Unable to unpause domain");
+}
+
+errno = saved_errno;
+return fres;
+}
+
 int xc_monitor_disable(xc_interface *xch, uint32_t domain_id)
 {
 return xc_vm_event_control(xch, domain_id,
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 482451c..1f70223 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -420,6 +420,20 @@ int xc_vm_event_control(xc_interface *xch, uint32_t 
domain_id, unsigned int op,
 void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type,
  uint32_t *port);
 
+/*
+ * Enables vm_event for using the xenforeignmemory_map_resource interface.
+ * The vm_event type can be XEN_VM_EVENT_TYPE_(PAGING/MONITOR/SHARING).
+ *
+ * The function returns:
+ *  - A ring for asynchronous vm_events.
+ *  - A slotted buffer for synchronous vm_events (one slot per vcpu)
+ *  - xenforeignmemory_resource_handle used exclusively for resource cleanup
+ */
+xenforeignmemory_resource_handle *xc_vm_event_enable_ex(xc_interface *xch,
+uint32_t domain_id, int type,
+void **_ring_buffer, uint32_t ring_frames, uint32_t *ring_port,
+void **_sync_buffer, uint32_t *sync_ports, uint32_t nr_sync_channels);
+
 int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);
 
 #endif /* __XC_PRIVATE_H__ */
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index 4fc2