RE: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-10-01 Thread Paul Durrant
> -Original Message-
> From: Jan Beulich 
> Sent: 01 October 2020 09:49
> To: Julien Grall 
> Cc: Oleksandr ; xen-devel@lists.xenproject.org; 
> p...@xen.org; 'Oleksandr
> Tyshchenko' ; 'Andrew Cooper' 
> ; 'George
> Dunlap' ; 'Ian Jackson' 
> ; 'Stefano Stabellini'
> ; 'Wei Liu' ; 'Roger Pau Monné' 
> ; 'Jun
> Nakajima' ; 'Kevin Tian' ; 'Tim 
> Deegan' ;
> 'Julien Grall' 
> Subject: Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
> 
> On 30.09.2020 19:47, Julien Grall wrote:
> > Regarding the fix itself, I am not sure what sort of synchronization we
> > can do. Are you suggesting to wait for the I/O to complete? If so, how
> > do we handle the case the IOREQ server died?
> 
> In simple cases retrying the entire request may be an option. However,
> if the server died after some parts of a multi-part operation were
> done already, I guess the resulting loss of state is bad enough to
> warrant crashing the guest. This shouldn't be much different from e.g.
> a device disappearing from a bare metal system - any partial I/O done
> to/from it will leave the machine in an unpredictable state, which
> may be too difficult to recover from without rebooting. (Of course,
> staying with this analogy, it may also be okay to simply consider
> the operation "complete", leaving it to the guest to recover. The
> main issue on the hypervisor side then would be to ensure we don't
> expose any uninitialized [due to not having got written to] data to
> the guest.)
> 

I'll try to take a look today and come up with a patch.

  Paul




Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-10-01 Thread Jan Beulich
On 30.09.2020 19:47, Julien Grall wrote:
> Regarding the fix itself, I am not sure what sort of synchronization we 
> can do. Are you suggesting to wait for the I/O to complete? If so, how 
> do we handle the case the IOREQ server died?

In simple cases retrying the entire request may be an option. However,
if the server died after some parts of a multi-part operation were
done already, I guess the resulting loss of state is bad enough to
warrant crashing the guest. This shouldn't be much different from e.g.
a device disappearing from a bare metal system - any partial I/O done
to/from it will leave the machine in an unpredictable state, which
may be too difficult to recover from without rebooting. (Of course,
staying with this analogy, it may also be okay to simply consider
the operation "complete", leaving it to the guest to recover. The
main issue on the hypervisor side then would be to ensure we don't
expose any uninitialized [due to not having got written to] data to
the guest.)
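
[Editorial sketch, not part of the series: one way that last option could look
inside hvm_wait_for_io(), reusing the existing ioreq_t fields. The policy
itself is hypothetical and only illustrates "complete the I/O with sanitized
data instead of crashing".]

    /* Hypothetical handling once the emulator is known to have gone away:
     * treat the request as complete rather than crashing the guest, and
     * return all ones for reads so no uninitialized data is exposed,
     * mirroring a read from a missing device on bare metal. */
    if ( p->dir == IOREQ_READ )
        p->data = ~0UL;
    sv->pending = false;
    return true;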

Jan



RE: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-10-01 Thread Paul Durrant
> -Original Message-
> From: Julien Grall 
> Sent: 30 September 2020 18:48
> To: Oleksandr ; xen-devel@lists.xenproject.org
> Cc: p...@xen.org; 'Oleksandr Tyshchenko' ; 
> 'Andrew Cooper'
> ; 'George Dunlap' ; 'Ian 
> Jackson'
> ; 'Jan Beulich' ; 'Stefano 
> Stabellini'
> ; 'Wei Liu' ; 'Roger Pau Monné' 
> ; 'Jun
> Nakajima' ; 'Kevin Tian' ; 'Tim 
> Deegan' ;
> 'Julien Grall' 
> Subject: Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
> 
> Hi,
> 
> On 30/09/2020 14:39, Oleksandr wrote:
> >
> > Hi Julien
> >
> > On 25.09.20 11:19, Paul Durrant wrote:
> >>> -Original Message-
> >>> From: Julien Grall 
> >>> Sent: 24 September 2020 19:01
> >>> To: Oleksandr Tyshchenko ;
> >>> xen-devel@lists.xenproject.org
> >>> Cc: Oleksandr Tyshchenko ; Andrew
> >>> Cooper ;
> >>> George Dunlap ; Ian Jackson
> >>> ; Jan Beulich
> >>> ; Stefano Stabellini ; Wei
> >>> Liu ; Roger Pau
> >>> Monné ; Paul Durrant ; Jun
> >>> Nakajima ;
> >>> Kevin Tian ; Tim Deegan ; Julien
> >>> Grall 
> >>> Subject: Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
> >>>
> >>>
> >>>
> >>> On 10/09/2020 21:21, Oleksandr Tyshchenko wrote:
> >>>> +static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
> >>>> +{
> >>>> +unsigned int prev_state = STATE_IOREQ_NONE;
> >>>> +unsigned int state = p->state;
> >>>> +uint64_t data = ~0;
> >>>> +
> >>>> +smp_rmb();
> >>>> +
> >>>> +/*
> >>>> + * The only reason we should see this condition be false is
> >>>> when an
> >>>> + * emulator dying races with I/O being requested.
> >>>> + */
> >>>> +while ( likely(state != STATE_IOREQ_NONE) )
> >>>> +{
> >>>> +if ( unlikely(state < prev_state) )
> >>>> +{
> >>>> +gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition
> >>>> %u -> %u\n",
> >>>> + prev_state, state);
> >>>> +sv->pending = false;
> >>>> +domain_crash(sv->vcpu->domain);
> >>>> +return false; /* bail */
> >>>> +}
> >>>> +
> >>>> +switch ( prev_state = state )
> >>>> +{
> >>>> +case STATE_IORESP_READY: /* IORESP_READY -> NONE */
> >>>> +p->state = STATE_IOREQ_NONE;
> >>>> +data = p->data;
> >>>> +break;
> >>>> +
> >>>> +case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} ->
> >>>> IORESP_READY */
> >>>> +case STATE_IOREQ_INPROCESS:
> >>>> +wait_on_xen_event_channel(sv->ioreq_evtchn,
> >>>> +  ({ state = p->state;
> >>>> + smp_rmb();
> >>>> + state != prev_state; }));
> >>>> +continue;
> >>> As I pointed out previously [1], this helper was implemented with the
> >>> expectation that wait_on_xen_event_channel() will not return if the vCPU
> >>> got rescheduled.
> >>>
> >>> However, this assumption doesn't hold on Arm.
> >>>
> >>> I can see two solutions:
> >>>  1) Re-execute the caller
> >>>  2) Prevent an IOREQ from disappearing until the loop finishes.
> >>>
> >>> @Paul any opinions?
> >> The ioreq control plane is largely predicated on there being no
> >> pending I/O when the state of a server is modified, and it is assumed
> >> that domain_pause() is sufficient to achieve this. If that assumption
> >> doesn't hold then we need additional synchronization.
> 
> I don't think this assumption even holds on x86 because domain_pause()
> will not wait for I/O to finish.
> 
> On x86, the context switch will reset the stack and therefore
> wait_on_xen_event_channel() is not going to return. Instead,
> handle_hvm_io_completion() will be called from the tail callback in
> context_switch(). get_pending_vcpu() would return NULL as the IOREQ
> server disappeared. Although, it is not clear whether the vCPU will
> continue to run (or not).

Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-30 Thread Julien Grall

Hi,

On 30/09/2020 14:39, Oleksandr wrote:


Hi Julien

On 25.09.20 11:19, Paul Durrant wrote:

-Original Message-
From: Julien Grall 
Sent: 24 September 2020 19:01
To: Oleksandr Tyshchenko ; 
xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko ; Andrew 
Cooper ;
George Dunlap ; Ian Jackson 
; Jan Beulich
; Stefano Stabellini ; Wei 
Liu ; Roger Pau
Monné ; Paul Durrant ; Jun 
Nakajima ;
Kevin Tian ; Tim Deegan ; Julien 
Grall 

Subject: Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common



On 10/09/2020 21:21, Oleksandr Tyshchenko wrote:

+static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+{
+    unsigned int prev_state = STATE_IOREQ_NONE;
+    unsigned int state = p->state;
+    uint64_t data = ~0;
+
+    smp_rmb();
+
+    /*
+ * The only reason we should see this condition be false is 
when an

+ * emulator dying races with I/O being requested.
+ */
+    while ( likely(state != STATE_IOREQ_NONE) )
+    {
+    if ( unlikely(state < prev_state) )
+    {
+    gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition 
%u -> %u\n",

+ prev_state, state);
+    sv->pending = false;
+    domain_crash(sv->vcpu->domain);
+    return false; /* bail */
+    }
+
+    switch ( prev_state = state )
+    {
+    case STATE_IORESP_READY: /* IORESP_READY -> NONE */
+    p->state = STATE_IOREQ_NONE;
+    data = p->data;
+    break;
+
+    case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> 
IORESP_READY */

+    case STATE_IOREQ_INPROCESS:
+    wait_on_xen_event_channel(sv->ioreq_evtchn,
+  ({ state = p->state;
+ smp_rmb();
+ state != prev_state; }));
+    continue;

As I pointed out previously [1], this helper was implemented with the
expectation that wait_on_xen_event_channel() will not return if the vCPU
got rescheduled.

However, this assumption doesn't hold on Arm.

I can see two solutions:
 1) Re-execute the caller
 2) Prevent an IOREQ from disappearing until the loop finishes.

@Paul any opinions?
The ioreq control plane is largely predicated on there being no 
pending I/O when the state of a server is modified, and it is assumed 
that domain_pause() is sufficient to achieve this. If that assumption 
doesn't hold then we need additional synchronization.


I don't think this assumption even holds on x86 because domain_pause()
will not wait for I/O to finish.


On x86, the context switch will reset the stack and therefore 
wait_on_xen_event_channel() is not going to return. Instead, 
handle_hvm_io_completion() will be called from the tail callback in 
context_switch(). get_pending_vcpu() would return NULL as the IOREQ 
server disappeared. Although, it is not clear whether the vCPU will 
continue to run (or not).


Did I miss anything?

Regarding the fix itself, I am not sure what sort of synchronization we 
can do. Are you suggesting to wait for the I/O to complete? If so, how 
do we handle the case the IOREQ server died?
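
[Editorial sketch of option 1 above ("re-execute the caller"), purely for
illustration: rather than looping inside hvm_wait_for_io(), return to
handle_hvm_io_completion() after every wakeup and redo the server lookup, so
a server destroyed while the vCPU was descheduled is simply no longer found.
get_pending_vcpu() and get_ioreq() are the existing helpers referred to in
this thread; hvm_wait_for_io_once() is hypothetical.]

    bool handle_hvm_io_completion(struct vcpu *v)
    {
        struct hvm_ioreq_server *s;
        struct hvm_ioreq_vcpu *sv;

        /* Redo the lookup after every wakeup; if the IOREQ server has
         * vanished in the meantime, it is simply not found any more. */
        while ( (s = get_pending_vcpu(v, &sv)) != NULL )
        {
            /* Hypothetical single-wait variant of hvm_wait_for_io(). */
            if ( !hvm_wait_for_io_once(sv, get_ioreq(s, v)) )
                return false;   /* the domain was crashed */
        }

        /* ... the existing completion handling would continue here ... */
        return true;
    }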


May I please clarify whether a concern still stands (with what was said 
above) and we need an additional synchronization on Arm?


Yes the concern is still there (See above).

Cheers,

--
Julien Grall



Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-30 Thread Oleksandr



Hi Julien

On 25.09.20 11:19, Paul Durrant wrote:

-Original Message-
From: Julien Grall 
Sent: 24 September 2020 19:01
To: Oleksandr Tyshchenko ; xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko ; Andrew Cooper 
;
George Dunlap ; Ian Jackson 
; Jan Beulich
; Stefano Stabellini ; Wei Liu 
; Roger Pau
Monné ; Paul Durrant ; Jun Nakajima 
;
Kevin Tian ; Tim Deegan ; Julien Grall 

Subject: Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common



On 10/09/2020 21:21, Oleksandr Tyshchenko wrote:

+static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+{
+unsigned int prev_state = STATE_IOREQ_NONE;
+unsigned int state = p->state;
+uint64_t data = ~0;
+
+smp_rmb();
+
+/*
+ * The only reason we should see this condition be false is when an
+ * emulator dying races with I/O being requested.
+ */
+while ( likely(state != STATE_IOREQ_NONE) )
+{
+if ( unlikely(state < prev_state) )
+{
+gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
+ prev_state, state);
+sv->pending = false;
+domain_crash(sv->vcpu->domain);
+return false; /* bail */
+}
+
+switch ( prev_state = state )
+{
+case STATE_IORESP_READY: /* IORESP_READY -> NONE */
+p->state = STATE_IOREQ_NONE;
+data = p->data;
+break;
+
+case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
+case STATE_IOREQ_INPROCESS:
+wait_on_xen_event_channel(sv->ioreq_evtchn,
+  ({ state = p->state;
+ smp_rmb();
+ state != prev_state; }));
+continue;

As I pointed out previously [1], this helper was implemented with the
expectation that wait_on_xen_event_channel() will not return if the vCPU
got rescheduled.

However, this assumption doesn't hold on Arm.

I can see two solutions:
 1) Re-execute the caller
 2) Prevent an IOREQ from disappearing until the loop finishes.

@Paul any opinions?

The ioreq control plane is largely predicated on there being no pending I/O 
when the state of a server is modified, and it is assumed that domain_pause() 
is sufficient to achieve this. If that assumption doesn't hold then we need 
additional synchronization.

   Paul

May I please clarify whether a concern still stands (with what was said 
above) and we need an additional synchronization on Arm?



--
Regards,

Oleksandr Tyshchenko




RE: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-25 Thread Paul Durrant
> -Original Message-
> From: Julien Grall 
> Sent: 24 September 2020 19:01
> To: Oleksandr Tyshchenko ; xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko ; Andrew Cooper 
> ;
> George Dunlap ; Ian Jackson 
> ; Jan Beulich
> ; Stefano Stabellini ; Wei Liu 
> ; Roger Pau
> Monné ; Paul Durrant ; Jun Nakajima 
> ;
> Kevin Tian ; Tim Deegan ; Julien Grall 
> 
> Subject: Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
> 
> 
> 
> On 10/09/2020 21:21, Oleksandr Tyshchenko wrote:
> > +static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
> > +{
> > +unsigned int prev_state = STATE_IOREQ_NONE;
> > +unsigned int state = p->state;
> > +uint64_t data = ~0;
> > +
> > +smp_rmb();
> > +
> > +/*
> > + * The only reason we should see this condition be false is when an
> > + * emulator dying races with I/O being requested.
> > + */
> > +while ( likely(state != STATE_IOREQ_NONE) )
> > +{
> > +if ( unlikely(state < prev_state) )
> > +{
> > +gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> 
> > %u\n",
> > + prev_state, state);
> > +sv->pending = false;
> > +domain_crash(sv->vcpu->domain);
> > +return false; /* bail */
> > +}
> > +
> > +switch ( prev_state = state )
> > +{
> > +case STATE_IORESP_READY: /* IORESP_READY -> NONE */
> > +p->state = STATE_IOREQ_NONE;
> > +data = p->data;
> > +break;
> > +
> > +case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> 
> > IORESP_READY */
> > +case STATE_IOREQ_INPROCESS:
> > +wait_on_xen_event_channel(sv->ioreq_evtchn,
> > +  ({ state = p->state;
> > + smp_rmb();
> > + state != prev_state; }));
> > +continue;
> 
> As I pointed out previously [1], this helper was implemented with the
> expectation that wait_on_xen_event_channel() will not return if the vCPU
> got rescheduled.
> 
> However, this assumption doesn't hold on Arm.
> 
> I can see two solutions:
> 1) Re-execute the caller
> 2) Prevent an IOREQ from disappearing until the loop finishes.
> 
> @Paul any opinions?

The ioreq control plane is largely predicated on there being no pending I/O 
when the state of a server is modified, and it is assumed that domain_pause() 
is sufficient to achieve this. If that assumption doesn't hold then we need 
additional synchronization.
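
[For illustration only, the assumption described above amounts to a pattern
like the following around any control-plane change; a simplified sketch, not
the actual implementation:]

    /* Server state changes are bracketed by a domain pause, on the
     * expectation that a paused domain has no vCPU sitting in
     * hvm_wait_for_io() with I/O still in flight. */
    domain_pause(d);
    /* ... tear down or reconfigure the IOREQ server ... */
    domain_unpause(d);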

  Paul

> 
> Cheers,
> 
> [1]
> https://lore.kernel.org/xen-devel/6bfc3920-8f29-188c-cff4-2b99dabe1...@xen.org/
> 
> 
> --
> Julien Grall




Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-24 Thread Julien Grall




On 10/09/2020 21:21, Oleksandr Tyshchenko wrote:

+static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+{
+unsigned int prev_state = STATE_IOREQ_NONE;
+unsigned int state = p->state;
+uint64_t data = ~0;
+
+smp_rmb();
+
+/*
+ * The only reason we should see this condition be false is when an
+ * emulator dying races with I/O being requested.
+ */
+while ( likely(state != STATE_IOREQ_NONE) )
+{
+if ( unlikely(state < prev_state) )
+{
+gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
+ prev_state, state);
+sv->pending = false;
+domain_crash(sv->vcpu->domain);
+return false; /* bail */
+}
+
+switch ( prev_state = state )
+{
+case STATE_IORESP_READY: /* IORESP_READY -> NONE */
+p->state = STATE_IOREQ_NONE;
+data = p->data;
+break;
+
+case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
+case STATE_IOREQ_INPROCESS:
+wait_on_xen_event_channel(sv->ioreq_evtchn,
+  ({ state = p->state;
+ smp_rmb();
+ state != prev_state; }));
+continue;


As I pointed out previously [1], this helper was implemented with the 
expectation that wait_on_xen_event_channel() will not return if the vCPU 
got rescheduled.


However, this assumption doesn't hold on Arm.

I can see two solutions:
   1) Re-execute the caller
   2) Prevent an IOREQ from disappearing until the loop finishes.

@Paul any opinions?

Cheers,

[1] 
https://lore.kernel.org/xen-devel/6bfc3920-8f29-188c-cff4-2b99dabe1...@xen.org/



--
Julien Grall



Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-24 Thread Jan Beulich
On 24.09.2020 17:38, Oleksandr wrote:
> On 24.09.20 13:58, Jan Beulich wrote:
>> On 23.09.2020 14:28, Oleksandr wrote:
>>> On 22.09.20 18:52, Jan Beulich wrote:
 On 22.09.2020 17:05, Oleksandr wrote:
> @@ -247,7 +247,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct
> hvm_ioreq_server *s)
> for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
> {
> if ( !test_and_clear_bit(i, &s->ioreq_gfn.legacy_mask) )
> -    return _gfn(d->arch.hvm.params[i]);
> +    return _gfn(ioreq_get_params(d, i));
> }
>
> return INVALID_GFN;
> @@ -279,7 +279,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct
> hvm_ioreq_server *s,
>
> for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
> {
> -    if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
> +    if ( gfn_eq(gfn, _gfn(ioreq_get_params(d, i))) )
>  break;
> }
> if ( i > HVM_PARAM_BUFIOREQ_PFN )
 And these two are needed by Arm? Shouldn't Arm exclusively use
 the new model, via acquire_resource?
>>> I dropped HVMOP plumbing on Arm as it was requested. Only acquire
>>> interface should be used.
>>> This code is not supposed to be called on Arm, but it is a part of
>>> common code and we need to find a way how to abstract away *arch.hvm.params*
>> ... here I wonder whether you aren't moving more pieces to common
>> code than are actually arch-independent. I think a prereq step
>> missing so far is to clearly identify which pieces of the code
>> are arch-independent, and work towards abstracting away all of the
>> arch-dependent ones.
> Unfortunately, not all things are clear and obvious from the very beginning.
> I have to admit, I didn't even imagine earlier that *arch.hvm.* usage in 
> the common code is a layering violation issue.
> Hopefully, now it is clear as well as the steps to avoid it in future.
> 
> ...
> 
> 
> I saw your advise (but haven't answered yet there) regarding splitting 
> struct hvm_vcpu_io in
> [PATCH V1 09/16] arm/ioreq: Introduce arch specific bits for IOREQ/DM 
> features. I think, it makes sense.
> The only remaining bits I would like to clarify here is 
> *arch.hvm.params*. Should we really want to move HVM params field to the 
> common code
> rather than abstracting away by proposed macro?

I don't think I suggested doing so. In fact I recall having voiced
my expectation that Arm wouldn't use this at all. So yes, I agree
this better wouldn't be moved out of arch.hvm, but instead accesses
be abstracted by another means.

Jan



Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-24 Thread Oleksandr



On 24.09.20 13:58, Jan Beulich wrote:

Hi Jan


On 23.09.2020 14:28, Oleksandr wrote:

On 22.09.20 18:52, Jan Beulich wrote:

On 22.09.2020 17:05, Oleksandr wrote:

3. *arch.hvm.hvm_io*: We could also use the following:

      #define ioreq_get_io_completion(v) ((v)->arch.hvm.hvm_io.io_completion)
      #define ioreq_get_io_req(v) ((v)->arch.hvm.hvm_io.io_req)

      This way struct hvm_vcpu_io won't be used in common code as well.

But if Arm needs similar field, why keep them in arch.hvm.hvm_io?

> Yes, Arm needs some of the fields, but not all of them as x86 has.
For example Arm needs only the following (at least in the context of
this series):

+struct hvm_vcpu_io {
+    /* I/O request in flight to device model. */
+    enum hvm_io_completion io_completion;
+    ioreq_t    io_req;
+
+    /*
+ * HVM emulation:
+ *  Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn.
+ *  The latter is known to be an MMIO frame (not RAM).
+ *  This translation is only valid for accesses as per @mmio_access.
+ */
+    struct npfec    mmio_access;
+    unsigned long   mmio_gla;
+    unsigned long   mmio_gpfn;
+};

But for x86 the number of fields is quite a bit bigger. If they were
applicable to both arches in the same way (as is the case with the
ioreq_server struct) I would move it to the common struct domain. I didn't
think of a better idea than just abstracting accesses to these two fields
(used in common ioreq.c) by macros.

I'm surprised Arm would need all the three last fields that you
mention. Both here and ...


@@ -247,7 +247,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct
hvm_ioreq_server *s)
    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
    {
    if ( !test_and_clear_bit(i, &s->ioreq_gfn.legacy_mask) )
-    return _gfn(d->arch.hvm.params[i]);
+    return _gfn(ioreq_get_params(d, i));
    }

    return INVALID_GFN;
@@ -279,7 +279,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct
hvm_ioreq_server *s,

    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
    {
-    if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
+    if ( gfn_eq(gfn, _gfn(ioreq_get_params(d, i))) )
     break;
    }
    if ( i > HVM_PARAM_BUFIOREQ_PFN )

And these two are needed by Arm? Shouldn't Arm exclusively use
the new model, via acquire_resource?

I dropped HVMOP plumbing on Arm as it was requested. Only acquire
interface should be used.
This code is not supposed to be called on Arm, but it is a part of
common code and we need to find a way how to abstract away *arch.hvm.params*

... here I wonder whether you aren't moving more pieces to common
code than are actually arch-independent. I think a prereq step
missing so far is to clearly identify which pieces of the code
are arch-independent, and work towards abstracting away all of the
arch-dependent ones.

Unfortunately, not all things are clear and obvious from the very beginning.
I have to admit, I didn't even imagine earlier that *arch.hvm.* usage in 
the common code is a layering violation issue.

Hopefully it is now clear, as are the steps to avoid it in future.

...


I saw your advise (but haven't answered yet there) regarding splitting 
struct hvm_vcpu_io in
[PATCH V1 09/16] arm/ioreq: Introduce arch specific bits for IOREQ/DM 
features. I think, it makes sense.
The only remaining bits I would like to clarify here is 
*arch.hvm.params*. Should we really want to move HVM params field to the 
common code
rather than abstracting away by proposed macro? Although stores a few 
IOREQ params, it is not a (completely) IOREQ stuff and is specific to 
the architecture.


--
Regards,

Oleksandr Tyshchenko




Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-24 Thread Jan Beulich
On 23.09.2020 14:28, Oleksandr wrote:
> On 22.09.20 18:52, Jan Beulich wrote:
>> On 22.09.2020 17:05, Oleksandr wrote:
>>> 3. *arch.hvm.hvm_io*: We could also use the following:
>>>
>>>      #define ioreq_get_io_completion(v) ((v)->arch.hvm.hvm_io.io_completion)
>>>      #define ioreq_get_io_req(v) ((v)->arch.hvm.hvm_io.io_req)
>>>
>>>      This way struct hvm_vcpu_io won't be used in common code as well.
>> But if Arm needs similar field, why keep them in arch.hvm.hvm_io?
> Yes, Arm needs some of the fields, but not all of them as x86 has.
> For example Arm needs only the following (at least in the context of 
> this series):
> 
> +struct hvm_vcpu_io {
> +    /* I/O request in flight to device model. */
> +    enum hvm_io_completion io_completion;
> +    ioreq_t    io_req;
> +
> +    /*
> + * HVM emulation:
> + *  Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn.
> + *  The latter is known to be an MMIO frame (not RAM).
> + *  This translation is only valid for accesses as per @mmio_access.
> + */
> +    struct npfec    mmio_access;
> +    unsigned long   mmio_gla;
> +    unsigned long   mmio_gpfn;
> +};
> 
> But for x86 the number of fields is quite a bit bigger. If they were
> applicable to both arches in the same way (as is the case with the
> ioreq_server struct) I would move it to the common struct domain. I didn't
> think of a better idea than just abstracting accesses to these two fields
> (used in common ioreq.c) by macros.

I'm surprised Arm would need all the three last fields that you
mention. Both here and ...

>>> @@ -247,7 +247,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct
>>> hvm_ioreq_server *s)
>>>    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
>>>    {
>>>    if ( !test_and_clear_bit(i, &s->ioreq_gfn.legacy_mask) )
>>> -    return _gfn(d->arch.hvm.params[i]);
>>> +    return _gfn(ioreq_get_params(d, i));
>>>    }
>>>
>>>    return INVALID_GFN;
>>> @@ -279,7 +279,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct
>>> hvm_ioreq_server *s,
>>>
>>>    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
>>>    {
>>> -    if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
>>> +    if ( gfn_eq(gfn, _gfn(ioreq_get_params(d, i))) )
>>>     break;
>>>    }
>>>    if ( i > HVM_PARAM_BUFIOREQ_PFN )
>> And these two are needed by Arm? Shouldn't Arm exclusively use
>> the new model, via acquire_resource?
> I dropped HVMOP plumbing on Arm as it was requested. Only acquire 
> interface should be used.
> This code is not supposed to be called on Arm, but it is a part of 
> common code and we need to find a way how to abstract away *arch.hvm.params*

... here I wonder whether you aren't moving more pieces to common
code than are actually arch-independent. I think a prereq step
missing so far is to clearly identify which pieces of the code
are arch-independent, and work towards abstracting away all of the
arch-dependent ones.

Jan



Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-23 Thread Oleksandr



On 22.09.20 18:52, Jan Beulich wrote:

Hi Jan


On 22.09.2020 17:05, Oleksandr wrote:

2. *arch.hvm.params*: Two functions that use it
(hvm_alloc_legacy_ioreq_gfn/hvm_free_legacy_ioreq_gfn) either go into
arch code completely or
      specific macro is used in common code:

     #define ioreq_get_params(d, i) ((d)->arch.hvm.params[i])

If Arm has the concept of params, then perhaps. But I didn't think
Arm does ...
I think it has to some degree: there is handling of
HVMOP_set_param/HVMOP_get_param, and

there is also code to set up HVM_PARAM_CALLBACK_IRQ.





     I would prefer a macro to moving the functions to arch code (they are
equal and should remain in sync).

Yes, if the rest of the code is identical, I agree it's better to
merely abstract away small pieces like this one.


ok





3. *arch.hvm.hvm_io*: We could also use the following:

     #define ioreq_get_io_completion(v) ((v)->arch.hvm.hvm_io.io_completion)
     #define ioreq_get_io_req(v) ((v)->arch.hvm.hvm_io.io_req)

     This way struct hvm_vcpu_io won't be used in common code as well.

But if Arm needs similar field, why keep them in arch.hvm.hvm_io?

Yes, Arm needs some of the fields, but not all of them as x86 has.
For example Arm needs only the following (at least in the context of 
this series):


+struct hvm_vcpu_io {
+    /* I/O request in flight to device model. */
+    enum hvm_io_completion io_completion;
+    ioreq_t    io_req;
+
+    /*
+ * HVM emulation:
+ *  Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn.
+ *  The latter is known to be an MMIO frame (not RAM).
+ *  This translation is only valid for accesses as per @mmio_access.
+ */
+    struct npfec    mmio_access;
+    unsigned long   mmio_gla;
+    unsigned long   mmio_gpfn;
+};

But for x86 the number of fields is quite a bit bigger. If they were
applicable to both arches in the same way (as is the case with the
ioreq_server struct) I would move it to the common struct domain. I didn't
think of a better idea than just abstracting accesses to these two fields
(used in common ioreq.c) by macros.
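
[Editorial sketch of one possible split along the lines discussed here and in
patch 09 of the series; the struct names are guesses, not the actual code.
The two fields common ioreq.c needs would live in a common struct, while the
MMIO-emulation state stays per-arch.]

    struct vcpu_io {                     /* common: used by common/ioreq.c */
        /* I/O request in flight to device model. */
        enum hvm_io_completion io_completion;
        ioreq_t                io_req;
    };

    struct arch_vcpu_io {                /* x86-only emulation state */
        struct npfec  mmio_access;
        unsigned long mmio_gla;
        unsigned long mmio_gpfn;
        /* ... remaining x86-specific fields ... */
    };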






--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -194,7 +194,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu
*sv, ioreq_t *p)
   bool handle_hvm_io_completion(struct vcpu *v)
   {
   struct domain *d = v->domain;
> -    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
+    ioreq_t io_req = ioreq_get_io_req(v);
   struct hvm_ioreq_server *s;
   struct hvm_ioreq_vcpu *sv;
   enum hvm_io_completion io_completion;
@@ -209,14 +209,14 @@ bool handle_hvm_io_completion(struct vcpu *v)
   if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
   return false;

> -    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
> +    io_req.state = hvm_ioreq_needs_completion(&io_req) ?
   STATE_IORESP_READY : STATE_IOREQ_NONE;

This is unlikely to be correct - you're now updating an on-stack
copy of the ioreq_t instead of what vio points at.
Oh, thank you for pointing this out; I should have used ioreq_t *io_req =
&ioreq_get_io_req(v);
I don't like ioreq_get_io_req much; probably ioreq_req would sound a
little bit better?
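
[For reference, a minimal sketch of the pointer-based variant being talked
about, using the accessor name proposed earlier in this thread:]

    /* Keep a pointer to the vCPU's ioreq_t rather than a copy, so the
     * state update is visible outside this function. */
    ioreq_t *io_req = &ioreq_get_io_req(v);

    io_req->state = hvm_ioreq_needs_completion(io_req) ?
        STATE_IORESP_READY : STATE_IOREQ_NONE;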






   msix_write_completion(v);
   vcpu_end_shutdown_deferral(v);

-    io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
+    io_completion = ioreq_get_io_completion(v);
+    ioreq_get_io_completion(v) = HVMIO_no_completion;

I think it's at least odd to have an lvalue with this kind of a
name. Perhaps want to drop "get" if it's really meant to be used
like this.


ok





@@ -247,7 +247,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct
hvm_ioreq_server *s)
   for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
   {
+    if ( !test_and_clear_bit(i, &s->ioreq_gfn.legacy_mask) )
-    return _gfn(d->arch.hvm.params[i]);
+    return _gfn(ioreq_get_params(d, i));
   }

   return INVALID_GFN;
@@ -279,7 +279,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct
hvm_ioreq_server *s,

   for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
   {
-    if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
+    if ( gfn_eq(gfn, _gfn(ioreq_get_params(d, i))) )
    break;
   }
   if ( i > HVM_PARAM_BUFIOREQ_PFN )

And these two are needed by Arm? Shouldn't Arm exclusively use
the new model, via acquire_resource?
I dropped HVMOP plumbing on Arm as it was requested. Only the acquire
interface should be used.
This code is not supposed to be called on Arm, but it is a part of
common code, and we need to find a way to abstract away *arch.hvm.params*.

Am I correct?


--
Regards,

Oleksandr Tyshchenko




Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-22 Thread Jan Beulich
On 22.09.2020 17:05, Oleksandr wrote:
> 2. *arch.hvm.params*: Two functions that use it 
> (hvm_alloc_legacy_ioreq_gfn/hvm_free_legacy_ioreq_gfn) either go into 
> arch code completely or
>      specific macro is used in common code:
> 
>     #define ioreq_get_params(d, i) ((d)->arch.hvm.params[i])

If Arm has the concept of params, then perhaps. But I didn't think
Arm does ...

>     I would prefer a macro to moving the functions to arch code (they are
> equal and should remain in sync).

Yes, if the rest of the code is identical, I agree it's better to
merely abstract away small pieces like this one.
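
[Illustration only, using the names proposed in this thread: each arch would
provide its own definition so that common ioreq.c never dereferences arch.hvm
directly. The x86 line is the macro proposed above; the Arm side is left as a
comment since, per the discussion, the legacy-GFN paths are not expected to be
reachable there.]

    /* x86 (e.g. in asm-x86/hvm/ioreq.h): */
    #define ioreq_get_params(d, i) ((d)->arch.hvm.params[i])

    /* Arm: no definition needed if the code paths using it are never
     * compiled or reached there; otherwise map to whatever Arm stores. */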

> 3. *arch.hvm.hvm_io*: We could also use the following:
> 
>     #define ioreq_get_io_completion(v) ((v)->arch.hvm.hvm_io.io_completion)
>     #define ioreq_get_io_req(v) ((v)->arch.hvm.hvm_io.io_req)
> 
>     This way struct hvm_vcpu_io won't be used in common code as well.

But if Arm needs similar field, why keep them in arch.hvm.hvm_io?

> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -194,7 +194,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu 
> *sv, ioreq_t *p)
>   bool handle_hvm_io_completion(struct vcpu *v)
>   {
>   struct domain *d = v->domain;
> -    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
> +    ioreq_t io_req = ioreq_get_io_req(v);
>   struct hvm_ioreq_server *s;
>   struct hvm_ioreq_vcpu *sv;
>   enum hvm_io_completion io_completion;
> @@ -209,14 +209,14 @@ bool handle_hvm_io_completion(struct vcpu *v)
>   if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
>   return false;
> 
> -    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
> +    io_req.state = hvm_ioreq_needs_completion(&io_req) ?
>   STATE_IORESP_READY : STATE_IOREQ_NONE;

This is unlikely to be correct - you're now updating an on-stack
copy of the ioreq_t instead of what vio points at.

>   msix_write_completion(v);
>   vcpu_end_shutdown_deferral(v);
> 
> -    io_completion = vio->io_completion;
> -    vio->io_completion = HVMIO_no_completion;
> +    io_completion = ioreq_get_io_completion(v);
> +    ioreq_get_io_completion(v) = HVMIO_no_completion;

I think it's at least odd to have an lvalue with this kind of a
name. Perhaps want to drop "get" if it's really meant to be used
like this.

> @@ -247,7 +247,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct 
> hvm_ioreq_server *s)
>   for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
>   {
> if ( !test_and_clear_bit(i, &s->ioreq_gfn.legacy_mask) )
> -    return _gfn(d->arch.hvm.params[i]);
> +    return _gfn(ioreq_get_params(d, i));
>   }
> 
>   return INVALID_GFN;
> @@ -279,7 +279,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct 
> hvm_ioreq_server *s,
> 
>   for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
>   {
> -    if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
> +    if ( gfn_eq(gfn, _gfn(ioreq_get_params(d, i))) )
>    break;
>   }
>   if ( i > HVM_PARAM_BUFIOREQ_PFN )

And these two are needed by Arm? Shouldn't Arm exclusively use
the new model, via acquire_resource?

Jan



Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-22 Thread Oleksandr



On 22.09.20 13:54, Jan Beulich wrote:

Hi Jan


On 22.09.2020 11:58, Oleksandr wrote:

On 22.09.20 09:33, Jan Beulich wrote:

On 21.09.2020 21:02, Oleksandr wrote:

On 14.09.20 17:17, Jan Beulich wrote:

On 10.09.2020 22:21, Oleksandr Tyshchenko wrote:

+#define GET_IOREQ_SERVER(d, id) \
+(d)->arch.hvm.ioreq_server.server[id]

arch.hvm.* feels like a layering violation when used in this header.

Got it. The only reason why GET_IOREQ_SERVER is here is inline
get_ioreq_server(). I will make it non-inline and move both to
common/ioreq.c.

Which won't make the layering violation go away. It's still
common rather than per-arch code then. As suggested elsewhere,
I think the whole ioreq_server struct wants to move into
struct domain itself, perhaps inside a new #ifdef (iirc one of
the patches introduces a suitable Kconfig option).

Well, your advice regarding ioreq_server sounds reasonable, but the
common ioreq.c
will still have other *arch.hvm.* accesses for both vcpu and domain. So it
looks like the other involved structs should be moved
into the *common* struct domain/vcpu itself, correct? Some of them could be
moved easily since they contain the same fields (arch.hvm.ioreq_gfn),
but some of them couldn't, and that seems to require pulling a lot of changes
into the Xen code (arch.hvm.params, arch.hvm.hvm_io), I am afraid.
Or did I miss something?

arch.hvm.params, iirc, is an x86 concept, and hence would need
abstracting away anyway. I expect this will be common pattern:
Either you want things to become generic (structure fields
living directly in struct domain, or at least not under arch.hvm),
or things need abstracting for per-arch handling.

Got it.

Let me please clarify one more question.
In order to avoid the layering violation in the current patch, we could
take the following approach.


1. *arch.hvm.ioreq_gfn* and *arch.hvm.ioreq_server*: Both structs go 
into common struct domain.


2. *arch.hvm.params*: Two functions that use it 
(hvm_alloc_legacy_ioreq_gfn/hvm_free_legacy_ioreq_gfn) either go into 
arch code completely or

    specific macro is used in common code:

   #define ioreq_get_params(d, i) ((d)->arch.hvm.params[i])

   I would prefer a macro to moving the functions to arch code (they are
equal and should remain in sync).


3. *arch.hvm.hvm_io*: We could also use the following:

   #define ioreq_get_io_completion(v) ((v)->arch.hvm.hvm_io.io_completion)
   #define ioreq_get_io_req(v) ((v)->arch.hvm.hvm_io.io_req)

   This way struct hvm_vcpu_io won't be used in common code as well.

   Are #2, 3 appropriate to go with?


A dirty, non-tested patch (which applies on top of the whole series and
targets Arm only) shows how it could look.



diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 2e85ea7..5894bdab 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -194,7 +194,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu 
*sv, ioreq_t *p)

 bool handle_hvm_io_completion(struct vcpu *v)
 {
 struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
+    ioreq_t io_req = ioreq_get_io_req(v);
 struct hvm_ioreq_server *s;
 struct hvm_ioreq_vcpu *sv;
 enum hvm_io_completion io_completion;
@@ -209,14 +209,14 @@ bool handle_hvm_io_completion(struct vcpu *v)
 if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
 return false;

-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
+    io_req.state = hvm_ioreq_needs_completion(&io_req) ?
 STATE_IORESP_READY : STATE_IOREQ_NONE;

 msix_write_completion(v);
 vcpu_end_shutdown_deferral(v);

-    io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
+    io_completion = ioreq_get_io_completion(v);
+    ioreq_get_io_completion(v) = HVMIO_no_completion;

 switch ( io_completion )
 {
@@ -227,8 +227,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
 return ioreq_handle_complete_mmio();

 case HVMIO_pio_completion:
-    return handle_pio(vio->io_req.addr, vio->io_req.size,
-  vio->io_req.dir);
+    return handle_pio(io_req.addr, io_req.size,
+  io_req.dir);

 default:
 return arch_handle_hvm_io_completion(io_completion);
@@ -247,7 +247,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct 
hvm_ioreq_server *s)

 for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
 {
 if ( !test_and_clear_bit(i, &s->ioreq_gfn.legacy_mask) )
-    return _gfn(d->arch.hvm.params[i]);
+    return _gfn(ioreq_get_params(d, i));
 }

 return INVALID_GFN;
@@ -279,7 +279,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct 
hvm_ioreq_server *s,


 for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
 {
-    if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
+    if ( gfn_eq(gfn, _gfn(ioreq_get_params(d, i))) )
  break;
 }
 if ( i > HVM_PARAM_BUFIOREQ_PFN )
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h

Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-22 Thread Jan Beulich
On 22.09.2020 11:58, Oleksandr wrote:
> On 22.09.20 09:33, Jan Beulich wrote:
>> On 21.09.2020 21:02, Oleksandr wrote:
>>> On 14.09.20 17:17, Jan Beulich wrote:
 On 10.09.2020 22:21, Oleksandr Tyshchenko wrote:
> +#define GET_IOREQ_SERVER(d, id) \
> +(d)->arch.hvm.ioreq_server.server[id]
 arch.hvm.* feels like a layering violation when used in this header.
>>> Got it. The only reason why GET_IOREQ_SERVER is here is inline
>>> get_ioreq_server(). I will make it non-inline and move both to
>>> common/ioreq.c.
>> Which won't make the layering violation go away. It's still
>> common rather than per-arch code then. As suggested elsewhere,
>> I think the whole ioreq_server struct wants to move into
>> struct domain itself, perhaps inside a new #ifdef (iirc one of
>> the patches introduces a suitable Kconfig option).
> Well, your advice regarding ioreq_server sounds reasonable, but the
> common ioreq.c
> will still have other *arch.hvm.* accesses for both vcpu and domain. So it
> looks like the other involved structs should be moved
> into the *common* struct domain/vcpu itself, correct? Some of them could be
> moved easily since they contain the same fields (arch.hvm.ioreq_gfn),
> but some of them couldn't, and that seems to require pulling a lot of changes
> into the Xen code (arch.hvm.params, arch.hvm.hvm_io), I am afraid.
> Or did I miss something?

arch.hvm.params, iirc, is an x86 concept, and hence would need
abstracting away anyway. I expect this will be common pattern:
Either you want things to become generic (structure fields
living directly in struct domain, or at least not under arch.hvm),
or things need abstracting for per-arch handling.

>> This goes
>> alongside my suggestion to drop the "hvm" prefixes and infixes
>> from involved function names.
> Well, I assume this request as well as the request above should be 
> addressed in the follow-up patches, as we want to keep the code movement 
> in the current patch as an (almost) verbatim copy.
> Am I correct?

The renaming could imo be done before or after the move, but within
a single series. Doing it (or some of it) during the move may be
acceptable, but this primarily depends on the overall effect on the
patch that this would have. I.e. the patch better wouldn't become
gigantic just because all the renaming gets done in one go, and it's
hundreds of places that need touching.

Jan



Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-22 Thread Oleksandr



On 22.09.20 09:33, Jan Beulich wrote:

Hi Jan


On 21.09.2020 21:02, Oleksandr wrote:

On 14.09.20 17:17, Jan Beulich wrote:

On 10.09.2020 22:21, Oleksandr Tyshchenko wrote:

--- /dev/null
+++ b/xen/include/xen/ioreq.h
@@ -0,0 +1,82 @@
+/*
+ * ioreq.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __IOREQ_H__
+#define __IOREQ_H__

__XEN_IOREQ_H__ please.

ok



+#include 
+
+#include 

Is this include really needed here (i.e. by the code further down in
the file, and hence by everyone including this header), or rather
just in a few specific .c files?

I think just in a few specific .c files, which are x86/hvm/ioreq.c and
common/ioreq.c now, and several other files later on (x86/hvm/dm.c,
arm/io.c, etc.).
Shall I include that header in these files instead?

Yes please, and please take this as a common guideline. While
private headers are often used to include things needed by all
of the (typically few) users of the header, non-private ones
shouldn't create unnecessary dependencies on other headers. As
you've said further up - you did run into hard to resolve
header dependencies yourself, and the practice of including
headers without strict need is one of the reasons of such
problems.


Got it.





+#define GET_IOREQ_SERVER(d, id) \
+(d)->arch.hvm.ioreq_server.server[id]

arch.hvm.* feels like a layering violation when used in this header.

Got it. The only reason why GET_IOREQ_SERVER is here is inline
get_ioreq_server(). I will make it non-inline and move both to
common/ioreq.c.

Which won't make the layering violation go away. It's still
common rather than per-arch code then. As suggested elsewhere,
I think the whole ioreq_server struct wants to move into
struct domain itself, perhaps inside a new #ifdef (iirc one of
the patches introduces a suitable Kconfig option).
Well, your advice regarding ioreq_server sounds reasonable, but the
common ioreq.c
will still have other *arch.hvm.* accesses for both vcpu and domain. So it
looks like the other involved structs should be moved
into the *common* struct domain/vcpu itself, correct? Some of them could be
moved easily since they contain the same fields (arch.hvm.ioreq_gfn),
but some of them couldn't, and that seems to require pulling a lot of changes
into the Xen code (arch.hvm.params, arch.hvm.hvm_io), I am afraid.

Or did I miss something?



This goes
alongside my suggestion to drop the "hvm" prefixes and infixes
from involved function names.
Well, I assume this request as well as the request above should be 
addressed in the follow-up patches, as we want to keep the code movement 
in the current patch as an (almost) verbatim copy.

Am I correct?

--
Regards,

Oleksandr Tyshchenko




Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-22 Thread Jan Beulich
On 21.09.2020 21:02, Oleksandr wrote:
> On 14.09.20 17:17, Jan Beulich wrote:
>> On 10.09.2020 22:21, Oleksandr Tyshchenko wrote:
>>> --- /dev/null
>>> +++ b/xen/include/xen/ioreq.h
>>> @@ -0,0 +1,82 @@
>>> +/*
>>> + * ioreq.h: Hardware virtual machine assist interface definitions.
>>> + *
>>> + * Copyright (c) 2016 Citrix Systems Inc.
>>> + *
>>> + * This program is free software; you can redistribute it and/or modify it
>>> + * under the terms and conditions of the GNU General Public License,
>>> + * version 2, as published by the Free Software Foundation.
>>> + *
>>> + * This program is distributed in the hope it will be useful, but WITHOUT
>>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License 
>>> for
>>> + * more details.
>>> + *
>>> + * You should have received a copy of the GNU General Public License along 
>>> with
>>> + * this program; If not, see <http://www.gnu.org/licenses/>.
>>> + */
>>> +
>>> +#ifndef __IOREQ_H__
>>> +#define __IOREQ_H__
>> __XEN_IOREQ_H__ please.
> 
> ok
> 
> 
>>
>>> +#include 
>>> +
>>> +#include 
>> Is this include really needed here (i.e. by the code further down in
>> the file, and hence by everyone including this header), or rather
>> just in a few specific .c files?
> I think just in a few specific .c files, which are x86/hvm/ioreq.c and
> common/ioreq.c now, and several other files later on (x86/hvm/dm.c,
> arm/io.c, etc.).
> Shall I include that header in these files instead?

Yes please, and please take this as a common guideline. While
private headers are often used to include things needed by all
of the (typically few) users of the header, non-private ones
shouldn't create unnecessary dependencies on other headers. As
you've said further up - you did run into hard to resolve
header dependencies yourself, and the practice of including
headers without strict need is one of the reasons of such
problems.
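
[To illustrate the guideline, an editorial sketch of the shape the common
header could take: the guard named as requested above, forward declarations
instead of heavy includes, and only what every includer needs. The single
include shown is an assumption about what the prototypes require, not the
actual content of the patch.]

    #ifndef __XEN_IOREQ_H__
    #define __XEN_IOREQ_H__

    #include <xen/types.h>   /* assumed: only what the prototypes need */

    struct vcpu;
    struct domain;
    struct page_info;

    bool hvm_io_pending(struct vcpu *v);
    bool handle_hvm_io_completion(struct vcpu *v);
    bool is_ioreq_server_page(struct domain *d, const struct page_info *page);

    #endif /* __XEN_IOREQ_H__ */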

>>> +#define GET_IOREQ_SERVER(d, id) \
>>> +(d)->arch.hvm.ioreq_server.server[id]
>> arch.hvm.* feels like a layering violation when used in this header.
> Got it. The only reason why GET_IOREQ_SERVER is here is inline 
> get_ioreq_server(). I will make it non-inline and move both to 
> common/ioreq.c.

Which won't make the layering violation go away. It's still
common rather than per-arch code then. As suggested elsewhere,
I think the whole ioreq_server struct wants to move into
struct domain itself, perhaps inside a new #ifdef (iirc one of
the patches introduces a suitable Kconfig option). This goes
alongside my suggestion to drop the "hvm" prefixes and infixes
from involved function names.
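
[Editorial sketch of that suggestion, for illustration only; the field layout
and the CONFIG_IOREQ_SERVER option name are assumptions based on this
discussion, not the actual change:]

    struct domain {
        /* ... existing fields ... */
    #ifdef CONFIG_IOREQ_SERVER       /* option name assumed from the series */
        struct {
            spinlock_t               lock;
            struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
        } ioreq_server;
    #endif
    };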

Jan



Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-21 Thread Oleksandr



On 14.09.20 17:17, Jan Beulich wrote:

Hi Jan


On 10.09.2020 22:21, Oleksandr Tyshchenko wrote:

---
  MAINTAINERS |8 +-
  xen/arch/x86/Kconfig|1 +
  xen/arch/x86/hvm/dm.c   |2 +-
  xen/arch/x86/hvm/emulate.c  |2 +-
  xen/arch/x86/hvm/hvm.c  |2 +-
  xen/arch/x86/hvm/io.c   |2 +-
  xen/arch/x86/hvm/ioreq.c| 1425 +--
  xen/arch/x86/hvm/stdvga.c   |2 +-
  xen/arch/x86/hvm/vmx/vvmx.c |3 +-
  xen/arch/x86/mm.c   |2 +-
  xen/arch/x86/mm/shadow/common.c |2 +-
  xen/common/Kconfig  |3 +
  xen/common/Makefile |1 +
  xen/common/ioreq.c  | 1410 ++

This suggests it was almost the entire file which got moved. It would
be really nice if you could convince git to show the diff here, rather
than removal and addition of 1400 lines.

Additionally I wonder whether what's left in the original file wouldn't
better become inline functions now. If this was done in the previous
patch, the change would be truly a rename then, I think.
Last time, when trying to make something inline in arch files for the RFC
series (I don't remember exactly what it was for),
I got completely stuck with build issues due to the header
(inter-)dependencies, which I failed to resolve.

Anyway I got your point and will try to experiment with that.





--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -19,41 +19,12 @@
  #ifndef __ASM_X86_HVM_IOREQ_H__
  #define __ASM_X86_HVM_IOREQ_H__
  
-bool hvm_io_pending(struct vcpu *v);

-bool handle_hvm_io_completion(struct vcpu *v);
-bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
+#include 
+#include 
+#include 

Are all three really needed here? Especially the last one strikes me as
odd.


We can leave only #include  here and move #include 
 to x86/hvm/ioreq.c.

Also #include  could be dropped.





--- /dev/null
+++ b/xen/include/xen/ioreq.h
@@ -0,0 +1,82 @@
+/*
+ * ioreq.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __IOREQ_H__
+#define __IOREQ_H__

__XEN_IOREQ_H__ please.


ok





+#include 
+
+#include 

Is this include really needed here (i.e. by the code further down in
the file, and hence by everyone including this header), or rather
just in a few specific .c files?
I think just in a few specific .c files, which are x86/hvm/ioreq.c and
common/ioreq.c now, and several other files later on (x86/hvm/dm.c,
arm/io.c, etc.).

Shall I include that header in these files instead?





+#define GET_IOREQ_SERVER(d, id) \
+(d)->arch.hvm.ioreq_server.server[id]

arch.hvm.* feels like a layering violation when used in this header.
Got it. The only reason why GET_IOREQ_SERVER is here is inline 
get_ioreq_server(). I will make it non-inline and move both to 
common/ioreq.c.
I assume this layering violation issue applies to
hvm_domain_has_ioreq_server() introduced in

[PATCH V1 11/16] xen/ioreq: Introduce hvm_domain_has_ioreq_server()


--
Regards,

Oleksandr Tyshchenko




Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common

2020-09-14 Thread Jan Beulich
On 10.09.2020 22:21, Oleksandr Tyshchenko wrote:
> ---
>  MAINTAINERS |8 +-
>  xen/arch/x86/Kconfig|1 +
>  xen/arch/x86/hvm/dm.c   |2 +-
>  xen/arch/x86/hvm/emulate.c  |2 +-
>  xen/arch/x86/hvm/hvm.c  |2 +-
>  xen/arch/x86/hvm/io.c   |2 +-
>  xen/arch/x86/hvm/ioreq.c| 1425 
> +--
>  xen/arch/x86/hvm/stdvga.c   |2 +-
>  xen/arch/x86/hvm/vmx/vvmx.c |3 +-
>  xen/arch/x86/mm.c   |2 +-
>  xen/arch/x86/mm/shadow/common.c |2 +-
>  xen/common/Kconfig  |3 +
>  xen/common/Makefile |1 +
>  xen/common/ioreq.c  | 1410 ++

This suggests it was almost the entire file which got moved. It would
be really nice if you could convince git to show the diff here, rather
than removal and addition of 1400 lines.

Additionally I wonder whether what's left in the original file wouldn't
better become inline functions now. If this was done in the previous
patch, the change would be truly a rename then, I think.

> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -19,41 +19,12 @@
>  #ifndef __ASM_X86_HVM_IOREQ_H__
>  #define __ASM_X86_HVM_IOREQ_H__
>  
> -bool hvm_io_pending(struct vcpu *v);
> -bool handle_hvm_io_completion(struct vcpu *v);
> -bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
> +#include 
> +#include 
> +#include 

Are all three really needed here? Especially the last one strikes me as
odd.

> --- /dev/null
> +++ b/xen/include/xen/ioreq.h
> @@ -0,0 +1,82 @@
> +/*
> + * ioreq.h: Hardware virtual machine assist interface definitions.
> + *
> + * Copyright (c) 2016 Citrix Systems Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along 
> with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __IOREQ_H__
> +#define __IOREQ_H__

__XEN_IOREQ_H__ please.

> +#include 
> +
> +#include 

Is this include really needed here (i.e. by the code further down in
the file, and hence by everyone including this header), or rather
just in a few specific .c files?

> +#define GET_IOREQ_SERVER(d, id) \
> +(d)->arch.hvm.ioreq_server.server[id]

arch.hvm.* feels like a layering violation when used in this header.

Jan