Re: [Xen-devel] [PATCH v10 5/6] passthrough/io: don't migrate pirq when it is delivered through VT-d PI

2017-03-20 Thread Chao Gao
On Mon, Mar 20, 2017 at 06:50:37AM -0600, Jan Beulich wrote:
 On 20.03.17 at 06:22,  wrote:
>> On Mon, Mar 20, 2017 at 04:26:10AM -0600, Jan Beulich wrote:
>> On 20.03.17 at 03:38,  wrote:
 On Mon, Mar 20, 2017 at 03:18:18AM -0600, Jan Beulich wrote:
  spin_unlock(&d->event_lock);
  if ( dest_vcpu_id >= 0 )
  hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>>>
>>>... right after the lock release.
>> 
>> @via_pi is also consumed during vCPU migration.
>
>But the event lock isn't being held around the checking of the
>field, so putting the setting of the field under lock is of no use.
>
>> I just think the event_lock protects read/write accesses to struct
>> hvm_pirq_dpci. To prevent a potential race (e.g. we can't use VT-d PI for
>> the 1st binding but can for the 2nd, yet the 1st update to via_pi somehow
>> ends up overriding the 2nd), and to avoid complicating the set of fields
>> the event_lock protects, I'm inclined to keep this inside the
>> event_lock-ed region as long as it doesn't introduce other issues.
>
>I certainly don't object to properly synchronizing things here,
>but then both producing and consuming side need to hold
>respective locks. Otherwise the best you can hope for is to
>reduce timing windows; you won't eliminate them though.

You are right. I am convinced.
Thanks
Chao

>
>Jan
>


Re: [Xen-devel] [PATCH v10 5/6] passthrough/io: don't migrate pirq when it is delivered through VT-d PI

2017-03-20 Thread Jan Beulich
>>> On 20.03.17 at 06:22,  wrote:
> On Mon, Mar 20, 2017 at 04:26:10AM -0600, Jan Beulich wrote:
> On 20.03.17 at 03:38,  wrote:
>>> On Mon, Mar 20, 2017 at 03:18:18AM -0600, Jan Beulich wrote:
>>> On 20.03.17 at 02:59,  wrote:
> On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
> On 15.03.17 at 06:11,  wrote:
>>> +if ( iommu_intpost )
>>> +{
>>> +vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
>>> + pirq_dpci->gmsi.gvec);
>>
>>This is now outside of the event_lock-ed region - is this safe?
> 
> do you mean it is __inside__ the event_lock-ed region?

Oops, indeed.

> I think it is safe, since the functions called by pi_find_dest_vcpu() are
> almost the same as hvm_girq_dest_2_vcpu_id().

The question then needs to be put differently: Is this needed?
You shouldn't move into a locked region what doesn't need to
be there.
>>> 
>>> Yes, because we rely on the result to set @via_pi, which needs to be
>>> protected by the lock.
>>
>>I don't follow: You set via_pi for hvm_migrate_pirqs() to consume,
>>yet the call to that function sits ...
>>
>>> +}
>>>  spin_unlock(&d->event_lock);
>>>  if ( dest_vcpu_id >= 0 )
>>>  hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>>
>>... right after the lock release.
> 
> @via_pi is also consumed during vCPU migration.

But the event lock isn't being held around the checking of the
field, so putting the setting of the field under lock is of no use.

> I just think the event_lock protects read/write accesses to struct
> hvm_pirq_dpci. To prevent a potential race (e.g. we can't use VT-d PI for
> the 1st binding but can for the 2nd, yet the 1st update to via_pi somehow
> ends up overriding the 2nd), and to avoid complicating the set of fields
> the event_lock protects, I'm inclined to keep this inside the
> event_lock-ed region as long as it doesn't introduce other issues.

I certainly don't object to properly synchronizing things here,
but then both producing and consuming side need to hold
respective locks. Otherwise the best you can hope for is to
reduce timing windows; you won't eliminate them though.
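
E.g. (purely to illustrate the pattern, not a concrete placement) the
consuming side would need something like

    /* hypothetical reader, just to show the locking pattern */
    spin_lock(&d->event_lock);
    posted = pirq_dpci->gmsi.via_pi;
    spin_unlock(&d->event_lock);

to pair with the lock the producing side already holds in
pt_irq_create_bind().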

Jan


Re: [Xen-devel] [PATCH v10 5/6] passthrough/io: don't migrate pirq when it is delivered through VT-d PI

2017-03-20 Thread Chao Gao
On Mon, Mar 20, 2017 at 04:26:10AM -0600, Jan Beulich wrote:
 On 20.03.17 at 03:38,  wrote:
>> On Mon, Mar 20, 2017 at 03:18:18AM -0600, Jan Beulich wrote:
>> On 20.03.17 at 02:59,  wrote:
 On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
 On 15.03.17 at 06:11,  wrote:
>> +if ( iommu_intpost )
>> +{
>> +vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
>> + pirq_dpci->gmsi.gvec);
>
>This is now outside of the event_lock-ed region - is this safe?
 
 do you mean it is __inside__ the event_lock-ed region?
>>>
>>>Oops, indeed.
>>>
 I think it is safe, since the functions called by pi_find_dest_vcpu() are
 almost the same as hvm_girq_dest_2_vcpu_id().
>>>
>>>The question then needs to be put differently: Is this needed?
>>>You shouldn't move into a locked region what doesn't need to
>>>be there.
>> 
>> Yes, because we rely on the result to set @via_pi, which needs to be
>> protected by the lock.
>
>I don't follow: You set via_pi for hvm_migrate_pirqs() to consume,
>yet the call to that function sits ...
>
>> +}
>>  spin_unlock(&d->event_lock);
>>  if ( dest_vcpu_id >= 0 )
>>  hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>
>... right after the lock release.

@via_pi is also consumed during vCPU migration.
I just think the event_lock protects read/write accesses to struct
hvm_pirq_dpci. To prevent a potential race (e.g. we can't use VT-d PI for
the 1st binding but can for the 2nd, yet the 1st update to via_pi somehow
ends up overriding the 2nd), and to avoid complicating the set of fields
the event_lock protects, I'm inclined to keep this inside the
event_lock-ed region as long as it doesn't introduce other issues.

>
>(continuing from above) This could then use vcpu too.
 
 I don't understand. In this patch, vcpu is always null when VT-d PI is not
 enabled. Do you mean something like below: 
 
 if ( dest_vcpu_id >= 0 )
 vcpu = d->vcpu[dest_vcpu_id];
 if ( iommu_intpost && (!vcpu) && (delivery_mode == dest_LowestPrio) )
 {
 vcpu = vector_hashing_dest(d, dest, dest_mode,pirq_dpci->gmsi.gvec);
 ...
 }
 spin_unlock(&d->event_lock);
 if ( vcpu )
 hvm_migrate_pirqs(vcpu);
>>>
>>>Yes, along these lines, albeit I think the first if() is more complicated
>>>than it needs to be.
>> 
>> We can make it simple like this:
>> 
>> struct vcpu *vcpu;
>> ...
>> 
>> vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
>
>Ouch - there were three if()s, and I missed the first one, i.e. I
>really meant the middle of them.

Yes. The middle if() may actually be wrong rather than just complicated: the
code above changes the way we choose the destination vcpu when VT-d PI is
enabled. Even when we can derive a single destination vcpu for LowestPrio, we
should still use vector_hashing_dest() to calculate the destination vcpu, as
the original logic does.

I think the condition below is better:
if ( iommu_intpost && (delivery_mode == dest_LowestPrio) )
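
Putting it together, the tail of the MSI binding path could then look
roughly like this (untested sketch only; the via_pi naming and its locking
are still subject to the other sub-thread):

    /* Only a sketch: assumes vcpu is otherwise unused at this point. */
    vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
    pirq_dpci->gmsi.via_pi = false;
    if ( iommu_intpost && (delivery_mode == dest_LowestPrio) )
        /* Keep the original behaviour: hash the destination even if a
         * single destination vcpu could be derived from dest_vcpu_id. */
        vcpu = vector_hashing_dest(d, dest, dest_mode,
                                   pirq_dpci->gmsi.gvec);
    if ( iommu_intpost && vcpu )
        pirq_dpci->gmsi.via_pi = true;
    spin_unlock(&d->event_lock);
    if ( vcpu )
        hvm_migrate_pirqs(vcpu);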

>
>Jan
>


Re: [Xen-devel] [PATCH v10 5/6] passthrough/io: don't migrate pirq when it is delivered through VT-d PI

2017-03-20 Thread Jan Beulich
>>> On 20.03.17 at 03:38,  wrote:
> On Mon, Mar 20, 2017 at 03:18:18AM -0600, Jan Beulich wrote:
> On 20.03.17 at 02:59,  wrote:
>>> On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
>>> On 15.03.17 at 06:11,  wrote:
> +if ( iommu_intpost )
> +{
> +vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
> + pirq_dpci->gmsi.gvec);

This is now outside of the event_lock-ed region - is this safe?
>>> 
>>> do you mean it is __inside__ the event_lock-ed region?
>>
>>Oops, indeed.
>>
>>> I think it is safe, since the functions called by pi_find_dest_vcpu() are
>>> almost the same as hvm_girq_dest_2_vcpu_id().
>>
>>The question then needs to be put differently: Is this needed?
>>You shouldn't move into a locked region what doesn't need to
>>be there.
> 
> Yes, because we rely on the result to set @via_pi, which needs to be
> protected by the lock.

I don't follow: You set via_pi for hvm_migrate_pirqs() to consume,
yet the call to that function sits ...

> +}
>  spin_unlock(&d->event_lock);
>  if ( dest_vcpu_id >= 0 )
>  hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);

... right after the lock release.

(continuing from above) This could then use vcpu too.
>>> 
>>> I don't understand. In this patch, vcpu is always null when VT-d PI is not
>>> enabled. Do you mean something like below: 
>>> 
>>> if ( dest_vcpu_id >= 0 )
>>> vcpu = d->vcpu[dest_vcpu_id];
>>> if ( iommu_intpost && (!vcpu) && (delivery_mode == dest_LowestPrio) )
>>> {
>>> vcpu = vector_hashing_dest(d, dest, dest_mode,pirq_dpci->gmsi.gvec);
>>> ...
>>> }
>>> spin_unlock(&d->event_lock);
>>> if ( vcpu )
>>> hvm_migrate_pirqs(vcpu);
>>
>>Yes, along these lines, albeit I think the first if() is more complicated
>>than it needs to be.
> 
> We can make it simple like this:
> 
> struct vcpu *vcpu;
> ...
> 
> vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;

Ouch - there were three if()s, and I missed the first one, i.e. I
really meant the middle of them.

Jan


Re: [Xen-devel] [PATCH v10 5/6] passthrough/io: don't migrate pirq when it is delivered through VT-d PI

2017-03-20 Thread Chao Gao
On Mon, Mar 20, 2017 at 03:18:18AM -0600, Jan Beulich wrote:
 On 20.03.17 at 02:59,  wrote:
>> On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
>> On 15.03.17 at 06:11,  wrote:
 +if ( iommu_intpost )
 +{
 +vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
 + pirq_dpci->gmsi.gvec);
>>>
>>>This is now outside of the event_lock-ed region - is this safe?
>> 
>> do you mean it is __inside__ the event_lock-ed region?
>
>Oops, indeed.
>
>> I think it is safe, since the functions called by pi_find_dest_vcpu() are
>> almost the same as hvm_girq_dest_2_vcpu_id().
>
>The question then needs to be put differently: Is this needed?
>You shouldn't move into a locked region what doesn't need to
>be there.

Yes, because we rely on the result to set @via_pi, which needs to be
protected by the lock.

>
 +}
  spin_unlock(&d->event_lock);
  if ( dest_vcpu_id >= 0 )
  hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>>>
>>>(continuing from above) This could then use vcpu too.
>> 
>> I don't understand. In this patch, vcpu is always null when VT-d PI is not
>> enabled. Do you mean something like below: 
>> 
>> if ( dest_vcpu_id >= 0 )
>> vcpu = d->vcpu[dest_vcpu_id];
>> if ( iommu_intpost && (!vcpu) && (delivery_mode == dest_LowestPrio) )
>> {
>> vcpu = vector_hashing_dest(d, dest, dest_mode,pirq_dpci->gmsi.gvec);
>> ...
>> }
>> spin_unlock(&d->event_lock);
>> if ( vcpu )
>> hvm_migrate_pirqs(vcpu);
>
>Yes, along these lines, albeit I think the first if() is more complicated
>than it needs to be.

We can make it simple like this:

struct vcpu *vcpu;
...

vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;

Thanks
Chao

>
>Jan
>


Re: [Xen-devel] [PATCH v10 5/6] passthrough/io: don't migrate pirq when it is delivered through VT-d PI

2017-03-20 Thread Jan Beulich
>>> On 20.03.17 at 02:59,  wrote:
> On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
> On 15.03.17 at 06:11,  wrote:
>>> +if ( iommu_intpost )
>>> +{
>>> +vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
>>> + pirq_dpci->gmsi.gvec);
>>
>>This is now outside of the event_lock-ed region - is this safe?
> 
> do you mean it is __inside__ the event_lock-ed region?

Oops, indeed.

> I think it is safe, since the functions called by pi_find_dest_vcpu() are
> almost the same as hvm_girq_dest_2_vcpu_id().

The question then needs to be put differently: Is this needed?
You shouldn't move into a locked region what doesn't need to
be there.

>>> +}
>>>  spin_unlock(&d->event_lock);
>>>  if ( dest_vcpu_id >= 0 )
>>>  hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>>
>>(continuing from above) This could then use vcpu too.
> 
> I don't understand. In this patch, vcpu is always null when VT-d PI is not
> enabled. Do you mean something like below: 
> 
> if ( dest_vcpu_id >= 0 )
> vcpu = d->vcpu[dest_vcpu_id];
> if ( iommu_intpost && (!vcpu) && (delivery_mode == dest_LowestPrio) )
> {
> vcpu = vector_hashing_dest(d, dest, dest_mode,pirq_dpci->gmsi.gvec);
> ...
> }
> spin_unlock(&d->event_lock);
> if ( vcpu )
> hvm_migrate_pirqs(vcpu);

Yes, along these lines, albeit I think the first if() is more complicated
than it needs to be.

Jan


Re: [Xen-devel] [PATCH v10 5/6] passthrough/io: don't migrate pirq when it is delivered through VT-d PI

2017-03-20 Thread Chao Gao
On Fri, Mar 17, 2017 at 04:43:08AM -0600, Jan Beulich wrote:
 On 15.03.17 at 06:11,  wrote:
>> @@ -441,6 +442,15 @@ int pt_irq_create_bind(
>>  
>>  dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
>>  pirq_dpci->gmsi.dest_vcpu_id = dest_vcpu_id;
>> +
>> +pirq_dpci->gmsi.via_pi = 0;
>
>false
>
>> +if ( iommu_intpost )
>> +{
>> +vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
>> + pirq_dpci->gmsi.gvec);
>
>This is now outside of the event_lock-ed region - is this safe?

do you mean it is __inside__ the event_lock-ed region? I think it is safe,
since the functions called by pi_find_dest_vcpu() are almost the same as
hvm_girq_dest_2_vcpu_id().

>
>Furthermore, looking at this, why do we need to call this function
>at all in other than the LowestPrio case? It looks as if we could
>easily use what hvm_girq_dest_2_vcpu_id() provides in the Fixed
>case.

Agreed. We can delete pi_find_dest_vcpu() and, for the LowestPrio case with
VT-d PI enabled, call vector_hashing_dest() directly.
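
For example (rough idea only, on top of this patch):

    if ( iommu_intpost )
    {
        /* vcpu is assumed to start out NULL here. */
        if ( delivery_mode == dest_LowestPrio )
            vcpu = vector_hashing_dest(d, dest, dest_mode,
                                       pirq_dpci->gmsi.gvec);
        else if ( dest_vcpu_id >= 0 )
            vcpu = d->vcpu[dest_vcpu_id];
        if ( vcpu )
            pirq_dpci->gmsi.via_pi = true;
    }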

>
>> +if ( vcpu )
>> +pirq_dpci->gmsi.via_pi = 1;
>
>true
>
>> +}
>>  spin_unlock(&d->event_lock);
>>  if ( dest_vcpu_id >= 0 )
>>  hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
>
>(continuing from above) This could then use vcpu too.

I don't understand. In this patch, vcpu is always null when VT-d PI is not
enabled. Do you mean something like below: 

if ( dest_vcpu_id >= 0 )
    vcpu = d->vcpu[dest_vcpu_id];
if ( iommu_intpost && (!vcpu) && (delivery_mode == dest_LowestPrio) )
{
    vcpu = vector_hashing_dest(d, dest, dest_mode, pirq_dpci->gmsi.gvec);
    ...
}
spin_unlock(&d->event_lock);
if ( vcpu )
    hvm_migrate_pirqs(vcpu);

Thanks,
Chao

>
>> --- a/xen/include/xen/hvm/irq.h
>> +++ b/xen/include/xen/hvm/irq.h
>> @@ -63,6 +63,7 @@ struct hvm_gmsi_info {
>>  uint32_t gvec;
>>  uint32_t gflags;
>>  int dest_vcpu_id; /* -1 :multi-dest, non-negative: dest_vcpu_id */
>> +bool via_pi; /* directly deliver to guest via VT-d PI? */
>
>Wouldn't the field name perhaps better be (or include) "posted"?
>
>Jan
>


Re: [Xen-devel] [PATCH v10 5/6] passthrough/io: don't migrate pirq when it is delivered through VT-d PI

2017-03-17 Thread Jan Beulich
>>> On 15.03.17 at 06:11,  wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -445,6 +445,9 @@ static int hvm_migrate_pirq(struct domain *d, struct hvm_pirq_dpci *pirq_dpci,
>      struct vcpu *v = arg;
>  
>      if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) &&
> +         (pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI) &&
> +         /* Needn't migrate pirq if this pirq is delivered to guest directly. */
> +         (pirq_dpci->gmsi.via_pi != 1) &&

Considering the field is bool, !pirq_dpci->gmsi.via_pi please.
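
I.e. simply

    !pirq_dpci->gmsi.via_pi &&

instead of comparing against 1.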

> @@ -441,6 +442,15 @@ int pt_irq_create_bind(
>  
>          dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
>          pirq_dpci->gmsi.dest_vcpu_id = dest_vcpu_id;
> +
> +        pirq_dpci->gmsi.via_pi = 0;

false

> +        if ( iommu_intpost )
> +        {
> +            vcpu = pi_find_dest_vcpu(d, dest, dest_mode, delivery_mode,
> +                                     pirq_dpci->gmsi.gvec);

This is now outside of the event_lock-ed region - is this safe?

Furthermore, looking at this, why do we need to call this function
at all in other than the LowestPrio case? It looks as if we could
easily use what hvm_girq_dest_2_vcpu_id() provides in the Fixed
case.

> +            if ( vcpu )
> +                pirq_dpci->gmsi.via_pi = 1;

true

> +        }
>          spin_unlock(&d->event_lock);
>          if ( dest_vcpu_id >= 0 )
>              hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);

(continuing from above) This could then use vcpu too.

> --- a/xen/include/xen/hvm/irq.h
> +++ b/xen/include/xen/hvm/irq.h
> @@ -63,6 +63,7 @@ struct hvm_gmsi_info {
>      uint32_t gvec;
>      uint32_t gflags;
>      int dest_vcpu_id; /* -1 :multi-dest, non-negative: dest_vcpu_id */
> +    bool via_pi; /* directly deliver to guest via VT-d PI? */

Wouldn't the field name perhaps better be (or include) "posted"?

Jan

