Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-26 Thread Andre Przywara
Hi,

On 26/05/17 11:19, Julien Grall wrote:
> Hi Stefano,
> 
> On 25/05/17 22:05, Stefano Stabellini wrote:
>> On Thu, 25 May 2017, Julien Grall wrote:
>>> Hi Stefano,
>>>
>>> On 25/05/2017 19:49, Stefano Stabellini wrote:
 On Thu, 25 May 2017, Andre Przywara wrote:
> Hi,
>
> On 23/05/17 18:47, Stefano Stabellini wrote:
>> On Tue, 23 May 2017, Julien Grall wrote:
>>> Hi Stefano,
>>>
>>> On 22/05/17 23:19, Stefano Stabellini wrote:
 On Tue, 16 May 2017, Julien Grall wrote:
>> @@ -436,8 +473,26 @@ static int
>> __vgic_v3_rdistr_rd_mmio_write(struct
>> vcpu
>> *v, mmio_info_t *info,
>>  switch ( gicr_reg )
>>  {
>>  case VREG32(GICR_CTLR):
>> -/* LPI's not implemented */
>> -goto write_ignore_32;
>> +{
>> +unsigned long flags;
>> +
>> +if ( !v->domain->arch.vgic.has_its )
>> +goto write_ignore_32;
>> +if ( dabt.size != DABT_WORD ) goto bad_width;
>> +
>> +vgic_lock(v);   /* protects
>> rdists_enabled */
>
> Getting back to the locking. I don't see any place where we get
> the domain
> vgic lock before vCPU vgic lock. So this raises the question why
> this
> ordering
> and not moving this lock into vgic_vcpu_enable_lpis.
>
> At least this require documentation in the code and explanation in
> the
> commit
> message.

 It doesn't look like we need to take the v->arch.vgic.lock here.
 What is
 it protecting?
>>>
>>> The name of the function is a bit confusion. It does not take the
>>> vCPU
>>> vgic
>>> lock but the domain vgic lock.
>>>
>>> I believe the vcpu is passed to avoid have v->domain in most of the
>>> callers.
>>> But we should probably rename the function.
>>>
>>> In this case it protects vgic_vcpu_enable_lpis because you can
>>> configure the
>>> number of LPIs per re-distributor but this is a domain wide value. I
>>> know the
>>> spec is confusing on this.
>>
>> The quoting here is very unhelpful. In Andre's patch:
>>
>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
>> vcpu *v, mmio_info_t *info,
>>  switch ( gicr_reg )
>>  {
>>  case VREG32(GICR_CTLR):
>> -/* LPI's not implemented */
>> -goto write_ignore_32;
>> +{
>> +unsigned long flags;
>> +
>> +if ( !v->domain->arch.vgic.has_its )
>> +goto write_ignore_32;
>> +if ( dabt.size != DABT_WORD ) goto bad_width;
>> +
>> +vgic_lock(v);   /* protects
>> rdists_enabled */
>> +spin_lock_irqsave(&v->arch.vgic.lock, flags);
>> +
>> +/* LPIs can only be enabled once, but never disabled
>> again. */
>> +if ( (r & GICR_CTLR_ENABLE_LPIS) &&
>> + !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
>> +vgic_vcpu_enable_lpis(v);
>> +
>> +spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>> +vgic_unlock(v);
>> +
>> +return 1;
>> +}
>>
>> My question is: do we need to take both vgic_lock and
>> v->arch.vgic.lock?
>
> The domain lock (taken by vgic_lock()) protects rdists_enabled. This
> variable stores whether at least one redistributor has LPIs
> enabled. In
> this case the property table gets into use and since the table is
> shared
> across all redistributors, we must not change it anymore, even on
> another redistributor which has its LPIs still disabled.
> So while this looks like this is a per-redistributor (=per-VCPU)
> property, it is actually per domain, hence this lock.
> The VGIC VCPU lock is then used to naturally protect the enable bit
> against multiple VCPUs accessing this register simultaneously - the
> redists are MMIO mapped, but not banked, so this is possible.
>
> Does that make sense?

 If the VGIC VCPU lock is only used to protect VGIC_V3_LPIS_ENABLED,
 couldn't we just read/write the bit atomically? It's just a bit after
 all, it doesn't need a lock.
>>>
>>> The vGIC vCPU lock is also here to serialize access to the
>>> re-distributor
>>> state when necessary.
>>>
>>> For instance you don't want to allow write in PENDBASER after LPIs
>>> have been
>>> enabled.
>>>
>>> If you don't take the lock here, you would have a small race where
>>> PENDBASER
>>> might be written whilst the LPIs are getting enabled.
>>>
>>> The code in PENDBASER today does not strictly require the locking,
>>> but I think
>>> we should keep the lock around. Moving to the atomic will not really
>>> benefit
>>> here as write to 

Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-26 Thread Julien Grall

Hi Stefano,

On 25/05/17 22:05, Stefano Stabellini wrote:

On Thu, 25 May 2017, Julien Grall wrote:

Hi Stefano,

On 25/05/2017 19:49, Stefano Stabellini wrote:

On Thu, 25 May 2017, Andre Przywara wrote:

Hi,

On 23/05/17 18:47, Stefano Stabellini wrote:

On Tue, 23 May 2017, Julien Grall wrote:

Hi Stefano,

On 22/05/17 23:19, Stefano Stabellini wrote:

On Tue, 16 May 2017, Julien Grall wrote:

@@ -436,8 +473,26 @@ static int
__vgic_v3_rdistr_rd_mmio_write(struct
vcpu
*v, mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects
rdists_enabled */


Getting back to the locking. I don't see any place where we take the domain
vgic lock before the vCPU vgic lock. So this raises the question of why this
ordering was chosen, and why this lock is not moved into
vgic_vcpu_enable_lpis.

At least this requires documentation in the code and an explanation in the
commit message.


It doesn't look like we need to take the v->arch.vgic.lock here.
What is
it protecting?


The name of the function is a bit confusing. It does not take the vCPU vgic
lock but the domain vgic lock.

I believe the vcpu is passed to avoid having v->domain in most of the
callers. But we should probably rename the function.

In this case it protects vgic_vcpu_enable_lpis because you can
configure the
number of LPIs per re-distributor but this is a domain wide value. I
know the
spec is confusing on this.


The quoting here is very unhelpful. In Andre's patch:

@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
vcpu *v, mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects rdists_enabled */
+spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+/* LPIs can only be enabled once, but never disabled again. */
+if ( (r & GICR_CTLR_ENABLE_LPIS) &&
+ !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
+vgic_vcpu_enable_lpis(v);
+
+spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+vgic_unlock(v);
+
+return 1;
+}

My question is: do we need to take both vgic_lock and v->arch.vgic.lock?


The domain lock (taken by vgic_lock()) protects rdists_enabled. This
variable stores whether at least one redistributor has LPIs enabled. In
this case the property table gets into use and since the table is shared
across all redistributors, we must not change it anymore, even on
another redistributor which has its LPIs still disabled.
So while this looks like this is a per-redistributor (=per-VCPU)
property, it is actually per domain, hence this lock.
The VGIC VCPU lock is then used to naturally protect the enable bit
against multiple VCPUs accessing this register simultaneously - the
redists are MMIO mapped, but not banked, so this is possible.

Does that make sense?


If the VGIC VCPU lock is only used to protect VGIC_V3_LPIS_ENABLED,
couldn't we just read/write the bit atomically? It's just a bit after
all, it doesn't need a lock.


The vGIC vCPU lock is also here to serialize access to the re-distributor
state when necessary.

For instance you don't want to allow write in PENDBASER after LPIs have been
enabled.

If you don't take the lock here, you would have a small race where PENDBASER
might be written whilst the LPIs are getting enabled.

The code in PENDBASER today does not strictly require the locking, but I think
we should keep the lock around. Moving to an atomic will not really benefit us
here, as writes to those registers will be very rare, so we don't need very
good performance.


I suggested the atomic as a way to replace the lock, to reduce the
number of lock order dependencies, rather than for performance (who
cares about performance for this case). If all accesses to
VGIC_V3_LPIS_ENABLED are atomic, then we wouldn't need the lock.

Another maybe simpler way to keep the vgic vcpu lock but avoid
introducing the vgic domain lock -> vgic vcpu lock dependency (the less
the better) would be to take the vgic vcpu lock first, release it, then
take the vgic domain lock and call vgic_vcpu_enable_lpis after.  In
pseudo-code:

vgic vcpu lock
read old value of VGIC_V3_LPIS_ENABLED
write new value of VGIC_V3_LPIS_ENABLED
vgic vcpu unlock

vgic domain lock
vgic_vcpu_enable_lpis (minus the setting of arch.vgic.flags)
vgic domain unlock
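
A minimal C sketch of that shape (illustrative only; it reuses
vgic_lock()/vgic_unlock(), VGIC_V3_LPIS_ENABLED and vgic_vcpu_enable_lpis()
from Andre's patch, and the helper name is made up):

static void vgic_rdist_ctlr_write(struct vcpu *v, register_t r)
{
    unsigned long flags;
    bool was_enabled;

    /* Step 1: flip the per-vCPU enable bit under the vCPU vgic lock only. */
    spin_lock_irqsave(&v->arch.vgic.lock, flags);
    was_enabled = v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED;
    if ( r & GICR_CTLR_ENABLE_LPIS )
        v->arch.vgic.flags |= VGIC_V3_LPIS_ENABLED;
    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);

    /* Step 2: update the domain-wide state under the domain vgic lock. */
    if ( (r & GICR_CTLR_ENABLE_LPIS) && !was_enabled )
    {
        vgic_lock(v);               /* protects rdists_enabled/nr_lpis */
        vgic_vcpu_enable_lpis(v);   /* minus setting arch.vgic.flags */
        vgic_unlock(v);
    }
}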

It doesn't look like we need to set VGIC_V3_LPIS_ENABLED within
vgic_vcpu_enable_lpis, so this seems to be working. What do 

Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-25 Thread Stefano Stabellini
On Thu, 25 May 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 25/05/2017 19:49, Stefano Stabellini wrote:
> > On Thu, 25 May 2017, Andre Przywara wrote:
> > > Hi,
> > > 
> > > On 23/05/17 18:47, Stefano Stabellini wrote:
> > > > On Tue, 23 May 2017, Julien Grall wrote:
> > > > > Hi Stefano,
> > > > > 
> > > > > On 22/05/17 23:19, Stefano Stabellini wrote:
> > > > > > On Tue, 16 May 2017, Julien Grall wrote:
> > > > > > > > @@ -436,8 +473,26 @@ static int
> > > > > > > > __vgic_v3_rdistr_rd_mmio_write(struct
> > > > > > > > vcpu
> > > > > > > > *v, mmio_info_t *info,
> > > > > > > >  switch ( gicr_reg )
> > > > > > > >  {
> > > > > > > >  case VREG32(GICR_CTLR):
> > > > > > > > -/* LPI's not implemented */
> > > > > > > > -goto write_ignore_32;
> > > > > > > > +{
> > > > > > > > +unsigned long flags;
> > > > > > > > +
> > > > > > > > +if ( !v->domain->arch.vgic.has_its )
> > > > > > > > +goto write_ignore_32;
> > > > > > > > +if ( dabt.size != DABT_WORD ) goto bad_width;
> > > > > > > > +
> > > > > > > > +vgic_lock(v);   /* protects
> > > > > > > > rdists_enabled */
> > > > > > > 
> > > > > > > Getting back to the locking. I don't see any place where we get
> > > > > > > the domain
> > > > > > > vgic lock before vCPU vgic lock. So this raises the question why
> > > > > > > this
> > > > > > > ordering
> > > > > > > and not moving this lock into vgic_vcpu_enable_lpis.
> > > > > > > 
> > > > > > > At least this require documentation in the code and explanation in
> > > > > > > the
> > > > > > > commit
> > > > > > > message.
> > > > > > 
> > > > > > It doesn't look like we need to take the v->arch.vgic.lock here.
> > > > > > What is
> > > > > > it protecting?
> > > > > 
> > > > > The name of the function is a bit confusion. It does not take the vCPU
> > > > > vgic
> > > > > lock but the domain vgic lock.
> > > > > 
> > > > > I believe the vcpu is passed to avoid have v->domain in most of the
> > > > > callers.
> > > > > But we should probably rename the function.
> > > > > 
> > > > > In this case it protects vgic_vcpu_enable_lpis because you can
> > > > > configure the
> > > > > number of LPIs per re-distributor but this is a domain wide value. I
> > > > > know the
> > > > > spec is confusing on this.
> > > > 
> > > > The quoting here is very unhelpful. In Andre's patch:
> > > > 
> > > > @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
> > > > vcpu *v, mmio_info_t *info,
> > > >  switch ( gicr_reg )
> > > >  {
> > > >  case VREG32(GICR_CTLR):
> > > > -/* LPI's not implemented */
> > > > -goto write_ignore_32;
> > > > +{
> > > > +unsigned long flags;
> > > > +
> > > > +if ( !v->domain->arch.vgic.has_its )
> > > > +goto write_ignore_32;
> > > > +if ( dabt.size != DABT_WORD ) goto bad_width;
> > > > +
> > > > +vgic_lock(v);   /* protects rdists_enabled */
> > > > +spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > > > +
> > > > +/* LPIs can only be enabled once, but never disabled again. */
> > > > +if ( (r & GICR_CTLR_ENABLE_LPIS) &&
> > > > + !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
> > > > +vgic_vcpu_enable_lpis(v);
> > > > +
> > > > +spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > > > +vgic_unlock(v);
> > > > +
> > > > +return 1;
> > > > +}
> > > > 
> > > > My question is: do we need to take both vgic_lock and v->arch.vgic.lock?
> > > 
> > > The domain lock (taken by vgic_lock()) protects rdists_enabled. This
> > > variable stores whether at least one redistributor has LPIs enabled. In
> > > this case the property table gets into use and since the table is shared
> > > across all redistributors, we must not change it anymore, even on
> > > another redistributor which has its LPIs still disabled.
> > > So while this looks like this is a per-redistributor (=per-VCPU)
> > > property, it is actually per domain, hence this lock.
> > > The VGIC VCPU lock is then used to naturally protect the enable bit
> > > against multiple VCPUs accessing this register simultaneously - the
> > > redists are MMIO mapped, but not banked, so this is possible.
> > > 
> > > Does that make sense?
> > 
> > If the VGIC VCPU lock is only used to protect VGIC_V3_LPIS_ENABLED,
> > couldn't we just read/write the bit atomically? It's just a bit after
> > all, it doesn't need a lock.
> 
> The vGIC vCPU lock is also here to serialize access to the re-distributor
> state when necessary.
> 
> For instance you don't want to allow write in PENDBASER after LPIs have been
> enabled.
> 
> If you don't take the lock here, you would have a small race where PENDBASER
> might be written whilst the LPIs are getting enabled.
> 
> The code in PENDBASER today does not strictly require the locking, but I think
> we should keep the lock around. Moving to the atomic will not 

Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-25 Thread Julien Grall

Hi Stefano,

On 25/05/2017 19:49, Stefano Stabellini wrote:

On Thu, 25 May 2017, Andre Przywara wrote:

Hi,

On 23/05/17 18:47, Stefano Stabellini wrote:

On Tue, 23 May 2017, Julien Grall wrote:

Hi Stefano,

On 22/05/17 23:19, Stefano Stabellini wrote:

On Tue, 16 May 2017, Julien Grall wrote:

@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
vcpu
*v, mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects rdists_enabled */


Getting back to the locking. I don't see any place where we take the domain
vgic lock before the vCPU vgic lock. So this raises the question of why this
ordering was chosen, and why this lock is not moved into
vgic_vcpu_enable_lpis.

At least this requires documentation in the code and an explanation in the
commit message.


It doesn't look like we need to take the v->arch.vgic.lock here. What is
it protecting?


The name of the function is a bit confusing. It does not take the vCPU vgic
lock but the domain vgic lock.

I believe the vcpu is passed to avoid having v->domain in most of the callers.
But we should probably rename the function.

In this case it protects vgic_vcpu_enable_lpis because you can configure the
number of LPIs per re-distributor but this is a domain wide value. I know the
spec is confusing on this.


The quoting here is very unhelpful. In Andre's patch:

@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, 
mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects rdists_enabled */
+spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+/* LPIs can only be enabled once, but never disabled again. */
+if ( (r & GICR_CTLR_ENABLE_LPIS) &&
+ !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
+vgic_vcpu_enable_lpis(v);
+
+spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+vgic_unlock(v);
+
+return 1;
+}

My question is: do we need to take both vgic_lock and v->arch.vgic.lock?


The domain lock (taken by vgic_lock()) protects rdists_enabled. This
variable stores whether at least one redistributor has LPIs enabled. In
this case the property table gets into use and since the table is shared
across all redistributors, we must not change it anymore, even on
another redistributor which has its LPIs still disabled.
So while this looks like this is a per-redistributor (=per-VCPU)
property, it is actually per domain, hence this lock.
The VGIC VCPU lock is then used to naturally protect the enable bit
against multiple VCPUs accessing this register simultaneously - the
redists are MMIO mapped, but not banked, so this is possible.

Does that make sense?


If the VGIC VCPU lock is only used to protect VGIC_V3_LPIS_ENABLED,
couldn't we just read/write the bit atomically? It's just a bit after
all, it doesn't need a lock.


The vGIC vCPU lock is also here to serialize access to the 
re-distributor state when necessary.


For instance you don't want to allow write in PENDBASER after LPIs have 
been enabled.


If you don't take the lock here, you would have a small race where 
PENDBASER might be written whilst the LPIs are getting enabled.


The code in PENDBASER today does not strictly require the locking, but I 
think we should keep the lock around. Moving to an atomic will not really 
benefit us here, as writes to those registers will be very rare, so we don't 
need very good performance.
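
To make the race concrete, this is roughly what a PENDBASER write path
serialized by the same per-vCPU vgic lock looks like (a sketch only: the
rdist_pendbase field name is assumed, sanitize_pendbaser() and the
vgic_reg64_update() helper come from the series):

static void vgic_rdist_pendbaser_write(struct vcpu *v, mmio_info_t *info,
                                       register_t r)
{
    unsigned long flags;

    spin_lock_irqsave(&v->arch.vgic.lock, flags);
    /* Once LPIs are enabled for this vCPU, further writes are ignored. */
    if ( !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
    {
        vgic_reg64_update(&v->arch.vgic.rdist_pendbase, r, info);
        v->arch.vgic.rdist_pendbase =
            sanitize_pendbaser(v->arch.vgic.rdist_pendbase);
    }
    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
}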


Cheers,

--
Julien Grall



Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-25 Thread Stefano Stabellini
On Thu, 25 May 2017, Andre Przywara wrote:
> Hi,
> 
> On 23/05/17 18:47, Stefano Stabellini wrote:
> > On Tue, 23 May 2017, Julien Grall wrote:
> >> Hi Stefano,
> >>
> >> On 22/05/17 23:19, Stefano Stabellini wrote:
> >>> On Tue, 16 May 2017, Julien Grall wrote:
> > @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
> > vcpu
> > *v, mmio_info_t *info,
> >  switch ( gicr_reg )
> >  {
> >  case VREG32(GICR_CTLR):
> > -/* LPI's not implemented */
> > -goto write_ignore_32;
> > +{
> > +unsigned long flags;
> > +
> > +if ( !v->domain->arch.vgic.has_its )
> > +goto write_ignore_32;
> > +if ( dabt.size != DABT_WORD ) goto bad_width;
> > +
> > +vgic_lock(v);   /* protects rdists_enabled */
> 
>  Getting back to the locking. I don't see any place where we get the 
>  domain
>  vgic lock before vCPU vgic lock. So this raises the question why this
>  ordering
>  and not moving this lock into vgic_vcpu_enable_lpis.
> 
>  At least this require documentation in the code and explanation in the
>  commit
>  message.
> >>>
> >>> It doesn't look like we need to take the v->arch.vgic.lock here. What is
> >>> it protecting?
> >>
> >> The name of the function is a bit confusion. It does not take the vCPU vgic
> >> lock but the domain vgic lock.
> >>
> >> I believe the vcpu is passed to avoid have v->domain in most of the 
> >> callers.
> >> But we should probably rename the function.
> >>
> >> In this case it protects vgic_vcpu_enable_lpis because you can configure 
> >> the
> >> number of LPIs per re-distributor but this is a domain wide value. I know 
> >> the
> >> spec is confusing on this.
> > 
> > The quoting here is very unhelpful. In Andre's patch:
> > 
> > @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu 
> > *v, mmio_info_t *info,
> >  switch ( gicr_reg )
> >  {
> >  case VREG32(GICR_CTLR):
> > -/* LPI's not implemented */
> > -goto write_ignore_32;
> > +{
> > +unsigned long flags;
> > +
> > +if ( !v->domain->arch.vgic.has_its )
> > +goto write_ignore_32;
> > +if ( dabt.size != DABT_WORD ) goto bad_width;
> > +
> > +vgic_lock(v);   /* protects rdists_enabled */
> > +spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > +
> > +/* LPIs can only be enabled once, but never disabled again. */
> > +if ( (r & GICR_CTLR_ENABLE_LPIS) &&
> > + !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
> > +vgic_vcpu_enable_lpis(v);
> > +
> > +spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > +vgic_unlock(v);
> > +
> > +return 1;
> > +}
> > 
> > My question is: do we need to take both vgic_lock and v->arch.vgic.lock?
> 
> The domain lock (taken by vgic_lock()) protects rdists_enabled. This
> variable stores whether at least one redistributor has LPIs enabled. In
> this case the property table gets into use and since the table is shared
> across all redistributors, we must not change it anymore, even on
> another redistributor which has its LPIs still disabled.
> So while this looks like this is a per-redistributor (=per-VCPU)
> property, it is actually per domain, hence this lock.
> The VGIC VCPU lock is then used to naturally protect the enable bit
> against multiple VCPUs accessing this register simultaneously - the
> redists are MMIO mapped, but not banked, so this is possible.
> 
> Does that make sense?

If the VGIC VCPU lock is only used to protect VGIC_V3_LPIS_ENABLED,
couldn't we just read/write the bit atomically? It's just a bit after
all, it doesn't need a lock.
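
For illustration, a test-and-set variant could look roughly like this (a
sketch, assuming VGIC_V3_LPIS_ENABLED were redefined as a bit number --
called VGIC_V3_LPIS_ENABLED_BIT here -- and that v->arch.vgic.flags can be
used with the generic bitops):

    if ( (r & GICR_CTLR_ENABLE_LPIS) &&
         !test_and_set_bit(VGIC_V3_LPIS_ENABLED_BIT, &v->arch.vgic.flags) )
    {
        vgic_lock(v);               /* still needed: protects rdists_enabled */
        vgic_vcpu_enable_lpis(v);
        vgic_unlock(v);
    }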



Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-25 Thread Andre Przywara
Hi,

On 23/05/17 18:47, Stefano Stabellini wrote:
> On Tue, 23 May 2017, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 22/05/17 23:19, Stefano Stabellini wrote:
>>> On Tue, 16 May 2017, Julien Grall wrote:
> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
> vcpu
> *v, mmio_info_t *info,
>  switch ( gicr_reg )
>  {
>  case VREG32(GICR_CTLR):
> -/* LPI's not implemented */
> -goto write_ignore_32;
> +{
> +unsigned long flags;
> +
> +if ( !v->domain->arch.vgic.has_its )
> +goto write_ignore_32;
> +if ( dabt.size != DABT_WORD ) goto bad_width;
> +
> +vgic_lock(v);   /* protects rdists_enabled */

 Getting back to the locking. I don't see any place where we get the domain
 vgic lock before vCPU vgic lock. So this raises the question why this
 ordering
 and not moving this lock into vgic_vcpu_enable_lpis.

 At least this require documentation in the code and explanation in the
 commit
 message.
>>>
>>> It doesn't look like we need to take the v->arch.vgic.lock here. What is
>>> it protecting?
>>
>> The name of the function is a bit confusion. It does not take the vCPU vgic
>> lock but the domain vgic lock.
>>
>> I believe the vcpu is passed to avoid have v->domain in most of the callers.
>> But we should probably rename the function.
>>
>> In this case it protects vgic_vcpu_enable_lpis because you can configure the
>> number of LPIs per re-distributor but this is a domain wide value. I know the
>> spec is confusing on this.
> 
> The quoting here is very unhelpful. In Andre's patch:
> 
> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu 
> *v, mmio_info_t *info,
>  switch ( gicr_reg )
>  {
>  case VREG32(GICR_CTLR):
> -/* LPI's not implemented */
> -goto write_ignore_32;
> +{
> +unsigned long flags;
> +
> +if ( !v->domain->arch.vgic.has_its )
> +goto write_ignore_32;
> +if ( dabt.size != DABT_WORD ) goto bad_width;
> +
> +vgic_lock(v);   /* protects rdists_enabled */
> +spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +
> +/* LPIs can only be enabled once, but never disabled again. */
> +if ( (r & GICR_CTLR_ENABLE_LPIS) &&
> + !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
> +vgic_vcpu_enable_lpis(v);
> +
> +spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +vgic_unlock(v);
> +
> +return 1;
> +}
> 
> My question is: do we need to take both vgic_lock and v->arch.vgic.lock?

The domain lock (taken by vgic_lock()) protects rdists_enabled. This
variable stores whether at least one redistributor has LPIs enabled. In
this case the property table gets into use and since the table is shared
across all redistributors, we must not change it anymore, even on
another redistributor which has its LPIs still disabled.
So while this looks like this is a per-redistributor (=per-VCPU)
property, it is actually per domain, hence this lock.
The VGIC VCPU lock is then used to naturally protect the enable bit
against multiple VCPUs accessing this register simultaneously - the
redists are MMIO mapped, but not banked, so this is possible.

Does that make sense?
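
For reference, a sketch of the kind of in-code documentation Julien asked
for, spelling out the nesting described above (wording is only a suggestion,
not from the patch):

/*
 * Lock ordering: the domain vgic lock (vgic_lock()) is taken before the
 * per-vCPU v->arch.vgic.lock.
 *
 * - The domain vgic lock protects rdists_enabled and nr_lpis, which are
 *   domain-wide because the LPI property table is shared by all
 *   redistributors.
 * - The per-vCPU vgic lock protects VGIC_V3_LPIS_ENABLED and the rest of
 *   the redistributor state against concurrent accesses from other vCPUs
 *   (the redistributor frames are MMIO-mapped but not banked).
 */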

Cheers,
Andre



Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-24 Thread Julien Grall

Hi Stefano,

On 05/23/2017 06:47 PM, Stefano Stabellini wrote:

On Tue, 23 May 2017, Julien Grall wrote:

Hi Stefano,

On 22/05/17 23:19, Stefano Stabellini wrote:

On Tue, 16 May 2017, Julien Grall wrote:

@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
vcpu
*v, mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects rdists_enabled */


Getting back to the locking. I don't see any place where we take the domain
vgic lock before the vCPU vgic lock. So this raises the question of why this
ordering was chosen, and why this lock is not moved into
vgic_vcpu_enable_lpis.

At least this requires documentation in the code and an explanation in the
commit message.


It doesn't look like we need to take the v->arch.vgic.lock here. What is
it protecting?


The name of the function is a bit confusing. It does not take the vCPU vgic
lock but the domain vgic lock.

I believe the vcpu is passed to avoid having v->domain in most of the callers.
But we should probably rename the function.

In this case it protects vgic_vcpu_enable_lpis because you can configure the
number of LPIs per re-distributor but this is a domain wide value. I know the
spec is confusing on this.


The quoting here is very unhelpful. In Andre's patch:


Oh, though my point about vgic_lock naming stands :).



@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, 
mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects rdists_enabled */
+spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+/* LPIs can only be enabled once, but never disabled again. */
+if ( (r & GICR_CTLR_ENABLE_LPIS) &&
+ !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
+vgic_vcpu_enable_lpis(v);
+
+spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+vgic_unlock(v);
+
+return 1;
+}

My question is: do we need to take both vgic_lock and v->arch.vgic.lock?
If so, why?


I will let Andre confirm here.

Cheers,

--
Julien Grall



Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-23 Thread Stefano Stabellini
On Tue, 23 May 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 22/05/17 23:19, Stefano Stabellini wrote:
> > On Tue, 16 May 2017, Julien Grall wrote:
> > > > @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
> > > > vcpu
> > > > *v, mmio_info_t *info,
> > > >  switch ( gicr_reg )
> > > >  {
> > > >  case VREG32(GICR_CTLR):
> > > > -/* LPI's not implemented */
> > > > -goto write_ignore_32;
> > > > +{
> > > > +unsigned long flags;
> > > > +
> > > > +if ( !v->domain->arch.vgic.has_its )
> > > > +goto write_ignore_32;
> > > > +if ( dabt.size != DABT_WORD ) goto bad_width;
> > > > +
> > > > +vgic_lock(v);   /* protects rdists_enabled */
> > > 
> > > Getting back to the locking. I don't see any place where we get the domain
> > > vgic lock before vCPU vgic lock. So this raises the question why this
> > > ordering
> > > and not moving this lock into vgic_vcpu_enable_lpis.
> > > 
> > > At least this require documentation in the code and explanation in the
> > > commit
> > > message.
> > 
> > It doesn't look like we need to take the v->arch.vgic.lock here. What is
> > it protecting?
> 
> The name of the function is a bit confusion. It does not take the vCPU vgic
> lock but the domain vgic lock.
> 
> I believe the vcpu is passed to avoid have v->domain in most of the callers.
> But we should probably rename the function.
> 
> In this case it protects vgic_vcpu_enable_lpis because you can configure the
> number of LPIs per re-distributor but this is a domain wide value. I know the
> spec is confusing on this.

The quoting here is very unhelpful. In Andre's patch:

@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, 
mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects rdists_enabled */
> > +spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+/* LPIs can only be enabled once, but never disabled again. */
+if ( (r & GICR_CTLR_ENABLE_LPIS) &&
+ !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
+vgic_vcpu_enable_lpis(v);
+
> > +spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+vgic_unlock(v);
+
+return 1;
+}

My question is: do we need to take both vgic_lock and v->arch.vgic.lock?
If so, why?



Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-23 Thread Andre Przywara
Hi,

On 16/05/17 14:03, Julien Grall wrote:
> Hi Andre,
> 
> On 11/05/17 18:53, Andre Przywara wrote:
>> To let a guest know about the availability of virtual LPIs, set the
>> respective bits in the virtual GIC registers and let a guest control
>> the LPI enable bit.
>> Only report the LPI capability if the host has initialized at least
>> one ITS.
>> This removes a "TBD" comment, as we now populate the processor number
>> in the GICR_TYPE register.
> 
> s/GICR_TYPE/GICR_TYPER/
> 
> Also, I think it would be worth explaining that you populate
> GICR_TYPER.Process_Number because the ITS will use it later on.
> 
>> Advertise 24 bits worth of LPIs to the guest.
> 
> Again this is not valid anymore. You said you would drop it on the
> previous version. So why it has not been done?
> 
>>
>> Signed-off-by: Andre Przywara 
>> ---
>>  xen/arch/arm/vgic-v3.c | 70
>> ++
>>  1 file changed, 65 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
>> index 38c123c..6dbdb2e 100644
>> --- a/xen/arch/arm/vgic-v3.c
>> +++ b/xen/arch/arm/vgic-v3.c
>> @@ -170,8 +170,19 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct
>> vcpu *v, mmio_info_t *info,
>>  switch ( gicr_reg )
>>  {
>>  case VREG32(GICR_CTLR):
>> -/* We have not implemented LPI's, read zero */
>> -goto read_as_zero_32;
>> +{
>> +unsigned long flags;
>> +
>> +if ( !v->domain->arch.vgic.has_its )
>> +goto read_as_zero_32;
>> +if ( dabt.size != DABT_WORD ) goto bad_width;
>> +
>> +spin_lock_irqsave(&v->arch.vgic.lock, flags);
>> +*r = vgic_reg32_extract(!!(v->arch.vgic.flags &
>> VGIC_V3_LPIS_ENABLED),
>> +info);
>> +spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>> +return 1;
>> +}
>>
>>  case VREG32(GICR_IIDR):
>>  if ( dabt.size != DABT_WORD ) goto bad_width;
>> @@ -183,16 +194,20 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct
>> vcpu *v, mmio_info_t *info,
>>  uint64_t typer, aff;
>>
>>  if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
>> -/* TBD: Update processor id in [23:8] when ITS support is
>> added */
>>  aff = (MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 3) << 56 |
>> MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 2) << 48 |
>> MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 1) << 40 |
>> MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 0) << 32);
>>  typer = aff;
>> +/* We use the VCPU ID as the redistributor ID in bits[23:8] */
>> +typer |= (v->vcpu_id & 0xffff) << 8;
> 
> Why the mask here? This sound like a bug to me if vcpu_id does not fit
> it and you would make it worst by the mask.
> 
> But this is already addressed by max_vcpus in the vgic_ops. So please
> drop the pointless mask.
> 
> Lastly, I would have expected to try to address my remark everywhere
> regarding hardcoding offset. In this case,

Fixed.

>>
>>  if ( v->arch.vgic.flags & VGIC_V3_RDIST_LAST )
>>  typer |= GICR_TYPER_LAST;
>>
>> +if ( v->domain->arch.vgic.has_its )
>> +typer |= GICR_TYPER_PLPIS;
>> +
>>  *r = vgic_reg64_extract(typer, info);
>>
>>  return 1;
>> @@ -426,6 +441,28 @@ static uint64_t sanitize_pendbaser(uint64_t reg)
>>  return reg;
>>  }
>>
>> +static void vgic_vcpu_enable_lpis(struct vcpu *v)
>> +{
>> +uint64_t reg = v->domain->arch.vgic.rdist_propbase;
>> +unsigned int nr_lpis = BIT((reg & 0x1f) + 1);
>> +
>> +/* rdists_enabled is protected by the domain lock. */
>> +ASSERT(spin_is_locked(&v->domain->arch.vgic.lock));
>> +
>> +if ( nr_lpis < LPI_OFFSET )
>> +nr_lpis = 0;
>> +else
>> +nr_lpis -= LPI_OFFSET;
>> +
>> +if ( !v->domain->arch.vgic.rdists_enabled )
>> +{
>> +v->domain->arch.vgic.nr_lpis = nr_lpis;
>> +v->domain->arch.vgic.rdists_enabled = true;
>> +}
>> +
>> +v->arch.vgic.flags |= VGIC_V3_LPIS_ENABLED;
>> +}
>> +
>>  static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t
>> *info,
>>uint32_t gicr_reg,
>>register_t r)
>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
>> vcpu *v, mmio_info_t *info,
>>  switch ( gicr_reg )
>>  {
>>  case VREG32(GICR_CTLR):
>> -/* LPI's not implemented */
>> -goto write_ignore_32;
>> +{
>> +unsigned long flags;
>> +
>> +if ( !v->domain->arch.vgic.has_its )
>> +goto write_ignore_32;
>> +if ( dabt.size != DABT_WORD ) goto bad_width;
>> +
>> +vgic_lock(v);   /* protects rdists_enabled */
> 
> Getting back to the locking. I don't see any place where we get the
> domain vgic lock before vCPU vgic lock.

Because that seems to be the natural locking order, 

Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-23 Thread Julien Grall

Hi Stefano,

On 22/05/17 23:19, Stefano Stabellini wrote:

On Tue, 16 May 2017, Julien Grall wrote:

@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu
*v, mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects rdists_enabled */


Getting back to the locking. I don't see any place where we take the domain
vgic lock before the vCPU vgic lock. So this raises the question of why this
ordering was chosen, and why this lock is not moved into vgic_vcpu_enable_lpis.

At least this requires documentation in the code and an explanation in the
commit message.


It doesn't look like we need to take the v->arch.vgic.lock here. What is
it protecting?


The name of the function is a bit confusing. It does not take the vCPU 
vgic lock but the domain vgic lock.

I believe the vcpu is passed to avoid having v->domain in most of the 
callers. But we should probably rename the function.


In this case it protects vgic_vcpu_enable_lpis because you can configure 
the number of LPIs per re-distributor but this is a domain wide value. I 
know the spec is confusing on this.


Cheers,

--
Julien Grall



Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-22 Thread Stefano Stabellini
On Tue, 16 May 2017, Julien Grall wrote:
> > @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu
> > *v, mmio_info_t *info,
> >  switch ( gicr_reg )
> >  {
> >  case VREG32(GICR_CTLR):
> > -/* LPI's not implemented */
> > -goto write_ignore_32;
> > +{
> > +unsigned long flags;
> > +
> > +if ( !v->domain->arch.vgic.has_its )
> > +goto write_ignore_32;
> > +if ( dabt.size != DABT_WORD ) goto bad_width;
> > +
> > +vgic_lock(v);   /* protects rdists_enabled */
> 
> Getting back to the locking. I don't see any place where we get the domain
> vgic lock before vCPU vgic lock. So this raises the question why this ordering
> and not moving this lock into vgic_vcpu_enable_lpis.
> 
> At least this require documentation in the code and explanation in the commit
> message.

It doesn't look like we need to take the v->arch.vgic.lock here. What is
it protecting?


> > +spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > +
> > +/* LPIs can only be enabled once, but never disabled again. */
> > +if ( (r & GICR_CTLR_ENABLE_LPIS) &&
> > + !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
> > +vgic_vcpu_enable_lpis(v);
> > +
> > +spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > +vgic_unlock(v);
> > +
> > +return 1;
> > +}
> > 
> >  case VREG32(GICR_IIDR):
> >  /* RO */



Re: [Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-16 Thread Julien Grall

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:

To let a guest know about the availability of virtual LPIs, set the
respective bits in the virtual GIC registers and let a guest control
the LPI enable bit.
Only report the LPI capability if the host has initialized at least
one ITS.
This removes a "TBD" comment, as we now populate the processor number
in the GICR_TYPE register.


s/GICR_TYPE/GICR_TYPER/

Also, I think it would be worth explaining that you populate 
GICR_TYPER.Processor_Number because the ITS will use it later on.



Advertise 24 bits worth of LPIs to the guest.


Again, this is not valid anymore. You said you would drop it in the 
previous version, so why has it not been done?




Signed-off-by: Andre Przywara 
---
 xen/arch/arm/vgic-v3.c | 70 ++
 1 file changed, 65 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 38c123c..6dbdb2e 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -170,8 +170,19 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, 
mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* We have not implemented LPI's, read zero */
-goto read_as_zero_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto read_as_zero_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+spin_lock_irqsave(&v->arch.vgic.lock, flags);
+*r = vgic_reg32_extract(!!(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED),
+info);
+spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+return 1;
+}

 case VREG32(GICR_IIDR):
 if ( dabt.size != DABT_WORD ) goto bad_width;
@@ -183,16 +194,20 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, 
mmio_info_t *info,
 uint64_t typer, aff;

 if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
-/* TBD: Update processor id in [23:8] when ITS support is added */
 aff = (MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 3) << 56 |
MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 2) << 48 |
MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 1) << 40 |
MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 0) << 32);
 typer = aff;
+/* We use the VCPU ID as the redistributor ID in bits[23:8] */
+typer |= (v->vcpu_id & 0xffff) << 8;


Why the mask here? This sounds like a bug to me if vcpu_id does not fit, 
and you would make it worse by masking.


But this is already addressed by max_vcpus in the vgic_ops. So please 
drop the pointless mask.


Lastly, I would have expected you to try to address my remark about 
hardcoded offsets everywhere. In this case,
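
For illustration, one way to avoid both the open-coded mask and the hardcoded
shift (a sketch; it assumes a GICR_TYPER_PROC_NUM mask added to
gic_v3_defs.h and uses Xen's existing MASK_INSR() macro):

#define GICR_TYPER_PROC_NUM     (0xffffULL << 8)

    /* We use the vCPU ID as the redistributor ID in bits[23:8] */
    typer |= MASK_INSR(v->vcpu_id, GICR_TYPER_PROC_NUM);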




 if ( v->arch.vgic.flags & VGIC_V3_RDIST_LAST )
 typer |= GICR_TYPER_LAST;

+if ( v->domain->arch.vgic.has_its )
+typer |= GICR_TYPER_PLPIS;
+
 *r = vgic_reg64_extract(typer, info);

 return 1;
@@ -426,6 +441,28 @@ static uint64_t sanitize_pendbaser(uint64_t reg)
 return reg;
 }

+static void vgic_vcpu_enable_lpis(struct vcpu *v)
+{
+uint64_t reg = v->domain->arch.vgic.rdist_propbase;
+unsigned int nr_lpis = BIT((reg & 0x1f) + 1);
+
+/* rdists_enabled is protected by the domain lock. */
+ASSERT(spin_is_locked(&v->domain->arch.vgic.lock));
+
+if ( nr_lpis < LPI_OFFSET )
+nr_lpis = 0;
+else
+nr_lpis -= LPI_OFFSET;
+
+if ( !v->domain->arch.vgic.rdists_enabled )
+{
+v->domain->arch.vgic.nr_lpis = nr_lpis;
+v->domain->arch.vgic.rdists_enabled = true;
+}
+
+v->arch.vgic.flags |= VGIC_V3_LPIS_ENABLED;
+}
+
 static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
   uint32_t gicr_reg,
   register_t r)
@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, 
mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects rdists_enabled */


Getting back to the locking. I don't see any place where we take the 
domain vgic lock before the vCPU vgic lock. So this raises the question of 
why this ordering was chosen, and why this lock is not moved into 
vgic_vcpu_enable_lpis.

At least this requires documentation in the code and an explanation in the 
commit message.



+spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+/* LPIs can only be enabled once, but never disabled again. */
+if ( (r & GICR_CTLR_ENABLE_LPIS) &&
+ !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
+vgic_vcpu_enable_lpis(v);

[Xen-devel] [PATCH v9 12/28] ARM: vGIC: advertise LPI support

2017-05-11 Thread Andre Przywara
To let a guest know about the availability of virtual LPIs, set the
respective bits in the virtual GIC registers and let a guest control
the LPI enable bit.
Only report the LPI capability if the host has initialized at least
one ITS.
This removes a "TBD" comment, as we now populate the processor number
in the GICR_TYPE register.
Advertise 24 bits worth of LPIs to the guest.

Signed-off-by: Andre Przywara 
---
 xen/arch/arm/vgic-v3.c | 70 ++
 1 file changed, 65 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 38c123c..6dbdb2e 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -170,8 +170,19 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, 
mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* We have not implemented LPI's, read zero */
-goto read_as_zero_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto read_as_zero_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+spin_lock_irqsave(&v->arch.vgic.lock, flags);
+*r = vgic_reg32_extract(!!(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED),
+info);
+spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+return 1;
+}
 
 case VREG32(GICR_IIDR):
 if ( dabt.size != DABT_WORD ) goto bad_width;
@@ -183,16 +194,20 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, 
mmio_info_t *info,
 uint64_t typer, aff;
 
 if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
-/* TBD: Update processor id in [23:8] when ITS support is added */
 aff = (MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 3) << 56 |
MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 2) << 48 |
MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 1) << 40 |
MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 0) << 32);
 typer = aff;
+/* We use the VCPU ID as the redistributor ID in bits[23:8] */
+typer |= (v->vcpu_id & 0xffff) << 8;
 
 if ( v->arch.vgic.flags & VGIC_V3_RDIST_LAST )
 typer |= GICR_TYPER_LAST;
 
+if ( v->domain->arch.vgic.has_its )
+typer |= GICR_TYPER_PLPIS;
+
 *r = vgic_reg64_extract(typer, info);
 
 return 1;
@@ -426,6 +441,28 @@ static uint64_t sanitize_pendbaser(uint64_t reg)
 return reg;
 }
 
+static void vgic_vcpu_enable_lpis(struct vcpu *v)
+{
+uint64_t reg = v->domain->arch.vgic.rdist_propbase;
+unsigned int nr_lpis = BIT((reg & 0x1f) + 1);
+
+/* rdists_enabled is protected by the domain lock. */
+ASSERT(spin_is_locked(&v->domain->arch.vgic.lock));
+
+if ( nr_lpis < LPI_OFFSET )
+nr_lpis = 0;
+else
+nr_lpis -= LPI_OFFSET;
+
+if ( !v->domain->arch.vgic.rdists_enabled )
+{
+v->domain->arch.vgic.nr_lpis = nr_lpis;
+v->domain->arch.vgic.rdists_enabled = true;
+}
+
+v->arch.vgic.flags |= VGIC_V3_LPIS_ENABLED;
+}
+
 static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
   uint32_t gicr_reg,
   register_t r)
@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, 
mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects rdists_enabled */
+spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+/* LPIs can only be enabled once, but never disabled again. */
+if ( (r & GICR_CTLR_ENABLE_LPIS) &&
+ !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
+vgic_vcpu_enable_lpis(v);
+
+spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+vgic_unlock(v);
+
+return 1;
+}
 
 case VREG32(GICR_IIDR):
 /* RO */
@@ -1058,6 +1113,11 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, 
mmio_info_t *info,
 typer = ((ncpus - 1) << GICD_TYPE_CPUS_SHIFT |
  DIV_ROUND_UP(v->domain->arch.vgic.nr_spis, 32));
 
+if ( v->domain->arch.vgic.has_its )
+{
+typer |= GICD_TYPE_LPIS;
+irq_bits = v->domain->arch.vgic.intid_bits;
+}
 typer |= (irq_bits - 1) << GICD_TYPE_ID_BITS_SHIFT;
 
 *r = vgic_reg32_extract(typer, info);
-- 
2.9.0

