Re: [Xen-devel] [PATCH 11/28] ARM: GICv3: forward pending LPIs to guests

2017-03-03 Thread Julien Grall

Hi Jan,

On 03/03/17 07:58, Jan Beulich wrote:

On 02.03.17 at 21:56,  wrote:

Ping? I'd like the question to be sorted out before Andre sends a
new version.

On 02/15/2017 09:25 PM, Stefano Stabellini wrote:

On Wed, 15 Feb 2017, Julien Grall wrote:

Hi Stefano,

On 14/02/17 21:00, Stefano Stabellini wrote:

On Mon, 30 Jan 2017, Andre Przywara wrote:

+/*
+ * Handle incoming LPIs, which are a bit special, because they are
potentially
+ * numerous and also only get injected into guests. Treat them specially
here,
+ * by just looking up their target vCPU and virtual LPI number and hand
it
+ * over to the injection function.
+ */
+void do_LPI(unsigned int lpi)
+{
+struct domain *d;
+union host_lpi *hlpip, hlpi;
+struct vcpu *vcpu;
+
+WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
+
+hlpip = gic_get_host_lpi(lpi);
+if ( !hlpip )
+return;
+
+hlpi.data = read_u64_atomic(&hlpip->data);
+
+/* We may have mapped more host LPIs than the guest actually asked
for. */
+if ( !hlpi.virt_lpi )
+return;
+
+d = get_domain_by_id(hlpi.dom_id);
+if ( !d )
+return;
+
+if ( hlpi.vcpu_id >= d->max_vcpus )
+{
+put_domain(d);
+return;
+}
+
+vcpu = d->vcpu[hlpi.vcpu_id];
+
+put_domain(d);
+
+vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);


put_domain should be here


Why? I don't even understand why we would need to take a reference on the
domain for LPIs. Wouldn't it be enough to use rcu_lock_domain_by_id here?


I think that rcu_lock_domain_by_id would also work, but similarly we
would need to call rcu_unlock here.

To be honest, I don't know exactly in which cases get_domain should be
used instead of rcu_lock_domain_by_id.


Aiui get_domain() is needed when you want to retain the reference
across an operation that may involve blocking/scheduling. The RCU
variant should be sufficient whenever you only need to make sure
the domain won't go away for the duration of (a portion of) a
function, since final domain destruction gets carried out from an
RCU callback.


Thank you for the explanation. I think it makes sense. There will be no 
scheduling or softirq_pending involved in do_LPI, so using 
rcu_lock_domain_by_id seems more suitable here.


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 11/28] ARM: GICv3: forward pending LPIs to guests

2017-03-02 Thread Jan Beulich
>>> On 02.03.17 at 21:56,  wrote:
> Ping? I'd like the question to be sorted out before Andre sends a 
> new version.
> 
> On 02/15/2017 09:25 PM, Stefano Stabellini wrote:
>> On Wed, 15 Feb 2017, Julien Grall wrote:
>>> Hi Stefano,
>>>
>>> On 14/02/17 21:00, Stefano Stabellini wrote:
 On Mon, 30 Jan 2017, Andre Przywara wrote:
> +/*
> + * Handle incoming LPIs, which are a bit special, because they are
> potentially
> + * numerous and also only get injected into guests. Treat them specially
> here,
> + * by just looking up their target vCPU and virtual LPI number and hand
> it
> + * over to the injection function.
> + */
> +void do_LPI(unsigned int lpi)
> +{
> +struct domain *d;
> +union host_lpi *hlpip, hlpi;
> +struct vcpu *vcpu;
> +
> +WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
> +
> +hlpip = gic_get_host_lpi(lpi);
> +if ( !hlpip )
> +return;
> +
> +hlpi.data = read_u64_atomic(&hlpip->data);
> +
> +/* We may have mapped more host LPIs than the guest actually asked
> for. */
> +if ( !hlpi.virt_lpi )
> +return;
> +
> +d = get_domain_by_id(hlpi.dom_id);
> +if ( !d )
> +return;
> +
> +if ( hlpi.vcpu_id >= d->max_vcpus )
> +{
> +put_domain(d);
> +return;
> +}
> +
> +vcpu = d->vcpu[hlpi.vcpu_id];
> +
> +put_domain(d);
> +
> +vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);

 put_domain should be here
>>>
>>> Why? I don't even understand why we would need to take a reference on the
>>> domain for LPIs. Wouldn't it be enough to use rcu_lock_domain_by_id here?
>>
>> I think that rcu_lock_domain_by_id would also work, but similarly we
>> would need to call rcu_unlock here.
>>
>> To be honest, I don't know exactly in which cases get_domain should be
>> used instead of rcu_lock_domain_by_id.

Aiui get_domain() is needed when you want to retain the reference
across an operation that may involve blocking/scheduling. The RCU
variant should be sufficient whenever you only need to make sure
the domain won't go away for the duration of (a portion of) a
function, since final domain destruction gets carried out from an
RCU callback.

Jan




Re: [Xen-devel] [PATCH 11/28] ARM: GICv3: forward pending LPIs to guests

2017-03-02 Thread Julien Grall

Hi,

Ping? I'd like the question to be sorted out before Andre sends a 
new version.


On 02/15/2017 09:25 PM, Stefano Stabellini wrote:

On Wed, 15 Feb 2017, Julien Grall wrote:

Hi Stefano,

On 14/02/17 21:00, Stefano Stabellini wrote:

On Mon, 30 Jan 2017, Andre Przywara wrote:

+/*
+ * Handle incoming LPIs, which are a bit special, because they are
potentially
+ * numerous and also only get injected into guests. Treat them specially
here,
+ * by just looking up their target vCPU and virtual LPI number and hand
it
+ * over to the injection function.
+ */
+void do_LPI(unsigned int lpi)
+{
+struct domain *d;
+union host_lpi *hlpip, hlpi;
+struct vcpu *vcpu;
+
+WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
+
+hlpip = gic_get_host_lpi(lpi);
+if ( !hlpip )
+return;
+
+hlpi.data = read_u64_atomic(&hlpip->data);
+
+/* We may have mapped more host LPIs than the guest actually asked
for. */
+if ( !hlpi.virt_lpi )
+return;
+
+d = get_domain_by_id(hlpi.dom_id);
+if ( !d )
+return;
+
+if ( hlpi.vcpu_id >= d->max_vcpus )
+{
+put_domain(d);
+return;
+}
+
+vcpu = d->vcpu[hlpi.vcpu_id];
+
+put_domain(d);
+
+vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);


put_domain should be here


Why? I don't even understand why we would need to take a reference on the
domain for LPIs. Wouldn't it be enough to use rcu_lock_domain_by_id here?


I think that rcu_lock_domain_by_id would also work, but similarly we
would need to call rcu_unlock here.

To be honest, I don't know exactly in which cases get_domain should be
used instead of rcu_lock_domain_by_id.

CC'ing the x86 guys, who might know the answer.



--
Julien Grall



Re: [Xen-devel] [PATCH 11/28] ARM: GICv3: forward pending LPIs to guests

2017-02-15 Thread Stefano Stabellini
On Wed, 15 Feb 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 14/02/17 21:00, Stefano Stabellini wrote:
> > On Mon, 30 Jan 2017, Andre Przywara wrote:
> > > +/*
> > > + * Handle incoming LPIs, which are a bit special, because they are
> > > potentially
> > > + * numerous and also only get injected into guests. Treat them specially
> > > here,
> > > + * by just looking up their target vCPU and virtual LPI number and hand
> > > it
> > > + * over to the injection function.
> > > + */
> > > +void do_LPI(unsigned int lpi)
> > > +{
> > > +struct domain *d;
> > > +union host_lpi *hlpip, hlpi;
> > > +struct vcpu *vcpu;
> > > +
> > > +WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
> > > +
> > > +hlpip = gic_get_host_lpi(lpi);
> > > +if ( !hlpip )
> > > +return;
> > > +
> > > +hlpi.data = read_u64_atomic(&hlpip->data);
> > > +
> > > +/* We may have mapped more host LPIs than the guest actually asked
> > > for. */
> > > +if ( !hlpi.virt_lpi )
> > > +return;
> > > +
> > > +d = get_domain_by_id(hlpi.dom_id);
> > > +if ( !d )
> > > +return;
> > > +
> > > +if ( hlpi.vcpu_id >= d->max_vcpus )
> > > +{
> > > +put_domain(d);
> > > +return;
> > > +}
> > > +
> > > +vcpu = d->vcpu[hlpi.vcpu_id];
> > > +
> > > +put_domain(d);
> > > +
> > > +vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);
> > 
> > put_domain should be here
> 
> Why? I don't even understand why we would need to take a reference on the
> domain for LPIs. Wouldn't it be enough to use rcu_lock_domain_by_id here?

I think that rcu_lock_domain_by_id would also work, but similarly we
would need to call rcu_unlock here.

To be honest, I don't know exactly in which cases get_domain should be
used instead of rcu_lock_domain_by_id.

CC'ing the x86 guys, who might know the answer.



Re: [Xen-devel] [PATCH 11/28] ARM: GICv3: forward pending LPIs to guests

2017-02-15 Thread Julien Grall

Hi Andre,

On 30/01/17 18:31, Andre Przywara wrote:

Upon receiving an LPI, we need to find the right VCPU and virtual IRQ
number to get this IRQ injected.
Iterate our two-level LPI table to find this information quickly when
the host takes an LPI. Call the existing injection function to let the
GIC emulation deal with this interrupt.

Signed-off-by: Andre Przywara 
---
 xen/arch/arm/gic-v3-lpi.c | 41 +
 xen/arch/arm/gic.c|  6 --
 xen/include/asm-arm/irq.h |  8 
 3 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index 8f6e7f3..d270053 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -86,6 +86,47 @@ uint64_t gicv3_get_redist_address(int cpu, bool use_pta)
 return per_cpu(redist_id, cpu) << 16;
 }

+/*
+ * Handle incoming LPIs, which are a bit special, because they are potentially
+ * numerous and also only get injected into guests. Treat them specially here,
+ * by just looking up their target vCPU and virtual LPI number and hand it
+ * over to the injection function.
+ */
+void do_LPI(unsigned int lpi)
+{
+struct domain *d;
+union host_lpi *hlpip, hlpi;
+struct vcpu *vcpu;
+
+WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
+
+hlpip = gic_get_host_lpi(lpi);
+if ( !hlpip )
+return;
+
+hlpi.data = read_u64_atomic(&hlpip->data);
+
+/* We may have mapped more host LPIs than the guest actually asked for. */


Another possibility is that the interrupt was received at the same time 
the guest was configuring it. What will happen if the interrupt is lost?



+if ( !hlpi.virt_lpi )
+return;
+
+d = get_domain_by_id(hlpi.dom_id);
+if ( !d )
+return;
+
+if ( hlpi.vcpu_id >= d->max_vcpus )


A comment would certainly be useful here to explain this check.


+{
+put_domain(d);
+return;
+}
+
+vcpu = d->vcpu[hlpi.vcpu_id];
+
+put_domain(d);
+
+vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);
+}
+
 uint64_t gicv3_lpi_allocate_pendtable(void)
 {
 uint64_t reg;
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index bd3c032..7286e5d 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -700,8 +700,10 @@ void gic_interrupt(struct cpu_user_regs *regs, int is_fiq)
 local_irq_enable();
 do_IRQ(regs, irq, is_fiq);
 local_irq_disable();
-}
-else if (unlikely(irq < 16))
+} else if ( is_lpi(irq) )


Coding style:

}
else if (...)
{
}
else if (...)


+{
+do_LPI(irq);


I really don't want to see GICv3-specific code called from common code. 
Please introduce a specific callback in gic_hw_operations.



+} else if ( unlikely(irq < 16) )
 {
 do_sgi(regs, irq);
 }
diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index 8f7a167..ee47de8 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -34,6 +34,14 @@ struct irq_desc *__irq_to_desc(int irq);

 void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq);

+#ifdef CONFIG_HAS_ITS
+void do_LPI(unsigned int irq);
+#else
+static inline void do_LPI(unsigned int irq)
+{
+}
+#endif
+


This would avoid the ugly hack of having do_LPI defined in gic-v3-its.c 
but declared in irq.h.


Cheers,

--
Julien Grall



Re: [Xen-devel] [PATCH 11/28] ARM: GICv3: forward pending LPIs to guests

2017-02-15 Thread Julien Grall

Hi Stefano,

On 14/02/17 21:00, Stefano Stabellini wrote:

On Mon, 30 Jan 2017, Andre Przywara wrote:

+/*
+ * Handle incoming LPIs, which are a bit special, because they are potentially
+ * numerous and also only get injected into guests. Treat them specially here,
+ * by just looking up their target vCPU and virtual LPI number and hand it
+ * over to the injection function.
+ */
+void do_LPI(unsigned int lpi)
+{
+struct domain *d;
+union host_lpi *hlpip, hlpi;
+struct vcpu *vcpu;
+
+WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
+
+hlpip = gic_get_host_lpi(lpi);
+if ( !hlpip )
+return;
+
+hlpi.data = read_u64_atomic(&hlpip->data);
+
+/* We may have mapped more host LPIs than the guest actually asked for. */
+if ( !hlpi.virt_lpi )
+return;
+
+d = get_domain_by_id(hlpi.dom_id);
+if ( !d )
+return;
+
+if ( hlpi.vcpu_id >= d->max_vcpus )
+{
+put_domain(d);
+return;
+}
+
+vcpu = d->vcpu[hlpi.vcpu_id];
+
+put_domain(d);
+
+vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);


put_domain should be here


Why? I don't even understand why we would need to take a reference on 
the domain for LPIs. Wouldn't it be enough to use rcu_lock_domain_by_id 
here?


Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH 11/28] ARM: GICv3: forward pending LPIs to guests

2017-02-14 Thread Stefano Stabellini
On Mon, 30 Jan 2017, Andre Przywara wrote:
> Upon receiving an LPI, we need to find the right VCPU and virtual IRQ
> number to get this IRQ injected.
> Iterate our two-level LPI table to find this information quickly when
> the host takes an LPI. Call the existing injection function to let the
> GIC emulation deal with this interrupt.
> 
> Signed-off-by: Andre Przywara 
> ---
>  xen/arch/arm/gic-v3-lpi.c | 41 +
>  xen/arch/arm/gic.c|  6 --
>  xen/include/asm-arm/irq.h |  8 
>  3 files changed, 53 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> index 8f6e7f3..d270053 100644
> --- a/xen/arch/arm/gic-v3-lpi.c
> +++ b/xen/arch/arm/gic-v3-lpi.c
> @@ -86,6 +86,47 @@ uint64_t gicv3_get_redist_address(int cpu, bool use_pta)
>  return per_cpu(redist_id, cpu) << 16;
>  }
>  
> +/*
> + * Handle incoming LPIs, which are a bit special, because they are 
> potentially
> + * numerous and also only get injected into guests. Treat them specially 
> here,
> + * by just looking up their target vCPU and virtual LPI number and hand it
> + * over to the injection function.
> + */
> +void do_LPI(unsigned int lpi)
> +{
> +struct domain *d;
> +union host_lpi *hlpip, hlpi;
> +struct vcpu *vcpu;
> +
> +WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
> +
> +hlpip = gic_get_host_lpi(lpi);
> +if ( !hlpip )
> +return;
> +
> +hlpi.data = read_u64_atomic(&hlpip->data);
> +
> +/* We may have mapped more host LPIs than the guest actually asked for. 
> */
> +if ( !hlpi.virt_lpi )
> +return;
> +
> +d = get_domain_by_id(hlpi.dom_id);
> +if ( !d )
> +return;
> +
> +if ( hlpi.vcpu_id >= d->max_vcpus )
> +{
> +put_domain(d);
> +return;
> +}
> +
> +vcpu = d->vcpu[hlpi.vcpu_id];
> +
> +put_domain(d);
> +
> +vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);

put_domain should be here


> +}
> +
>  uint64_t gicv3_lpi_allocate_pendtable(void)
>  {
>  uint64_t reg;
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index bd3c032..7286e5d 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -700,8 +700,10 @@ void gic_interrupt(struct cpu_user_regs *regs, int 
> is_fiq)
>  local_irq_enable();
>  do_IRQ(regs, irq, is_fiq);
>  local_irq_disable();
> -}
> -else if (unlikely(irq < 16))
> +} else if ( is_lpi(irq) )
> +{
> +do_LPI(irq);
> +} else if ( unlikely(irq < 16) )
>  {
>  do_sgi(regs, irq);
>  }
> diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
> index 8f7a167..ee47de8 100644
> --- a/xen/include/asm-arm/irq.h
> +++ b/xen/include/asm-arm/irq.h
> @@ -34,6 +34,14 @@ struct irq_desc *__irq_to_desc(int irq);
>  
>  void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq);
>  
> +#ifdef CONFIG_HAS_ITS
> +void do_LPI(unsigned int irq);
> +#else
> +static inline void do_LPI(unsigned int irq)
> +{
> +}
> +#endif
> +
>  #define domain_pirq_to_irq(d, pirq) (pirq)
>  
>  bool_t is_assignable_irq(unsigned int irq);
> -- 
> 2.9.0
> 



[Xen-devel] [PATCH 11/28] ARM: GICv3: forward pending LPIs to guests

2017-01-30 Thread Andre Przywara
Upon receiving an LPI, we need to find the right VCPU and virtual IRQ
number to get this IRQ injected.
Iterate our two-level LPI table to find this information quickly when
the host takes an LPI. Call the existing injection function to let the
GIC emulation deal with this interrupt.

Signed-off-by: Andre Przywara 
---
 xen/arch/arm/gic-v3-lpi.c | 41 +
 xen/arch/arm/gic.c|  6 --
 xen/include/asm-arm/irq.h |  8 
 3 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index 8f6e7f3..d270053 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -86,6 +86,47 @@ uint64_t gicv3_get_redist_address(int cpu, bool use_pta)
 return per_cpu(redist_id, cpu) << 16;
 }
 
+/*
+ * Handle incoming LPIs, which are a bit special, because they are potentially
+ * numerous and also only get injected into guests. Treat them specially here,
+ * by just looking up their target vCPU and virtual LPI number and handing it
+ * over to the injection function.
+ */
+void do_LPI(unsigned int lpi)
+{
+struct domain *d;
+union host_lpi *hlpip, hlpi;
+struct vcpu *vcpu;
+
+WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
+
+hlpip = gic_get_host_lpi(lpi);
+if ( !hlpip )
+return;
+
+hlpi.data = read_u64_atomic(&hlpip->data);
+
+/* We may have mapped more host LPIs than the guest actually asked for. */
+if ( !hlpi.virt_lpi )
+return;
+
+d = get_domain_by_id(hlpi.dom_id);
+if ( !d )
+return;
+
+if ( hlpi.vcpu_id >= d->max_vcpus )
+{
+put_domain(d);
+return;
+}
+
+vcpu = d->vcpu[hlpi.vcpu_id];
+
+put_domain(d);
+
+vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);
+}
+
 uint64_t gicv3_lpi_allocate_pendtable(void)
 {
 uint64_t reg;
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index bd3c032..7286e5d 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -700,8 +700,10 @@ void gic_interrupt(struct cpu_user_regs *regs, int is_fiq)
 local_irq_enable();
 do_IRQ(regs, irq, is_fiq);
 local_irq_disable();
-}
-else if (unlikely(irq < 16))
+} else if ( is_lpi(irq) )
+{
+do_LPI(irq);
+} else if ( unlikely(irq < 16) )
 {
 do_sgi(regs, irq);
 }
diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index 8f7a167..ee47de8 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -34,6 +34,14 @@ struct irq_desc *__irq_to_desc(int irq);
 
 void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq);
 
+#ifdef CONFIG_HAS_ITS
+void do_LPI(unsigned int irq);
+#else
+static inline void do_LPI(unsigned int irq)
+{
+}
+#endif
+
 #define domain_pirq_to_irq(d, pirq) (pirq)
 
 bool_t is_assignable_irq(unsigned int irq);
-- 
2.9.0

