Re: [Qemu-devel] [PATCH v2 02/13] spapr/xive: add hcall support when under KVM

2019-03-14 Thread David Gibson
On Thu, Mar 14, 2019 at 10:24:49PM +0100, Cédric Le Goater wrote:
> On 3/14/19 3:11 AM, David Gibson wrote:
> > On Wed, Mar 13, 2019 at 11:43:54AM +0100, Cédric Le Goater wrote:
> >> On 3/12/19 11:26 AM, David Gibson wrote:
> >>> On Mon, Mar 11, 2019 at 06:32:05PM +0100, Cédric Le Goater wrote:
>  On 2/26/19 12:22 AM, David Gibson wrote:
> > On Fri, Feb 22, 2019 at 02:13:11PM +0100, Cédric Le Goater wrote:
> >>> [snip]
> >> +void kvmppc_xive_set_source_config(sPAPRXive *xive, uint32_t lisn, 
> >> XiveEAS *eas,
> >> +   Error **errp)
> >> +{
> >> +uint32_t end_idx;
> >> +uint32_t end_blk;
> >> +uint32_t eisn;
> >> +uint8_t priority;
> >> +uint32_t server;
> >> +uint64_t kvm_src;
> >> +Error *local_err = NULL;
> >> +
> >> +/*
> >> + * No need to set a MASKED source, this is the default state after
> >> + * reset.
> >
> > I don't quite follow this comment, why is there no need to call a
> > MASKED source?
> 
>  because MASKED is the default state in which KVM initializes the IRQ. I 
>  will
>  clarify.
> >>>
> >>> I believe it's possible - though rare - to process an incoming
> >>> migration on an established VM which isn't in fresh reset state.  So
> >>> it's best not to rely on that.
> >>>
> >> +static void xive_esb_trigger(XiveSource *xsrc, int srcno)
> >> +{
> >> +unsigned long addr = (unsigned long) xsrc->esb_mmap +
> >> +xive_source_esb_page(xsrc, srcno);
> >> +
> >> +*((uint64_t *) addr) = 0x0;
> >> +}
> >
> > Also.. aren't some of these register accesses likely to need memory
> > barriers?
> 
>  AIUI, these are CI pages. So we shouldn't need barriers.
> >>>
> >>> CI doesn't negate the need for barriers, although it might change the
> >>> type you need.  At the very least you need a compiler barrier to stop
> >>> it re-ordering the access, but you can also have in-cpu store and load
> >>> queues.
> >>>
> >>
> >> ok. So I will need to add some smp_r/wmb() 
> > 
> > No, smp_[rw]mb() is for cases where it's strictly about cpu vs. cpu
> > ordering.  Here it's cpu vs. IO ordering so you need plain [rw]mb().
> 
> I don't see any in QEMU ?

Ah, my mistake.  I was mixing up the kernel atomics and the qemu
atomics.
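
[Editor's note] To make the barrier point concrete, here is a hypothetical sketch, not the patch's code: the minimum fix is a compiler barrier so the store to the cache-inhibited ESB page cannot be reordered at compile time. The exact primitive needed (compiler barrier alone, or a full hardware barrier such as PPC `sync` via QEMU's `smp_mb()`) is precisely what the thread is debating; the sketch only shows where such a barrier would sit.

```c
#include <assert.h>
#include <stdint.h>

/* Compiler barrier: stops the compiler from moving memory accesses across
 * it.  It does NOT order the CPU's own store/load queues; a hardware
 * barrier (e.g. "sync" on PPC) would be the stronger option. */
#define barrier() __asm__ __volatile__("" ::: "memory")

/* 'esb_page' stands in for the mapped CI page, i.e.
 * xsrc->esb_mmap + xive_source_esb_page(xsrc, srcno). */
static void esb_trigger(volatile uint64_t *esb_page)
{
    barrier();       /* order earlier data writes before the trigger store */
    *esb_page = 0;   /* the store that triggers the interrupt */
    barrier();       /* keep later accesses from moving before the store */
}
```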

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson





Re: [Qemu-devel] [PATCH v2 02/13] spapr/xive: add hcall support when under KVM

2019-03-11 Thread Cédric Le Goater
On 2/26/19 12:22 AM, David Gibson wrote:
> On Fri, Feb 22, 2019 at 02:13:11PM +0100, Cédric Le Goater wrote:
>> XIVE hcalls are all redirected to QEMU as none are on a fast path.
>> When necessary, QEMU invokes KVM through specific ioctls to perform
>> host operations. QEMU should have done the necessary checks before
>> calling KVM and, in case of failure, H_HARDWARE is simply returned.
>>
>> H_INT_ESB is a special case that could have been handled under KVM
>> but the impact on performance was low when under QEMU. Here are some
>> figures :
>>
> >> kernel irqchip  OFF   ON
> >> H_INT_ESB       KVM   QEMU
> >>
> >> rtl8139 (LSI )  1.19  1.24  1.23  Gbits/sec
> >> virtio         31.80 42.30   --   Gbits/sec
>>
>> Signed-off-by: Cédric Le Goater 
>> ---
>>  include/hw/ppc/spapr_xive.h |  15 +++
>>  hw/intc/spapr_xive.c|  87 +++--
>>  hw/intc/spapr_xive_kvm.c| 184 
>>  3 files changed, 278 insertions(+), 8 deletions(-)
>>
>> diff --git a/include/hw/ppc/spapr_xive.h b/include/hw/ppc/spapr_xive.h
>> index ab6732b14a02..749c6cbc2c56 100644
>> --- a/include/hw/ppc/spapr_xive.h
>> +++ b/include/hw/ppc/spapr_xive.h
>> @@ -55,9 +55,24 @@ void spapr_xive_set_tctx_os_cam(XiveTCTX *tctx);
>>  void spapr_xive_mmio_set_enabled(sPAPRXive *xive, bool enable);
>>  void spapr_xive_map_mmio(sPAPRXive *xive);
>>  
>> +int spapr_xive_end_to_target(uint8_t end_blk, uint32_t end_idx,
>> + uint32_t *out_server, uint8_t *out_prio);
>> +
>>  /*
>>   * KVM XIVE device helpers
>>   */
>>  void kvmppc_xive_connect(sPAPRXive *xive, Error **errp);
>> +void kvmppc_xive_reset(sPAPRXive *xive, Error **errp);
>> +void kvmppc_xive_set_source_config(sPAPRXive *xive, uint32_t lisn, XiveEAS 
>> *eas,
>> +   Error **errp);
>> +void kvmppc_xive_sync_source(sPAPRXive *xive, uint32_t lisn, Error **errp);
>> +uint64_t kvmppc_xive_esb_rw(XiveSource *xsrc, int srcno, uint32_t offset,
>> +uint64_t data, bool write);
>> +void kvmppc_xive_set_queue_config(sPAPRXive *xive, uint8_t end_blk,
>> + uint32_t end_idx, XiveEND *end,
>> + Error **errp);
>> +void kvmppc_xive_get_queue_config(sPAPRXive *xive, uint8_t end_blk,
>> + uint32_t end_idx, XiveEND *end,
>> + Error **errp);
>>  
>>  #endif /* PPC_SPAPR_XIVE_H */
>> diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
>> index c24d649e3668..3db24391e31c 100644
>> --- a/hw/intc/spapr_xive.c
>> +++ b/hw/intc/spapr_xive.c
>> @@ -86,6 +86,19 @@ static int spapr_xive_target_to_nvt(uint32_t target,
>>   * sPAPR END indexing uses a simple mapping of the CPU vcpu_id, 8
>>   * priorities per CPU
>>   */
>> +int spapr_xive_end_to_target(uint8_t end_blk, uint32_t end_idx,
>> + uint32_t *out_server, uint8_t *out_prio)
>> +{
> 
> Since you don't support irq blocks as yet, should this error out
> rather than ignoring if end_blk != 0?

Yes, we could. I will add a test against SPAPR_XIVE_BLOCK, which is the value
of the sPAPR block ID. I would like to be able to track where it is used,
even if it is constant.
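
[Editor's note] A minimal sketch of what such a check could look like (the constant name `SPAPR_XIVE_BLOCK` and the `-EINVAL` return convention are assumptions, not the final patch). It also shows the END-index mapping the patch relies on: eight priorities per server, so `end_idx = (server << 3) | prio`.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Assumed single-block ID for sPAPR; name and value are illustrative. */
#define SPAPR_XIVE_BLOCK 0x0

/* Decode an END (blk, idx) pair into (server, priority); reject any
 * block other than the single sPAPR block instead of ignoring it. */
static int end_to_target(uint8_t end_blk, uint32_t end_idx,
                         uint32_t *out_server, uint8_t *out_prio)
{
    if (end_blk != SPAPR_XIVE_BLOCK) {
        return -EINVAL;              /* irq blocks are not supported yet */
    }
    if (out_server) {
        *out_server = end_idx >> 3;  /* 8 priorities per server */
    }
    if (out_prio) {
        *out_prio = end_idx & 0x7;
    }
    return 0;
}
```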

> 
>> +if (out_server) {
>> +*out_server = end_idx >> 3;
>> +}
>> +
>> +if (out_prio) {
>> +*out_prio = end_idx & 0x7;
>> +}
>> +return 0;
>> +}
>> +
>>  static void spapr_xive_cpu_to_end(PowerPCCPU *cpu, uint8_t prio,
>>uint8_t *out_end_blk, uint32_t 
>> *out_end_idx)
>>  {
>> @@ -792,6 +805,16 @@ static target_ulong h_int_set_source_config(PowerPCCPU 
>> *cpu,
>>  new_eas.w = xive_set_field64(EAS_END_DATA, new_eas.w, eisn);
>>  }
>>  
>> +if (kvm_irqchip_in_kernel()) {
>> +Error *local_err = NULL;
>> +
> >> +kvmppc_xive_set_source_config(xive, lisn, &new_eas, &local_err);
>> +if (local_err) {
>> +error_report_err(local_err);
>> +return H_HARDWARE;
>> +}
>> +}
>> +
>>  out:
>>  xive->eat[lisn] = new_eas;
>>  return H_SUCCESS;
>> @@ -1097,6 +1120,16 @@ static target_ulong h_int_set_queue_config(PowerPCCPU 
>> *cpu,
>>   */
>>  
>>  out:
>> +if (kvm_irqchip_in_kernel()) {
>> +Error *local_err = NULL;
>> +
> >> +kvmppc_xive_set_queue_config(xive, end_blk, end_idx, &end, &local_err);
>> +if (local_err) {
>> +error_report_err(local_err);
>> +return H_HARDWARE;
>> +}
>> +}
>> +
>>  /* Update END */
> >>  memcpy(&xive->endt[end_idx], &end, sizeof(XiveEND));
>>  return H_SUCCESS;
>> @@ -1189,6 +1222,16 @@ static target_ulong h_int_get_queue_config(PowerPCCPU 
>> *cpu,
>>  args[2] = 0;
>>  }
>>  
>> +if (kvm_irqchip_in_kernel()) {
>> +Error *local_err = NULL;
>> +
> >> +kvmppc_xive_get_queue_config(xive, end_blk, end_idx, end, &local_err);
>> +if 

[Qemu-devel] [PATCH v2 02/13] spapr/xive: add hcall support when under KVM

2019-02-22 Thread Cédric Le Goater
XIVE hcalls are all redirected to QEMU as none are on a fast path.
When necessary, QEMU invokes KVM through specific ioctls to perform
host operations. QEMU should have done the necessary checks before
calling KVM and, in case of failure, H_HARDWARE is simply returned.

H_INT_ESB is a special case that could have been handled under KVM
but the impact on performance was low when under QEMU. Here are some
figures :

kernel irqchip  OFF   ON
H_INT_ESB       KVM   QEMU

rtl8139 (LSI )  1.19  1.24  1.23  Gbits/sec
virtio         31.80 42.30   --   Gbits/sec

Signed-off-by: Cédric Le Goater 
---
 include/hw/ppc/spapr_xive.h |  15 +++
 hw/intc/spapr_xive.c|  87 +++--
 hw/intc/spapr_xive_kvm.c| 184 
 3 files changed, 278 insertions(+), 8 deletions(-)

diff --git a/include/hw/ppc/spapr_xive.h b/include/hw/ppc/spapr_xive.h
index ab6732b14a02..749c6cbc2c56 100644
--- a/include/hw/ppc/spapr_xive.h
+++ b/include/hw/ppc/spapr_xive.h
@@ -55,9 +55,24 @@ void spapr_xive_set_tctx_os_cam(XiveTCTX *tctx);
 void spapr_xive_mmio_set_enabled(sPAPRXive *xive, bool enable);
 void spapr_xive_map_mmio(sPAPRXive *xive);
 
+int spapr_xive_end_to_target(uint8_t end_blk, uint32_t end_idx,
+ uint32_t *out_server, uint8_t *out_prio);
+
 /*
  * KVM XIVE device helpers
  */
 void kvmppc_xive_connect(sPAPRXive *xive, Error **errp);
+void kvmppc_xive_reset(sPAPRXive *xive, Error **errp);
+void kvmppc_xive_set_source_config(sPAPRXive *xive, uint32_t lisn, XiveEAS 
*eas,
+   Error **errp);
+void kvmppc_xive_sync_source(sPAPRXive *xive, uint32_t lisn, Error **errp);
+uint64_t kvmppc_xive_esb_rw(XiveSource *xsrc, int srcno, uint32_t offset,
+uint64_t data, bool write);
+void kvmppc_xive_set_queue_config(sPAPRXive *xive, uint8_t end_blk,
+ uint32_t end_idx, XiveEND *end,
+ Error **errp);
+void kvmppc_xive_get_queue_config(sPAPRXive *xive, uint8_t end_blk,
+ uint32_t end_idx, XiveEND *end,
+ Error **errp);
 
 #endif /* PPC_SPAPR_XIVE_H */
diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
index c24d649e3668..3db24391e31c 100644
--- a/hw/intc/spapr_xive.c
+++ b/hw/intc/spapr_xive.c
@@ -86,6 +86,19 @@ static int spapr_xive_target_to_nvt(uint32_t target,
  * sPAPR END indexing uses a simple mapping of the CPU vcpu_id, 8
  * priorities per CPU
  */
+int spapr_xive_end_to_target(uint8_t end_blk, uint32_t end_idx,
+ uint32_t *out_server, uint8_t *out_prio)
+{
+if (out_server) {
+*out_server = end_idx >> 3;
+}
+
+if (out_prio) {
+*out_prio = end_idx & 0x7;
+}
+return 0;
+}
+
 static void spapr_xive_cpu_to_end(PowerPCCPU *cpu, uint8_t prio,
   uint8_t *out_end_blk, uint32_t *out_end_idx)
 {
@@ -792,6 +805,16 @@ static target_ulong h_int_set_source_config(PowerPCCPU 
*cpu,
 new_eas.w = xive_set_field64(EAS_END_DATA, new_eas.w, eisn);
 }
 
+if (kvm_irqchip_in_kernel()) {
+Error *local_err = NULL;
+
+kvmppc_xive_set_source_config(xive, lisn, &new_eas, &local_err);
+if (local_err) {
+error_report_err(local_err);
+return H_HARDWARE;
+}
+}
+
 out:
 xive->eat[lisn] = new_eas;
 return H_SUCCESS;
@@ -1097,6 +1120,16 @@ static target_ulong h_int_set_queue_config(PowerPCCPU 
*cpu,
  */
 
 out:
+if (kvm_irqchip_in_kernel()) {
+Error *local_err = NULL;
+
+kvmppc_xive_set_queue_config(xive, end_blk, end_idx, &end, &local_err);
+if (local_err) {
+error_report_err(local_err);
+return H_HARDWARE;
+}
+}
+
 /* Update END */
 memcpy(&xive->endt[end_idx], &end, sizeof(XiveEND));
 return H_SUCCESS;
@@ -1189,6 +1222,16 @@ static target_ulong h_int_get_queue_config(PowerPCCPU 
*cpu,
 args[2] = 0;
 }
 
+if (kvm_irqchip_in_kernel()) {
+Error *local_err = NULL;
+
+kvmppc_xive_get_queue_config(xive, end_blk, end_idx, end, &local_err);
+if (local_err) {
+error_report_err(local_err);
+return H_HARDWARE;
+}
+}
+
 /* TODO: do we need any locking on the END ? */
 if (flags & SPAPR_XIVE_END_DEBUG) {
 /* Load the event queue generation number into the return flags */
@@ -1341,15 +1384,20 @@ static target_ulong h_int_esb(PowerPCCPU *cpu,
 return H_P3;
 }
 
-mmio_addr = xive->vc_base + xive_source_esb_mgmt(xsrc, lisn) + offset;
+if (kvm_irqchip_in_kernel()) {
+args[0] = kvmppc_xive_esb_rw(xsrc, lisn, offset, data,
+ flags & SPAPR_XIVE_ESB_STORE);
+} else {
+mmio_addr = xive->vc_base + xive_source_esb_mgmt(xsrc, lisn) + offset;
 
-if (dma_memory_rw(&address_space_memory, mmio_addr,