Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-24 Thread Jan Kiszka
On 2012-04-24 14:59, Avi Kivity wrote:
> On 04/24/2012 03:07 PM, Jan Kiszka wrote:
>> On 2012-04-24 13:57, Avi Kivity wrote:
>>> On 03/29/2012 09:14 PM, Jan Kiszka wrote:
>>>> Currently, MSI messages can only be injected to in-kernel irqchips by
>>>> defining a corresponding IRQ route for each message. This is not only
>>>> unhandy if the MSI messages are generated "on the fly" by user space;
>>>> IRQ routes are also a limited resource that user space has to manage
>>>> carefully.
>>>>
>>>> By providing a direct injection path, we can both avoid using up limited
>>>> resources and simplify the necessary steps for user land.
>>>>
>>>>
>>>
>>> Applied to queue (for 3.5).
>>>
>>> Thanks for your patience.
>>
>> Oops, that was unexpectedly fast.
> 
> I hope you don't mean the ~ 1 month timeframe for the whole thing.

Really the last phase. I was preparing for another round of discussions.

> 
>> While extending and slightly reformatting the API docs, I noticed an
>> inconsistency. Will send fixes soon. Can you fold this into my patch, or
>> just apply it on top?
>>
>>
> 
> Since it's just in queue, not next, will fold into parent patch.
> 

Thanks,
Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-24 Thread Avi Kivity
On 04/24/2012 03:07 PM, Jan Kiszka wrote:
> On 2012-04-24 13:57, Avi Kivity wrote:
> > On 03/29/2012 09:14 PM, Jan Kiszka wrote:
> >> Currently, MSI messages can only be injected to in-kernel irqchips by
> >> defining a corresponding IRQ route for each message. This is not only
> >> unhandy if the MSI messages are generated "on the fly" by user space;
> >> IRQ routes are also a limited resource that user space has to manage
> >> carefully.
> >>
> >> By providing a direct injection path, we can both avoid using up limited
> >> resources and simplify the necessary steps for user land.
> >>
> >>
> > 
> > Applied to queue (for 3.5).
> > 
> > Thanks for your patience.
>
> Oops, that was unexpectedly fast.

I hope you don't mean the ~ 1 month timeframe for the whole thing.

> While extending and slightly reformatting the API docs, I noticed an
> inconsistency. Will send fixes soon. Can you fold this into my patch, or
> just apply it on top?
>
>

Since it's just in queue, not next, will fold into parent patch.

-- 
error compiling committee.c: too many arguments to function



Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-24 Thread Jan Kiszka
On 2012-04-24 13:57, Avi Kivity wrote:
> On 03/29/2012 09:14 PM, Jan Kiszka wrote:
>> Currently, MSI messages can only be injected to in-kernel irqchips by
>> defining a corresponding IRQ route for each message. This is not only
>> unhandy if the MSI messages are generated "on the fly" by user space;
>> IRQ routes are also a limited resource that user space has to manage
>> carefully.
>>
>> By providing a direct injection path, we can both avoid using up limited
>> resources and simplify the necessary steps for user land.
>>
>>
> 
> Applied to queue (for 3.5).
> 
> Thanks for your patience.

Oops, that was unexpectedly fast.

While extending and slightly reformatting the API docs, I noticed an
inconsistency. Will send fixes soon. Can you fold this into my patch, or
just apply it on top?

Thanks,
Jan

8<

KVM: Reorder KVM_SIGNAL_MSI API documentation

4.61 is not free as two earlier sections share the same number.

Signed-off-by: Jan Kiszka 
---
 Documentation/virtual/kvm/api.txt |   42 ++--
 1 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index ed27d1b..a155221 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1482,27 +1482,6 @@ See KVM_ASSIGN_DEV_IRQ for the data structure.  The target device is specified
 by assigned_dev_id.  In the flags field, only KVM_DEV_ASSIGN_MASK_INTX is
 evaluated.
 
-4.61 KVM_SIGNAL_MSI
-
-Capability: KVM_CAP_SIGNAL_MSI
-Architectures: x86
-Type: vm ioctl
-Parameters: struct kvm_msi (in)
-Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
-
-Directly inject a MSI message. Only valid with in-kernel irqchip that handles
-MSI messages.
-
-struct kvm_msi {
-   __u32 address_lo;
-   __u32 address_hi;
-   __u32 data;
-   __u32 flags;
-   __u8  pad[16];
-};
-
-No flags are defined so far. The corresponding field must be 0.
-
 4.62 KVM_CREATE_SPAPR_TCE
 
 Capability: KVM_CAP_SPAPR_TCE
@@ -1710,6 +1689,27 @@ where the guest will clear the flag: when the soft lockup watchdog timer resets
 itself or when a soft lockup is detected.  This ioctl can be called any time
 after pausing the vcpu, but before it is resumed.
 
+4.71 KVM_SIGNAL_MSI
+
+Capability: KVM_CAP_SIGNAL_MSI
+Architectures: x86
+Type: vm ioctl
+Parameters: struct kvm_msi (in)
+Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
+
+Directly inject a MSI message. Only valid with in-kernel irqchip that handles
+MSI messages.
+
+struct kvm_msi {
+   __u32 address_lo;
+   __u32 address_hi;
+   __u32 data;
+   __u32 flags;
+   __u8  pad[16];
+};
+
+No flags are defined so far. The corresponding field must be 0.
+
 5. The kvm_run structure
 
 Application code obtains a pointer to the kvm_run structure by
-- 
1.7.3.4
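
For context, a minimal userspace sketch of how the KVM_SIGNAL_MSI ioctl
documented above would be invoked. The capability check and in-kernel
irqchip setup are required by the patch; the concrete MSI address/data
values and the minimal error handling are illustrative assumptions only.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	struct kvm_msi msi;
	int kvm_fd, vm_fd, ret;

	kvm_fd = open("/dev/kvm", O_RDWR);
	vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

	/* Only valid with an in-kernel irqchip that handles MSI messages. */
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_SIGNAL_MSI) <= 0)
		return 1;
	ioctl(vm_fd, KVM_CREATE_IRQCHIP, 0);

	memset(&msi, 0, sizeof(msi));	/* flags and pad must be 0 */
	msi.address_lo = 0xfee00000;	/* example: APIC MSI window, dest ID 0 */
	msi.address_hi = 0;
	msi.data = 0x31;		/* example: vector 0x31, fixed, edge */

	/* >0 on delivery, 0 if the guest blocked the MSI, -1 on error */
	ret = ioctl(vm_fd, KVM_SIGNAL_MSI, &msi);
	printf("KVM_SIGNAL_MSI returned %d\n", ret);
	return 0;
}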


Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-24 Thread Avi Kivity
On 03/29/2012 09:14 PM, Jan Kiszka wrote:
> Currently, MSI messages can only be injected to in-kernel irqchips by
> defining a corresponding IRQ route for each message. This is not only
> unhandy if the MSI messages are generated "on the fly" by user space;
> IRQ routes are also a limited resource that user space has to manage
> carefully.
>
> By providing a direct injection path, we can both avoid using up limited
> resources and simplify the necessary steps for user land.
>
>

Applied to queue (for 3.5).

Thanks for your patience.

-- 
error compiling committee.c: too many arguments to function



Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Jan Kiszka
On 2012-04-04 13:50, Avi Kivity wrote:
> On 04/04/2012 01:48 PM, Jan Kiszka wrote:
>>>
>>> I'm not so sure anymore.  Sorry about the U turn, but remind me why?  In
>>> the long term it will be slower.
>>
>> Likely not measurably slower. If you look at a message through the arch
>> glasses, you can usually spot the destination directly, specifically if
>> a message targets a single processor - no need for hashing and table
>> lookups in the common case.
> 
> Not on x86.  The APIC ID is guest-provided.

...but it is still a rather stable mapping onto the physical ID.

>  In x2apic mode it can be
> quite large.

Yes, but then you can at least hash/search/cache inside that group only,
with a smaller scope.

> 
>> In contrast, the maintenance costs for the current explicit route based
>> model are significant as we see now.
>>
> 
> You mean in amount of code in userspace?  That doesn't get solved since
> we need to keep compatibility.

We do not need to track MSI origins to correlate them with routes (with
the exception of 3 special devices: vhost-based virtio, kvm device
assignment, and vfio device assignment). We emulate this centrally with
a handful of LOC in the kvm layer, and we bypass it with the advent of
a direct injection API. Compare this to my original series that
introduced MSIRoutingCaches to cope with the current kernel API.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Avi Kivity
On 04/04/2012 01:48 PM, Jan Kiszka wrote:
> > 
> > I'm not so sure anymore.  Sorry about the U turn, but remind me why?  In
> > the long term it will be slower.
>
> Likely not measurably slower. If you look at a message through the arch
> glasses, you can usually spot the destination directly, specifically if
> a message targets a single processor - no need for hashing and table
> lookups in the common case.

Not on x86.  The APIC ID is guest-provided.  In x2apic mode it can be
quite large.

> In contrast, the maintenance costs for the current explicit route based
> model are significant as we see now.
>

You mean in amount of code in userspace?  That doesn't get solved since
we need to keep compatibility.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Jan Kiszka
On 2012-04-04 11:55, Avi Kivity wrote:
> On 04/04/2012 12:38 PM, Jan Kiszka wrote:
>> On 2012-04-04 11:36, Avi Kivity wrote:
>>> On 04/04/2012 12:22 PM, Jan Kiszka wrote:
>>>>>
>>>>>>> Until we do have this fast path we can just fill this value with zeros,
>>>>>>> so kernel patch (almost) does not need to change for this -
>>>>>>> just the header.
>>>>>>
>>>>>> Partially implemented interfaces invite breakage.
>>>>>
>>>>> Hmm true. OK scrap this idea then, it's not clear
>>>>> whether we are going to optimize this anyway.
>>>>>
>>>>
>>>> Also, the problem is that keeping that ID in userspace requires an
>>>> infrastructure like the MSIRoutingCache that I proposed originally. Not
>>>> much won w.r.t. invasiveness there.
>>>
>>> Internal qemu refactorings are not a driver for kvm interface changes.
>>
>> No, but qemu demonstrates the applicability and handiness of the kernel
>> interfaces.
> 
> True.
> 
>>>
>>>> So we should really do the routing
>>>> optimization in the kernel - one day.
>>>
>>> No, we need to make a choice:
>>>
>>> explicit handles: array lookup, more expensive setup
>>> no handles: hash lookup, more expensive, but no setup, and no artificial
>>> limits
>>
>> ...and I think we should head for option 2.
> 
> I'm not so sure anymore.  Sorry about the U turn, but remind me why?  In
> the long term it will be slower.

Likely not measurably slower. If you look at a message through the arch
glasses, you can usually spot the destination directly, specifically if
a message targets a single processor - no need for hashing and table
lookups in the common case.

In contrast, the maintenance costs for the current explicit route based
model are significant as we see now.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Avi Kivity
On 04/04/2012 12:38 PM, Jan Kiszka wrote:
> On 2012-04-04 11:36, Avi Kivity wrote:
> > On 04/04/2012 12:22 PM, Jan Kiszka wrote:
> >>>>>
> >>>>> Until we do have this fast path we can just fill this value with zeros,
> >>>>> so kernel patch (almost) does not need to change for this -
> >>>>> just the header.
> >>>>
> >>>> Partially implemented interfaces invite breakage.
> >>>
> >>> Hmm true. OK scrap this idea then, it's not clear
> >>> whether we are going to optimize this anyway.
> >>>
> >>
> >> Also, the problem is that keeping that ID in userspace requires an
> >> infrastructure like the MSIRoutingCache that I proposed originally. Not
> >> much won w.r.t. invasiveness there.
> > 
> > Internal qemu refactorings are not a driver for kvm interface changes.
>
> No, but qemu demonstrates the applicability and handiness of the kernel
> interfaces.

True.

> > 
> >> So we should really do the routing
> >> optimization in the kernel - one day.
> > 
> > No, we need to make a choice:
> > 
> > explicit handles: array lookup, more expensive setup
> > no handles: hash lookup, more expensive, but no setup, and no artificial
> > limits
>
> ...and I think we should head for option 2.

I'm not so sure anymore.  Sorry about the U turn, but remind me why?  In
the long term it will be slower.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Jan Kiszka
On 2012-04-04 11:36, Avi Kivity wrote:
> On 04/04/2012 12:22 PM, Jan Kiszka wrote:
>>>>>
>>>>> Until we do have this fast path we can just fill this value with zeros,
>>>>> so kernel patch (almost) does not need to change for this -
>>>>> just the header.
>>>>
>>>> Partially implemented interfaces invite breakage.
>>>
>>> Hmm true. OK scrap this idea then, it's not clear
>>> whether we are going to optimize this anyway.
>>>
>>
>> Also, the problem is that keeping that ID in userspace requires an
>> infrastructure like the MSIRoutingCache that I proposed originally. Not
>> much won w.r.t. invasiveness there.
> 
> Internal qemu refactorings are not a driver for kvm interface changes.

No, but qemu demonstrates the applicability and handiness of the kernel
interfaces.

> 
>> So we should really do the routing
>> optimization in the kernel - one day.
> 
> No, we need to make a choice:
> 
> explicit handles: array lookup, more expensive setup
> no handles: hash lookup, more expensive, but no setup, and no artificial
> limits

...and I think we should head for option 2.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Avi Kivity
On 04/04/2012 12:22 PM, Jan Kiszka wrote:
> > 
> >>> Until we do have this fast path we can just fill this value with zeros,
> >>> so kernel patch (almost) does not need to change for this -
> >>> just the header.
> >>
> >> Partially implemented interfaces invite breakage.
> > 
> > Hmm true. OK scrap this idea then, it's not clear
> > whether we are going to optimize this anyway.
> > 
>
> Also, the problem is that keeping that ID in userspace requires an
> infrastructure like the MSIRoutingCache that I proposed originally. Not
> much won w.r.t. invasiveness there.

Internal qemu refactorings are not a driver for kvm interface changes.

> So we should really do the routing
> optimization in the kernel - one day.

No, we need to make a choice:

explicit handles: array lookup, more expensive setup
no handles: hash lookup, more expensive, but no setup, and no artificial
limits
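
Concretely, the two lookup shapes being weighed look roughly like this -
a standalone sketch with hypothetical names and sizes, not existing KVM
code:

#include <stdint.h>
#include <stddef.h>

#define NR_ROUTES 1024			/* option 1: the artificial limit */

struct dest { int vcpu; };

static struct dest route_table[NR_ROUTES];

/* Option 1: userspace set up a route in advance and passes its handle. */
static struct dest *lookup_by_handle(uint32_t gsi)
{
	return gsi < NR_ROUTES ? &route_table[gsi] : NULL;
}

/* Option 2: no setup; hash the MSI message itself on every injection. */
#define HASH_BUCKETS 256

struct hnode {
	uint64_t addr;
	uint32_t data;
	struct dest d;
	struct hnode *next;
};
static struct hnode *buckets[HASH_BUCKETS];

static struct dest *lookup_by_message(uint64_t addr, uint32_t data)
{
	struct hnode *n = buckets[(addr ^ data) % HASH_BUCKETS];

	for (; n; n = n->next)		/* chain walk on collisions */
		if (n->addr == addr && n->data == data)
			return &n->d;
	return NULL;			/* miss: resolve slowly, then insert */
}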

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Jan Kiszka
On 2012-04-04 10:53, Michael S. Tsirkin wrote:
> On Wed, Apr 04, 2012 at 11:44:23AM +0300, Avi Kivity wrote:
>> On 04/04/2012 11:38 AM, Michael S. Tsirkin wrote:

>>>>>
>>>>> A performance note: delivering an interrupt needs to search all vcpus
>>>>> for an APIC ID match.  The previous plan was to cache (or pre-calculate)
>>>>> this lookup in the irq routing table.  Now it looks like we'll need a
>>>>> separate cache for this.
>>>>
>>>> As this is non-existent until today, we don't regress here. And it can
>>>> still be added on top later on, transparently.
>>>
>>> I always worry about hash collisions and the cost of
>>> calculating good hash functions.
>>>
>>> We could instead return an index in the cache on injection, maintain in
>>> userspace and use it for fast path on the next injection.
>>
>> Ahem, that is almost the existing routing table to a T.
>>
>>> Will make it easy to use an array index instead of a hash here,
>>> and fallback to a slower ID lookup on mismatch.
>>
>> Need a free ioctl so we can reuse IDs.
> 
> No, it could be kernel controlled, not userspace controlled. We get both
> an address and an index:
> 
> if (table[u.i].addr == u.addr && table[u.i].data == u.data) {
>   return table[u.i].id;
> }
> 
> u.i = find_lru_idx(&table);
> table[u.i].addr = u.addr;
> table[u.i].data = u.data;
> table[u.i].id = find_id(u.addr, u.data);
> return table[u.i].id;
> 
> 
>>> Until we do have this fast path we can just fill this value with zeros,
>>> so kernel patch (almost) does not need to change for this -
>>> just the header.
>>
>> Partially implemented interfaces invite breakage.
> 
> Hmm true. OK scrap this idea then, it's not clear
> whether we are going to optimize this anyway.
> 

Also, the problem is that keeping that ID in userspace requires an
infrastructure like the MSIRoutingCache that I proposed originally. Not
much won w.r.t. invasiveness there. So we should really do the routing
optimization in the kernel - one day.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Michael S. Tsirkin
On Wed, Apr 04, 2012 at 11:44:23AM +0300, Avi Kivity wrote:
> On 04/04/2012 11:38 AM, Michael S. Tsirkin wrote:
> > > 
> > > > 
> > > > A performance note: delivering an interrupt needs to search all vcpus
> > > > for an APIC ID match.  The previous plan was to cache (or pre-calculate)
> > > > this lookup in the irq routing table.  Now it looks like we'll need a
> > > > separate cache for this.
> > > 
> > > As this is non-existent until today, we don't regress here. And it can
> > > still be added on top later on, transparently.
> >
> > I always worry about hash collisions and the cost of
> > calculating good hash functions.
> >
> > We could instead return an index in the cache on injection, maintain in
> > userspace and use it for fast path on the next injection.
> 
> Ahem, that is almost the existing routing table to a T.
> 
> > Will make it easy to use an array index instead of a hash here,
> > and fallback to a slower ID lookup on mismatch.
> 
> Need a free ioctl so we can reuse IDs.

No, it could be kernel controlled, not userspace controlled. We get both
an address and an index:

if (table[u.i].addr == u.addr && table[u.i].data == u.data) {
	return table[u.i].id;
}

u.i = find_lru_idx(&table);
table[u.i].addr = u.addr;
table[u.i].data = u.data;
table[u.i].id = find_id(u.addr, u.data);
return table[u.i].id;
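
To make that sketch self-contained, here is one possible reading with the
implied pieces filled in. All names and sizes are hypothetical; find_id()
stands in for the slow scan over all vcpus for an APIC ID match.

#include <stdint.h>

#define CACHE_SIZE 16
#define NR_CPUS    64

struct entry {
	uint64_t addr;		/* MSI address */
	uint32_t data;		/* MSI data word */
	int id;			/* cached destination, e.g. a vcpu index */
	unsigned age;		/* bumped on use; lowest age = LRU victim */
};

static struct entry table[CACHE_SIZE];
static unsigned clock_tick;

/* Stand-in for the O(n) APIC ID match over all vcpus. */
static int find_id(uint64_t addr, uint32_t data)
{
	return (int)((addr >> 12) % NR_CPUS);	/* dest ID bits, simplified */
}

static int find_lru_idx(void)
{
	int i, victim = 0;

	for (i = 1; i < CACHE_SIZE; i++)
		if (table[i].age < table[victim].age)
			victim = i;
	return victim;
}

/* 'i' is the index the caller got back on the previous injection. */
static int lookup_dest(unsigned *i, uint64_t addr, uint32_t data)
{
	if (*i < CACHE_SIZE && table[*i].addr == addr && table[*i].data == data) {
		table[*i].age = ++clock_tick;	/* fast path: hint still valid */
		return table[*i].id;
	}
	*i = find_lru_idx();			/* mismatch: evict LRU entry */
	table[*i].addr = addr;
	table[*i].data = data;
	table[*i].id = find_id(addr, data);
	table[*i].age = ++clock_tick;
	return table[*i].id;
}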


> > Until we do have this fast path we can just fill this value with zeros,
> > so kernel patch (almost) does not need to change for this -
> > just the header.
> 
> Partially implemented interfaces invite breakage.

Hmm true. OK scrap this idea then, it's not clear
whether we are going to optimize this anyway.

> 
> -- 
> I have a truly marvellous patch that fixes the bug which this
> signature is too narrow to contain.


Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Avi Kivity
On 04/03/2012 08:24 PM, Jan Kiszka wrote:
> > 
> >>>
> >>> A performance note: delivering an interrupt needs to search all vcpus
> >>> for an APIC ID match.  The previous plan was to cache (or pre-calculate)
> >>> this lookup in the irq routing table.  Now it looks like we'll need a
> >>> separate cache for this.
> >>
> >> As this is non-existent until today, we don't regress here. And it can
> >> still be added on top later on, transparently.
> > 
> > Yes, it's just a note, not an objection.  The cache lookup will be
> > slower than the gsi lookup (hash table vs. array) but still O(1) vs. the
> > current O(n).
>
> If you are concerned about performance in this path, wouldn't a DMA
> interface for MSI injection be counterproductive?

Yes, it would.  The lack of coalescing reporting support is also
problematic.  I just mentioned this idea as food for thought.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Avi Kivity
On 04/04/2012 11:38 AM, Michael S. Tsirkin wrote:
> > 
> > > 
> > > A performance note: delivering an interrupt needs to search all vcpus
> > > for an APIC ID match.  The previous plan was to cache (or pre-calculate)
> > > this lookup in the irq routing table.  Now it looks like we'll need a
> > > separate cache for this.
> > 
> > As this is non-existent until today, we don't regress here. And it can
> > still be added on top later on, transparently.
>
> I always worry about hash collisions and the cost of
> calculating good hash functions.
>
> We could instead return an index in the cache on injection, maintain in
> userspace and use it for fast path on the next injection.

Ahem, that is almost the existing routing table to a T.

> Will make it easy to use an array index instead of a hash here,
> and fallback to a slower ID lookup on mismatch.

Need a free ioctl so we can reuse IDs.

> Until we do have this fast path we can just fill this value with zeros,
> so kernel patch (almost) does not need to change for this -
> just the header.

Partially implemented interfaces invite breakage.


-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-04 Thread Michael S. Tsirkin
On Tue, Apr 03, 2012 at 06:47:49PM +0200, Jan Kiszka wrote:
> On 2012-04-03 18:27, Avi Kivity wrote:
> > On 03/29/2012 09:14 PM, Jan Kiszka wrote:
> >> Currently, MSI messages can only be injected to in-kernel irqchips by
> >> defining a corresponding IRQ route for each message. This is not only
> >> unhandy if the MSI messages are generated "on the fly" by user space;
> >> IRQ routes are also a limited resource that user space has to manage
> >> carefully.
> >>
> >> By providing a direct injection path, we can both avoid using up limited
> >> resources and simplify the necessary steps for user land.
> >>
> >> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> >> index 81ff39f..ed27d1b 100644
> >> --- a/Documentation/virtual/kvm/api.txt
> >> +++ b/Documentation/virtual/kvm/api.txt
> >> @@ -1482,6 +1482,27 @@ See KVM_ASSIGN_DEV_IRQ for the data structure.  The target device is specified
> >>  by assigned_dev_id.  In the flags field, only KVM_DEV_ASSIGN_MASK_INTX is
> >>  evaluated.
> >>  
> >> +4.61 KVM_SIGNAL_MSI
> >> +
> >> +Capability: KVM_CAP_SIGNAL_MSI
> >> +Architectures: x86
> >> +Type: vm ioctl
> >> +Parameters: struct kvm_msi (in)
> >> +Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
> >> +
> >> +Directly inject a MSI message. Only valid with in-kernel irqchip that handles
> >> +MSI messages.
> >> +
> >> +struct kvm_msi {
> >> +  __u32 address_lo;
> >> +  __u32 address_hi;
> >> +  __u32 data;
> >> +  __u32 flags;
> >> +  __u8  pad[16];
> >> +};
> >> +
> >> +No flags are defined so far. The corresponding field must be 0.
> >>
> > 
> > There are two ways in which this can be generalized:
> > 
> > struct kvm_general_irq {
> >   __u32 type; // line | MSI
> >   __u32 op;  // raise/lower/trigger
> >   union {
> >  ... line;
> >  struct kvm_msi msi;
> >   }
> > };
> > 
> > so we have a single ioctl for all interrupt handling.  This allows
> > eventual removal of the line-oriented ioctls.
> > 
> > The other alternative is to have a dma interface, similar to the kvm_run
> > mmio interface but with the kernel acting as destination.  The advantage
> > here is that we can handle dma from a device to any kernel-emulated
> > device, not just the APIC MSI range.  A downside is that we can't return
> > values related to interrupt coalescing.
> 
> Due to lacking injection feedback, I'm in favor of option 1. Will have a
> look.
> 
> > 
> > A performance note: delivering an interrupt needs to search all vcpus
> > for an APIC ID match.  The previous plan was to cache (or pre-calculate)
> > this lookup in the irq routing table.  Now it looks like we'll need a
> > separate cache for this.
> 
> As this is non-existent until today, we don't regress here. And it can
> still be added on top later on, transparently.

I always worry about hash collisions and the cost of
calculating good hash functions.

We could instead return an index in the cache on injection, maintain in
userspace and use it for fast path on the next injection.
Will make it easy to use an array index instead of a hash here,
and fallback to a slower ID lookup on mismatch.

Until we do have this fast path we can just fill this value with zeros,
so kernel patch (almost) does not need to change for this -
just the header.

> > 
> > (yes, I said on the call I don't anticipate objections but preparing to
> > apply a patch always triggers more critical thinking)
> > 
> 
> Well, we make progress, though slower than I was hoping. :)
> 
> Jan
> 
> -- 
> Siemens AG, Corporate Technology, CT T DE IT 1
> Corporate Competence Center Embedded Linux


Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-03 Thread Jan Kiszka
On 2012-04-03 18:54, Avi Kivity wrote:
> On 04/03/2012 07:47 PM, Jan Kiszka wrote:
>>>
>>> so we have a single ioctl for all interrupt handling.  This allows
>>> eventual removal of the line-oriented ioctls.
>>>
>>> The other alternative is to have a dma interface, similar to the kvm_run
>>> mmio interface but with the kernel acting as destination.  The advantage
>>> here is that we can handle dma from a device to any kernel-emulated
>>> device, not just the APIC MSI range.  A downside is that we can't return
>>> values related to interrupt coalescing.
>>
>> Due to lacking injection feedback, I'm in favor of option 1. Will have a
>> look.
> 
> I wonder if we can create a side channel for it.  Lack of a kernel DMA
> API is a hole in the current code, though we haven't been bitten by it
> yet.  An example is a guest that is swapping its own page tables; right
> now the shadow mmu doesn't notice those writes (when the page tables are
> swapped in) and will deliver incorrect results.  Of course no guest does
> that, so it doesn't happen in practice.
> 
>>>
>>> A performance note: delivering an interrupt needs to search all vcpus
>>> for an APIC ID match.  The previous plan was to cache (or pre-calculate)
>>> this lookup in the irq routing table.  Now it looks like we'll need a
>>> separate cache for this.
>>
>> As this is non-existent until today, we don't regress here. And it can
>> still be added on top later on, transparently.
> 
> Yes, it's just a note, not an objection.  The cache lookup will be
> slower than the gsi lookup (hash table vs. array) but still O(1) vs. the
> current O(n).

If you are concerned about performance in this path, wouldn't a DMA
interface for MSI injection be counterproductive?

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-03 Thread Avi Kivity
On 04/03/2012 07:47 PM, Jan Kiszka wrote:
> > 
> > so we have a single ioctl for all interrupt handling.  This allows
> > eventual removal of the line-oriented ioctls.
> > 
> > The other alternative is to have a dma interface, similar to the kvm_run
> > mmio interface but with the kernel acting as destination.  The advantage
> > here is that we can handle dma from a device to any kernel-emulated
> > device, not just the APIC MSI range.  A downside is that we can't return
> > values related to interrupt coalescing.
>
> Due to lacking injection feedback, I'm in favor of option 1. Will have a
> look.

I wonder if we can create a side channel for it.  Lack of a kernel DMA
API is a hole in the current code, though we haven't been bitten by it
yet.  An example is a guest that is swapping its own page tables; right
now the shadow mmu doesn't notice those writes (when the page tables are
swapped in) and will deliver incorrect results.  Of course no guest does
that, so it doesn't happen in practice.

> > 
> > A performance note: delivering an interrupt needs to search all vcpus
> > for an APIC ID match.  The previous plan was to cache (or pre-calculate)
> > this lookup in the irq routing table.  Now it looks like we'll need a
> > separate cache for this.
>
> As this is non-existent until today, we don't regress here. And it can
> still be added on top later on, transparently.

Yes, it's just a note, not an objection.  The cache lookup will be
slower than the gsi lookup (hash table vs. array) but still O(1) vs. the
current O(n).

-- 
error compiling committee.c: too many arguments to function



Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-03 Thread Jan Kiszka
On 2012-04-03 18:27, Avi Kivity wrote:
> On 03/29/2012 09:14 PM, Jan Kiszka wrote:
>> Currently, MSI messages can only be injected to in-kernel irqchips by
>> defining a corresponding IRQ route for each message. This is not only
>> unhandy if the MSI messages are generated "on the fly" by user space;
>> IRQ routes are also a limited resource that user space has to manage
>> carefully.
>>
>> By providing a direct injection path, we can both avoid using up limited
>> resources and simplify the necessary steps for user land.
>>
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 81ff39f..ed27d1b 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -1482,6 +1482,27 @@ See KVM_ASSIGN_DEV_IRQ for the data structure.  The target device is specified
>>  by assigned_dev_id.  In the flags field, only KVM_DEV_ASSIGN_MASK_INTX is
>>  evaluated.
>>  
>> +4.61 KVM_SIGNAL_MSI
>> +
>> +Capability: KVM_CAP_SIGNAL_MSI
>> +Architectures: x86
>> +Type: vm ioctl
>> +Parameters: struct kvm_msi (in)
>> +Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
>> +
>> +Directly inject a MSI message. Only valid with in-kernel irqchip that handles
>> +MSI messages.
>> +
>> +struct kvm_msi {
>> +__u32 address_lo;
>> +__u32 address_hi;
>> +__u32 data;
>> +__u32 flags;
>> +__u8  pad[16];
>> +};
>> +
>> +No flags are defined so far. The corresponding field must be 0.
>>
> 
> There are two ways in which this can be generalized:
> 
> struct kvm_general_irq {
>   __u32 type; // line | MSI
>   __u32 op;  // raise/lower/trigger
>   union {
>  ... line;
>  struct kvm_msi msi;
>   }
> };
> 
> so we have a single ioctl for all interrupt handling.  This allows
> eventual removal of the line-oriented ioctls.
> 
> The other alternative is to have a dma interface, similar to the kvm_run
> mmio interface but with the kernel acting as destination.  The advantage
> here is that we can handle dma from a device to any kernel-emulated
> device, not just the APIC MSI range.  A downside is that we can't return
> values related to interrupt coalescing.

Due to lacking injection feedback, I'm in favor of option 1. Will have a
look.

> 
> A performance note: delivering an interrupt needs to search all vcpus
> for an APIC ID match.  The previous plan was to cache (or pre-calculate)
> this lookup in the irq routing table.  Now it looks like we'll need a
> separate cache for this.

As this is non-existent until today, we don't regress here. And it can
still be added on top later on, transparently.

> 
> (yes, I said on the call I don't anticipate objections but preparing to
> apply a patch always triggers more critical thinking)
> 

Well, we make progress, though slower than I was hoping. :)

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-04-03 Thread Avi Kivity
On 03/29/2012 09:14 PM, Jan Kiszka wrote:
> Currently, MSI messages can only be injected to in-kernel irqchips by
> defining a corresponding IRQ route for each message. This is not only
> unhandy if the MSI messages are generated "on the fly" by user space;
> IRQ routes are also a limited resource that user space has to manage
> carefully.
>
> By providing a direct injection path, we can both avoid using up limited
> resources and simplify the necessary steps for user land.
>
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 81ff39f..ed27d1b 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -1482,6 +1482,27 @@ See KVM_ASSIGN_DEV_IRQ for the data structure.  The target device is specified
>  by assigned_dev_id.  In the flags field, only KVM_DEV_ASSIGN_MASK_INTX is
>  evaluated.
>  
> +4.61 KVM_SIGNAL_MSI
> +
> +Capability: KVM_CAP_SIGNAL_MSI
> +Architectures: x86
> +Type: vm ioctl
> +Parameters: struct kvm_msi (in)
> +Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
> +
> +Directly inject a MSI message. Only valid with in-kernel irqchip that handles
> +MSI messages.
> +
> +struct kvm_msi {
> + __u32 address_lo;
> + __u32 address_hi;
> + __u32 data;
> + __u32 flags;
> + __u8  pad[16];
> +};
> +
> +No flags are defined so far. The corresponding field must be 0.
>

There are two ways in which this can be generalized:

struct kvm_general_irq {
  __u32 type; // line | MSI
  __u32 op;  // raise/lower/trigger
  union {
 ... line;
 struct kvm_msi msi;
  }
};

so we have a single ioctl for all interrupt handling.  This allows
eventual removal of the line-oriented ioctls.
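
As a sketch of how such a unified call might look from userspace - only
struct kvm_msi comes from the patch; KVM_GENERAL_IRQ, its ioctl number,
and the type/op constants below are made up purely for illustration:

#include <sys/ioctl.h>
#include <linux/kvm.h>

enum { KVM_IRQ_TYPE_LINE, KVM_IRQ_TYPE_MSI };
enum { KVM_IRQ_OP_RAISE, KVM_IRQ_OP_LOWER, KVM_IRQ_OP_TRIGGER };

struct kvm_general_irq {
	__u32 type;
	__u32 op;
	union {
		struct { __u32 irq; __u32 level; } line;
		struct kvm_msi msi;
	};
};

/* Hypothetical ioctl number, chosen only for this sketch. */
#define KVM_GENERAL_IRQ _IOW(KVMIO, 0xa6, struct kvm_general_irq)

static int inject_msi(int vm_fd, const struct kvm_msi *msi)
{
	struct kvm_general_irq irq = {
		.type = KVM_IRQ_TYPE_MSI,
		.op = KVM_IRQ_OP_TRIGGER,	/* MSIs are edge events */
		.msi = *msi,
	};

	return ioctl(vm_fd, KVM_GENERAL_IRQ, &irq);
}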

The other alternative is to have a dma interface, similar to the kvm_run
mmio interface but with the kernel acting as destination.  The advantage
here is that we can handle dma from a device to any kernel-emulated
device, not just the APIC MSI range.  A downside is that we can't return
values related to interrupt coalescing.

A performance note: delivering an interrupt needs to search all vcpus
for an APIC ID match.  The previous plan was to cache (or pre-calculate)
this lookup in the irq routing table.  Now it looks like we'll need a
separate cache for this.
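
Spelled out, the O(n) step described here looks roughly like the
following - a simplified stand-in, not the actual
kvm_irq_delivery_to_apic() code, which also handles logical mode,
broadcast, and lowest-priority delivery:

#include <stdint.h>
#include <stddef.h>

struct vcpu {
	uint8_t apic_id;	/* guest-programmable on x86 */
};

/* Linear scan over all vcpus for a physical-mode APIC ID match. */
static struct vcpu *find_dest(struct vcpu *vcpus, int nr_vcpus, uint8_t dest)
{
	int i;

	for (i = 0; i < nr_vcpus; i++)	/* O(n) per delivered interrupt */
		if (vcpus[i].apic_id == dest)
			return &vcpus[i];
	return NULL;			/* no match: nothing to deliver */
}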

(yes, I said on the call I don't anticipate objections but preparing to
apply a patch always triggers more critical thinking)

-- 
error compiling committee.c: too many arguments to function



Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-03-30 Thread Michael S. Tsirkin
On Fri, Mar 30, 2012 at 09:45:35AM +0200, Jan Kiszka wrote:
> On 2012-03-29 21:41, Michael S. Tsirkin wrote:
> > On Thu, Mar 29, 2012 at 09:14:12PM +0200, Jan Kiszka wrote:
> >> Currently, MSI messages can only be injected to in-kernel irqchips by
> >> defining a corresponding IRQ route for each message. This is not only
> >> unhandy if the MSI messages are generated "on the fly" by user space;
> >> IRQ routes are also a limited resource that user space has to manage
> >> carefully.
> >>
> >> By providing a direct injection path, we can both avoid using up limited
> >> resources and simplify the necessary steps for user land.
> >>
> >> Signed-off-by: Jan Kiszka 
> >> ---
> >>
> >> Changes in v4:
> >>  - Fix the build by factoring out kvm_send_userspace_msi:
> >>irqchip_in_kernel is not generically available. But abstracting it
> >>for all arch is tricky and therefore left to the poor people who have
> >>to introduce non-x86 irqchip support to this x86-focused corner.
> >>
> >> Lesson (probably not) learned: Never underestimate the complexity of
> >> trivial changes.
> >>
> >>  Documentation/virtual/kvm/api.txt |   21 +
> >>  arch/x86/kvm/Kconfig  |1 +
> >>  include/linux/kvm.h   |   11 +++
> >>  include/linux/kvm_host.h  |2 ++
> >>  virt/kvm/Kconfig  |3 +++
> >>  virt/kvm/irq_comm.c   |   14 ++
> >>  virt/kvm/kvm_main.c   |   14 ++
> >>  7 files changed, 66 insertions(+), 0 deletions(-)
> >>
> >> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> >> index 81ff39f..ed27d1b 100644
> >> --- a/Documentation/virtual/kvm/api.txt
> >> +++ b/Documentation/virtual/kvm/api.txt
> >> @@ -1482,6 +1482,27 @@ See KVM_ASSIGN_DEV_IRQ for the data structure.  The target device is specified
> >>  by assigned_dev_id.  In the flags field, only KVM_DEV_ASSIGN_MASK_INTX is
> >>  evaluated.
> >>  
> >> +4.61 KVM_SIGNAL_MSI
> >> +
> >> +Capability: KVM_CAP_SIGNAL_MSI
> >> +Architectures: x86
> >> +Type: vm ioctl
> >> +Parameters: struct kvm_msi (in)
> >> +Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
> >> +
> >> +Directly inject a MSI message. Only valid with in-kernel irqchip that handles
> >> +MSI messages.
> >> +
> >> +struct kvm_msi {
> >> +  __u32 address_lo;
> >> +  __u32 address_hi;
> >> +  __u32 data;
> >> +  __u32 flags;
> >> +  __u8  pad[16];
> >> +};
> >> +
> >> +No flags are defined so far. The corresponding field must be 0.
> >> +
> >>  4.62 KVM_CREATE_SPAPR_TCE
> >>  
> >>  Capability: KVM_CAP_SPAPR_TCE
> >> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> >> index 1a7fe86..a28f338 100644
> >> --- a/arch/x86/kvm/Kconfig
> >> +++ b/arch/x86/kvm/Kconfig
> >> @@ -36,6 +36,7 @@ config KVM
> >>select TASKSTATS
> >>select TASK_DELAY_ACCT
> >>select PERF_EVENTS
> >> +  select HAVE_KVM_MSI
> >>---help---
> >>  Support hosting fully virtualized guest machines using hardware
> >>  virtualization extensions.  You will need a fairly recent
> >> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> >> index 7a9dd4b..225b452 100644
> >> --- a/include/linux/kvm.h
> >> +++ b/include/linux/kvm.h
> >> @@ -590,6 +590,7 @@ struct kvm_ppc_pvinfo {
> >>  #define KVM_CAP_SYNC_REGS 74
> >>  #define KVM_CAP_PCI_2_3 75
> >>  #define KVM_CAP_KVMCLOCK_CTRL 76
> >> +#define KVM_CAP_SIGNAL_MSI 77
> >>  
> >>  #ifdef KVM_CAP_IRQ_ROUTING
> >>  
> >> @@ -715,6 +716,14 @@ struct kvm_one_reg {
> >>__u64 addr;
> >>  };
> >>  
> >> +struct kvm_msi {
> >> +  __u32 address_lo;
> >> +  __u32 address_hi;
> >> +  __u32 data;
> >> +  __u32 flags;
> >> +  __u8  pad[16];
> >> +};
> >> +
> >>  /*
> >>   * ioctls for VM fds
> >>   */
> >> @@ -789,6 +798,8 @@ struct kvm_s390_ucas_mapping {
> >>  /* Available with KVM_CAP_PCI_2_3 */
> >>  #define KVM_ASSIGN_SET_INTX_MASK  _IOW(KVMIO,  0xa4, \
> >>   struct kvm_assigned_pci_dev)
> >> +/* Available with KVM_CAP_SIGNAL_MSI */
> >> +#define KVM_SIGNAL_MSI_IOW(KVMIO,  0xa5, struct kvm_msi)
> >>  
> >>  /*
> >>   * ioctls for vcpu fds
> >> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> >> index 6cf158c..35c69d6 100644
> >> --- a/include/linux/kvm_host.h
> >> +++ b/include/linux/kvm_host.h
> >> @@ -774,6 +774,8 @@ int kvm_set_irq_routing(struct kvm *kvm,
> >>unsigned flags);
> >>  void kvm_free_irq_routing(struct kvm *kvm);
> >>  
> >> +int kvm_send_userspace_msi(struct kvm *kvm, struct kvm_msi *msi);
> >> +
> >>  #else
> >>  
> >>  static inline void kvm_free_irq_routing(struct kvm *kvm) {}
> >> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> >> index f63ccb0..28694f4 100644
> >> --- a/virt/kvm/Kconfig
> >> +++ b/virt/kvm/Kconfig
> >> @@ -18,3 +18,6 @@ config KVM_MMIO
> >>  
> >>  config KVM_ASYNC_PF
> >> bool
> >> +
> >> +config HAVE_KVM_MSI
> >> +   bool
> >> diff --g

Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-03-30 Thread Jan Kiszka
On 2012-03-29 21:41, Michael S. Tsirkin wrote:
> On Thu, Mar 29, 2012 at 09:14:12PM +0200, Jan Kiszka wrote:
>> Currently, MSI messages can only be injected to in-kernel irqchips by
>> defining a corresponding IRQ route for each message. This is not only
>> unhandy if the MSI messages are generated "on the fly" by user space;
>> IRQ routes are also a limited resource that user space has to manage
>> carefully.
>>
>> By providing a direct injection path, we can both avoid using up limited
>> resources and simplify the necessary steps for user land.
>>
>> Signed-off-by: Jan Kiszka 
>> ---
>>
>> Changes in v4:
>>  - Fix the build by factoring out kvm_send_userspace_msi:
>>irqchip_in_kernel is not generically available. But abstracting it
>>for all arch is tricky and therefore left to the poor people who have
>>to introduce non-x86 irqchip support to this x86-focused corner.
>>
>> Lesson (probably not) learned: Never underestimate the complexity of
>> trivial changes.
>>
>>  Documentation/virtual/kvm/api.txt |   21 +
>>  arch/x86/kvm/Kconfig  |1 +
>>  include/linux/kvm.h   |   11 +++
>>  include/linux/kvm_host.h  |2 ++
>>  virt/kvm/Kconfig  |3 +++
>>  virt/kvm/irq_comm.c   |   14 ++
>>  virt/kvm/kvm_main.c   |   14 ++
>>  7 files changed, 66 insertions(+), 0 deletions(-)
>>
>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 81ff39f..ed27d1b 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -1482,6 +1482,27 @@ See KVM_ASSIGN_DEV_IRQ for the data structure.  The target device is specified
>>  by assigned_dev_id.  In the flags field, only KVM_DEV_ASSIGN_MASK_INTX is
>>  evaluated.
>>  
>> +4.61 KVM_SIGNAL_MSI
>> +
>> +Capability: KVM_CAP_SIGNAL_MSI
>> +Architectures: x86
>> +Type: vm ioctl
>> +Parameters: struct kvm_msi (in)
>> +Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
>> +
>> +Directly inject a MSI message. Only valid with in-kernel irqchip that handles
>> +MSI messages.
>> +
>> +struct kvm_msi {
>> +__u32 address_lo;
>> +__u32 address_hi;
>> +__u32 data;
>> +__u32 flags;
>> +__u8  pad[16];
>> +};
>> +
>> +No flags are defined so far. The corresponding field must be 0.
>> +
>>  4.62 KVM_CREATE_SPAPR_TCE
>>  
>>  Capability: KVM_CAP_SPAPR_TCE
>> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
>> index 1a7fe86..a28f338 100644
>> --- a/arch/x86/kvm/Kconfig
>> +++ b/arch/x86/kvm/Kconfig
>> @@ -36,6 +36,7 @@ config KVM
>>  select TASKSTATS
>>  select TASK_DELAY_ACCT
>>  select PERF_EVENTS
>> +select HAVE_KVM_MSI
>>  ---help---
>>Support hosting fully virtualized guest machines using hardware
>>virtualization extensions.  You will need a fairly recent
>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>> index 7a9dd4b..225b452 100644
>> --- a/include/linux/kvm.h
>> +++ b/include/linux/kvm.h
>> @@ -590,6 +590,7 @@ struct kvm_ppc_pvinfo {
>>  #define KVM_CAP_SYNC_REGS 74
>>  #define KVM_CAP_PCI_2_3 75
>>  #define KVM_CAP_KVMCLOCK_CTRL 76
>> +#define KVM_CAP_SIGNAL_MSI 77
>>  
>>  #ifdef KVM_CAP_IRQ_ROUTING
>>  
>> @@ -715,6 +716,14 @@ struct kvm_one_reg {
>>  __u64 addr;
>>  };
>>  
>> +struct kvm_msi {
>> +__u32 address_lo;
>> +__u32 address_hi;
>> +__u32 data;
>> +__u32 flags;
>> +__u8  pad[16];
>> +};
>> +
>>  /*
>>   * ioctls for VM fds
>>   */
>> @@ -789,6 +798,8 @@ struct kvm_s390_ucas_mapping {
>>  /* Available with KVM_CAP_PCI_2_3 */
>>  #define KVM_ASSIGN_SET_INTX_MASK  _IOW(KVMIO,  0xa4, \
>> struct kvm_assigned_pci_dev)
>> +/* Available with KVM_CAP_SIGNAL_MSI */
>> +#define KVM_SIGNAL_MSI_IOW(KVMIO,  0xa5, struct kvm_msi)
>>  
>>  /*
>>   * ioctls for vcpu fds
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index 6cf158c..35c69d6 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -774,6 +774,8 @@ int kvm_set_irq_routing(struct kvm *kvm,
>>  unsigned flags);
>>  void kvm_free_irq_routing(struct kvm *kvm);
>>  
>> +int kvm_send_userspace_msi(struct kvm *kvm, struct kvm_msi *msi);
>> +
>>  #else
>>  
>>  static inline void kvm_free_irq_routing(struct kvm *kvm) {}
>> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
>> index f63ccb0..28694f4 100644
>> --- a/virt/kvm/Kconfig
>> +++ b/virt/kvm/Kconfig
>> @@ -18,3 +18,6 @@ config KVM_MMIO
>>  
>>  config KVM_ASYNC_PF
>> bool
>> +
>> +config HAVE_KVM_MSI
>> +   bool
>> diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
>> index 9f614b4..a6a0365 100644
>> --- a/virt/kvm/irq_comm.c
>> +++ b/virt/kvm/irq_comm.c
>> @@ -138,6 +138,20 @@ int kvm_set_msi(struct kvm_kernel_irq_routing_entry *e,
>>  return kvm_irq_delivery_to_apic(kvm, NULL, &irq);
>>  }
>>  
>> +int kvm_s

Re: [PATCH v4] KVM: Introduce direct MSI message injection for in-kernel irqchips

2012-03-29 Thread Michael S. Tsirkin
On Thu, Mar 29, 2012 at 09:14:12PM +0200, Jan Kiszka wrote:
> Currently, MSI messages can only be injected to in-kernel irqchips by
> defining a corresponding IRQ route for each message. This is not only
> unhandy if the MSI messages are generated "on the fly" by user space;
> IRQ routes are also a limited resource that user space has to manage
> carefully.
> 
> By providing a direct injection path, we can both avoid using up limited
> resources and simplify the necessary steps for user land.
> 
> Signed-off-by: Jan Kiszka 
> ---
> 
> Changes in v4:
>  - Fix the build by factoring out kvm_send_userspace_msi:
>irqchip_in_kernel is not generically available. But abstracting it
>for all arch is tricky and therefore left to the poor people who have
>to introduce non-x86 irqchip support to this x86-focused corner.
> 
> Lesson (probably not) learned: Never underestimate the complexity of
> trivial changes.
> 
>  Documentation/virtual/kvm/api.txt |   21 +
>  arch/x86/kvm/Kconfig  |1 +
>  include/linux/kvm.h   |   11 +++
>  include/linux/kvm_host.h  |2 ++
>  virt/kvm/Kconfig  |3 +++
>  virt/kvm/irq_comm.c   |   14 ++
>  virt/kvm/kvm_main.c   |   14 ++
>  7 files changed, 66 insertions(+), 0 deletions(-)
> 
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 81ff39f..ed27d1b 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -1482,6 +1482,27 @@ See KVM_ASSIGN_DEV_IRQ for the data structure.  The target device is specified
>  by assigned_dev_id.  In the flags field, only KVM_DEV_ASSIGN_MASK_INTX is
>  evaluated.
>  
> +4.61 KVM_SIGNAL_MSI
> +
> +Capability: KVM_CAP_SIGNAL_MSI
> +Architectures: x86
> +Type: vm ioctl
> +Parameters: struct kvm_msi (in)
> +Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
> +
> +Directly inject a MSI message. Only valid with in-kernel irqchip that handles
> +MSI messages.
> +
> +struct kvm_msi {
> + __u32 address_lo;
> + __u32 address_hi;
> + __u32 data;
> + __u32 flags;
> + __u8  pad[16];
> +};
> +
> +No flags are defined so far. The corresponding field must be 0.
> +
>  4.62 KVM_CREATE_SPAPR_TCE
>  
>  Capability: KVM_CAP_SPAPR_TCE
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index 1a7fe86..a28f338 100644
> --- a/arch/x86/kvm/Kconfig
> +++ b/arch/x86/kvm/Kconfig
> @@ -36,6 +36,7 @@ config KVM
>   select TASKSTATS
>   select TASK_DELAY_ACCT
>   select PERF_EVENTS
> + select HAVE_KVM_MSI
>   ---help---
> Support hosting fully virtualized guest machines using hardware
> virtualization extensions.  You will need a fairly recent
> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> index 7a9dd4b..225b452 100644
> --- a/include/linux/kvm.h
> +++ b/include/linux/kvm.h
> @@ -590,6 +590,7 @@ struct kvm_ppc_pvinfo {
>  #define KVM_CAP_SYNC_REGS 74
>  #define KVM_CAP_PCI_2_3 75
>  #define KVM_CAP_KVMCLOCK_CTRL 76
> +#define KVM_CAP_SIGNAL_MSI 77
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
> @@ -715,6 +716,14 @@ struct kvm_one_reg {
>   __u64 addr;
>  };
>  
> +struct kvm_msi {
> + __u32 address_lo;
> + __u32 address_hi;
> + __u32 data;
> + __u32 flags;
> + __u8  pad[16];
> +};
> +
>  /*
>   * ioctls for VM fds
>   */
> @@ -789,6 +798,8 @@ struct kvm_s390_ucas_mapping {
>  /* Available with KVM_CAP_PCI_2_3 */
>  #define KVM_ASSIGN_SET_INTX_MASK  _IOW(KVMIO,  0xa4, \
>  struct kvm_assigned_pci_dev)
> +/* Available with KVM_CAP_SIGNAL_MSI */
> +#define KVM_SIGNAL_MSI_IOW(KVMIO,  0xa5, struct kvm_msi)
>  
>  /*
>   * ioctls for vcpu fds
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 6cf158c..35c69d6 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -774,6 +774,8 @@ int kvm_set_irq_routing(struct kvm *kvm,
>   unsigned flags);
>  void kvm_free_irq_routing(struct kvm *kvm);
>  
> +int kvm_send_userspace_msi(struct kvm *kvm, struct kvm_msi *msi);
> +
>  #else
>  
>  static inline void kvm_free_irq_routing(struct kvm *kvm) {}
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index f63ccb0..28694f4 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -18,3 +18,6 @@ config KVM_MMIO
>  
>  config KVM_ASYNC_PF
> bool
> +
> +config HAVE_KVM_MSI
> +   bool
> diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
> index 9f614b4..a6a0365 100644
> --- a/virt/kvm/irq_comm.c
> +++ b/virt/kvm/irq_comm.c
> @@ -138,6 +138,20 @@ int kvm_set_msi(struct kvm_kernel_irq_routing_entry *e,
>   return kvm_irq_delivery_to_apic(kvm, NULL, &irq);
>  }
>  
> +int kvm_send_userspace_msi(struct kvm *kvm, struct kvm_msi *msi)
> +{
> + struct kvm_kernel_irq_routing_entry route;
> +
> + if (!irqchip_in_kernel(kvm) || msi->flags != 0)