Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-11 Thread malc
On Thu, 11 Mar 2010, Nick Piggin wrote:

> On Thu, Mar 11, 2010 at 03:10:47AM +, Jamie Lokier wrote:
> > Paul Brook wrote:
> > > > > In a cross environment that becomes extremely hairy.  For example the 
> > > > > x86
> > > > > architecture effectively has an implicit write barrier before every
> > > > > store, and an implicit read barrier before every load.
> > > > 
> > > > Btw, x86 doesn't have any implicit barriers due to ordinary loads.
> > > > Only stores and atomics have implicit barriers, afaik.
> > > 
> > > As of March 2009[1] Intel guarantees that memory reads occur in
> > > order (they may only be reordered relative to writes). It appears
> > > AMD do not provide this guarantee, which could be an interesting
> > > problem for heterogeneous migration..
> > 
> > (Summary: At least on AMD64, it does too, for normal accesses to
> > naturally aligned addresses in write-back cacheable memory.)
> > 
> > Oh, that's interesting.  Way back when I guess we knew writes were in
> > order and it wasn't explicit that reads were, hence smp_rmb() using a
> > locked atomic.
> > 
> > Here is a post by Nick Piggin from 2007 with links to Intel _and_ AMD
> > documents asserting that reads to cacheable memory are in program order:
> > 
> > http://lkml.org/lkml/2007/9/28/212
> > Subject: [patch] x86: improved memory barrier implementation
> > 
> > Links to documents:
> > 
> > http://developer.intel.com/products/processor/manuals/318147.pdf
> > 
> > http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/24593.pdf
> > 
> > The Intel link doesn't work any more, but the AMD one does.
> 
> It might have been merged into their development manual now.

It was (http://www.intel.com/products/processor/manuals/):

Intel® 64 Architecture Memory Ordering White Paper

This document has been merged into Volume 3A of Intel 64 and IA-32 
Architectures Software Developer's Manual.

[..snip..]

-- 
mailto:av1...@comtv.ru

Re: [PATCH] Inter-VM shared memory PCI device

2010-03-11 Thread Arnd Bergmann
On Thursday 11 March 2010, Avi Kivity wrote:
> >> That would be much slower.  The current scheme allows for an
> >> ioeventfd/irqfd short circuit which allows one guest to interrupt
> >> another without involving their qemus at all.
> >>  
> > Yes, the serial line approach would be much slower, but my point
> > was that we can do signaling over "something else", which could
> > well be something building on irqfd.
> 
> Well, we could, but it seems to make things more complicated?  A card 
> with shared memory, and another card with an interrupt interconnect?

Yes, I agree that it's more complicated if you have a specific application
in mind that needs one of each, and most use cases that want shared memory
also need an interrupt mechanism, but it's not always the case:

- You could use ext2 with -o xip on a private mapping of a shared host file
in order to share the page cache. This does not need any interrupts.

- If you have more than two parties sharing the segment, there are different
ways to communicate, e.g. always send an interrupt to all others, or have
dedicated point-to-point connections. There is also some complexity in
trying to cover all possible cases in one driver.

I have to say that I also really like the idea of futex over shared memory,
which could potentially make this all a lot simpler. I don't know how this
would best be implemented on the host though.
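For readers unfamiliar with the idea, here is a minimal sketch of what futex-over-shared-memory looks like from a process's point of view, using the plain Linux futex syscall on a 32-bit word inside a shared mapping.  The helper names are illustrative, not a proposed interface, and the open question above (how the host would wire this up across VM boundaries) is not addressed here.

/* Sketch only: futex wait/wake on a word living in a shared mapping.
 * Assumes Linux; error handling trimmed for brevity. */
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>

static int futex(uint32_t *uaddr, int op, uint32_t val)
{
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

/* Block until *flag changes away from 'expected'. */
static void shared_wait(uint32_t *flag, uint32_t expected)
{
    while (__sync_fetch_and_add(flag, 0) == expected)
        futex(flag, FUTEX_WAIT, expected);   /* returns on wake or value change */
}

/* Publish a new value and wake one waiter. */
static void shared_post(uint32_t *flag, uint32_t value)
{
    __sync_lock_test_and_set(flag, value);   /* atomic store with barrier */
    futex(flag, FUTEX_WAKE, 1);
}

Because non-private futexes hash on the underlying physical page, this works between processes (and, in principle, between guests) as long as the word really lives in a MAP_SHARED region.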

Arnd


Re: [PATCH] Inter-VM shared memory PCI device

2010-03-11 Thread Avi Kivity

On 03/11/2010 02:57 PM, Arnd Bergmann wrote:
> On Thursday 11 March 2010, Avi Kivity wrote:
> > > A totally different option that avoids this whole problem would
> > > be to separate the signalling from the shared memory, making the
> > > PCI shared memory device a trivial device with a single memory BAR,
> > > and using a higher-level concept like a virtio-based
> > > serial line for the actual signalling.
> >
> > That would be much slower.  The current scheme allows for an
> > ioeventfd/irqfd short circuit which allows one guest to interrupt
> > another without involving their qemus at all.
>
> Yes, the serial line approach would be much slower, but my point
> was that we can do signaling over "something else", which could
> well be something building on irqfd.

Well, we could, but it seems to make things more complicated?  A card 
with shared memory, and another card with an interrupt interconnect?


--
error compiling committee.c: too many arguments to function



Re: [PATCH] Inter-VM shared memory PCI device

2010-03-11 Thread Arnd Bergmann
On Thursday 11 March 2010, Avi Kivity wrote:
> > A totally different option that avoids this whole problem would
> > be to separate the signalling from the shared memory, making the
> > PCI shared memory device a trivial device with a single memory BAR,
> > and using a higher-level concept like a virtio-based
> > serial line for the actual signalling.
> >
> 
> That would be much slower.  The current scheme allows for an 
> ioeventfd/irqfd short circuit which allows one guest to interrupt 
> another without involving their qemus at all.

Yes, the serial line approach would be much slower, but my point
was that we can do signaling over "something else", which could
well be something building on irqfd.

Arnd


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-11 Thread Paul Brook
> On 03/10/2010 07:41 PM, Paul Brook wrote:
> >>> You're much better off using a bulk-data transfer API that relaxes
> >>> coherency requirements.  IOW, shared memory doesn't make sense for TCG
> >>
> >> Rather, tcg doesn't make sense for shared memory smp.  But we knew that
> >> already.
> >
> > I think TCG SMP is a hard, but soluble problem, especially when you're
> > running guests used to coping with NUMA.
> 
> Do you mean by using a per-cpu tlb?  These kinds of solutions are
> generally slow, but tcg's slowness may mask this out.

Yes.

> > TCG interacting with third parties via shared memory is probably never
> > going to make sense.
> 
> The third party in this case is qemu.

Maybe. But it's a different instance of qemu, and once this feature exists I 
bet people will use it for other things.

Paul


Re: [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Avi Kivity

On 03/10/2010 04:04 PM, Arnd Bergmann wrote:
> On Tuesday 09 March 2010, Cam Macdonell wrote:
> > > We could make the masking in RAM, not in registers, like virtio, which would
> > > require no exits.  It would then be part of the application specific
> > > protocol and out of scope of this spec.
> >
> > This kind of implementation would be possible now since with UIO it's
> > up to the application whether to mask interrupts or not and what
> > interrupts mean.  We could leave the interrupt mask register for those
> > who want that behaviour.  Arnd's idea would remove the need for the
> > Doorbell and Mask, but we will always need at least one MMIO register
> > to send whatever interrupts we do send.
>
> You'd also have to be very careful if the notification is in RAM to
> avoid races between one guest triggering an interrupt and another
> guest clearing its interrupt mask.
>
> A totally different option that avoids this whole problem would
> be to separate the signalling from the shared memory, making the
> PCI shared memory device a trivial device with a single memory BAR,
> and using a higher-level concept like a virtio-based
> serial line for the actual signalling.

That would be much slower.  The current scheme allows for an 
ioeventfd/irqfd short circuit which allows one guest to interrupt 
another without involving their qemus at all.
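As background for the ioeventfd/irqfd point: an eventfd is just a kernel counter exposed as a file descriptor, so one side signals it with a write and another side blocks on a read.  A minimal host-side sketch follows; it is illustrative only, since the real ioeventfd/irqfd path registers these descriptors with the kvm module so neither qemu has to touch them on the fast path.

/* Sketch: signalling through an eventfd, the primitive behind
 * ioeventfd/irqfd.  Illustrative only. */
#include <sys/eventfd.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int efd = eventfd(0, 0);          /* counter starts at 0 */
    if (efd < 0)
        return 1;

    uint64_t one = 1;
    write(efd, &one, sizeof(one));    /* the "kick": what an ioeventfd write becomes */

    uint64_t count;
    read(efd, &count, sizeof(count)); /* the "interrupt": consumes the pending count */
    printf("received %llu event(s)\n", (unsigned long long)count);

    close(efd);
    return 0;
}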


--
error compiling committee.c: too many arguments to function



Re: [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Avi Kivity

On 03/10/2010 06:36 PM, Cam Macdonell wrote:
> On Wed, Mar 10, 2010 at 2:21 AM, Avi Kivity  wrote:
> > On 03/09/2010 08:34 PM, Cam Macdonell wrote:
> > > On Tue, Mar 9, 2010 at 10:28 AM, Avi Kivity  wrote:
> > > > On 03/09/2010 05:27 PM, Cam Macdonell wrote:
> > > > > > >  Registers are used
> > > > > > > for synchronization between guests sharing the same memory object when
> > > > > > > interrupts are supported (this requires using the shared memory
> > > > > > > server).
> > > > > >
> > > > > > How does the driver detect whether interrupts are supported or not?
> > > > >
> > > > > At the moment, the VM ID is set to -1 if interrupts aren't supported,
> > > > > but that may not be the clearest way to do things.  With UIO is there
> > > > > a way to detect if the interrupt pin is on?
> > > >
> > > > I suggest not designing the device to uio.  Make it a good
> > > > guest-independent
> > > > device, and if uio doesn't fit it, change it.
> > > >
> > > > Why not support interrupts unconditionally?  Is the device useful without
> > > > interrupts?
> > >
> > > Currently my patch works with or without the shared memory server.  If
> > > you give the parameter
> > >
> > > -ivshmem 256,foo
> > >
> > > then this will create (if necessary) and map /dev/shm/foo as the
> > > shared region without interrupt support.  Some users of shared memory
> > > are using it this way.
> > >
> > > Going forward we can require the shared memory server and always have
> > > interrupts enabled.
> >
> > Can you explain how they synchronize?  Polling?  Using the network?  Using
> > it as a shared cache?
> >
> > If it's a reasonable use case it makes sense to keep it.
>
> Do you mean how they synchronize without interrupts?  One project I've
> been contacted about uses the shared region directly for
> synchronization for simulations running in different VMs that share
> data in the memory region.  In my tests spinlocks in the shared region
> work between guests.

I see.

> If we want to keep the serverless implementation, do we need to
> support shm_open with -chardev somehow? Something like -chardev
> shm,name=foo.  Right now my qdev implementation just passes the name
> to the -device option and opens it.

I think using the file name is fine.

> > Another thing comes to mind - a shared memory ID, in case a guest has
> > multiple cards.
>
> Sure, a number that can be passed on the command-line and stored in a register?

Yes.  NICs use the MAC address and storage uses the disk serial number, 
this is the same thing for shared memory.


--
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Avi Kivity

On 03/10/2010 07:41 PM, Paul Brook wrote:
> > > You're much better off using a bulk-data transfer API that relaxes
> > > coherency requirements.  IOW, shared memory doesn't make sense for TCG
> >
> > Rather, tcg doesn't make sense for shared memory smp.  But we knew that
> > already.
>
> I think TCG SMP is a hard, but soluble problem, especially when you're
> running guests used to coping with NUMA.

Do you mean by using a per-cpu tlb?  These kinds of solutions are 
generally slow, but tcg's slowness may mask this out.

> TCG interacting with third parties via shared memory is probably never going
> to make sense.

The third party in this case is qemu.

--
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Nick Piggin
On Thu, Mar 11, 2010 at 03:10:47AM +, Jamie Lokier wrote:
> Paul Brook wrote:
> > > > In a cross environment that becomes extremely hairy.  For example the 
> > > > x86
> > > > architecture effectively has an implicit write barrier before every
> > > > store, and an implicit read barrier before every load.
> > > 
> > > Btw, x86 doesn't have any implicit barriers due to ordinary loads.
> > > Only stores and atomics have implicit barriers, afaik.
> > 
> > As of March 2009[1] Intel guarantees that memory reads occur in
> > order (they may only be reordered relative to writes). It appears
> > AMD do not provide this guarantee, which could be an interesting
> > problem for heterogeneous migration..
> 
> (Summary: At least on AMD64, it does too, for normal accesses to
> naturally aligned addresses in write-back cacheable memory.)
> 
> Oh, that's interesting.  Way back when I guess we knew writes were in
> order and it wasn't explicit that reads were, hence smp_rmb() using a
> locked atomic.
> 
> Here is a post by Nick Piggin from 2007 with links to Intel _and_ AMD
> documents asserting that reads to cacheable memory are in program order:
> 
> http://lkml.org/lkml/2007/9/28/212
> Subject: [patch] x86: improved memory barrier implementation
> 
> Links to documents:
> 
> http://developer.intel.com/products/processor/manuals/318147.pdf
> 
> http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/24593.pdf
> 
> The Intel link doesn't work any more, but the AMD one does.

It might have been merged into their development manual now.

 
> Nick asserts "both manufacturers are committed to in-order loads from
> cacheable memory for the x86 architecture".

At the time we did ask Intel and AMD engineers. We talked with Andy
Glew from Intel I believe, but I can't recall the AMD contact.
Linus was involved in the discussions as well. We tried to do the
right thing with this.

> I have just read the AMD document, and it is in there (but not
> completely obviously), in section 7.2.  The implicit load-load and
> store-store barriers are only guaranteed for "normal cacheable
> accesses on naturally aligned boundaries to WB [write-back cacheable]
> memory".  There are also implicit load-store barriers but not
> store-load.
> 
> Note that the document covers AMD64; it does not say anything about
> their (now old) 32-bit processors.

Hmm. Well it couldn't hurt to ask again. We've never seen any
problems yet, so I'm rather sure we're in the clear.

> 
> > [*] The most recent docs I have handy. Up to and including Core-2 Duo.
> 
> Are you sure the read ordering applies to 32-bit Intel and AMD CPUs too?
> 
> Many years ago, before 64-bit x86 existed, I recall discussions on
> LKML where it was made clear that stores were performed in program
> order.  If it were known at the time that loads were performed in
> program order on 32-bit x86s, I would have expected that to have been
> mentioned by someone.

The way it was explained to us by the Intel engineer is that they
had implemented only visibly in-order loads, but they wanted to keep
their options open in future so they did not want to commit to in
order loads as an ISA feature.

So when the whitepaper was released we got their blessing to
retroactively apply the rules to previous CPUs.



Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Jamie Lokier
Paul Brook wrote:
> > > In a cross environment that becomes extremely hairy.  For example the x86
> > > architecture effectively has an implicit write barrier before every
> > > store, and an implicit read barrier before every load.
> > 
> > Btw, x86 doesn't have any implicit barriers due to ordinary loads.
> > Only stores and atomics have implicit barriers, afaik.
> 
> As of March 2009[1] Intel guarantees that memory reads occur in
> order (they may only be reordered relative to writes). It appears
> AMD do not provide this guarantee, which could be an interesting
> problem for heterogeneous migration..

(Summary: At least on AMD64, it does too, for normal accesses to
naturally aligned addresses in write-back cacheable memory.)

Oh, that's interesting.  Way back when I guess we knew writes were in
order and it wasn't explicit that reads were, hence smp_rmb() using a
locked atomic.
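For context, the two flavours of read barrier being contrasted here look roughly like this.  This is only a sketch in the spirit of the kernel macros under discussion, not a copy of them: before the in-order-loads guarantee, smp_rmb() had to emit a real fence (a locked RMW or lfence); with the guarantee it can collapse to a compiler-only barrier.

/* Sketch of the options discussed (GCC inline asm, x86). */

/* Compiler barrier only: enough once loads are architecturally in order. */
#define barrier()       asm volatile("" ::: "memory")

/* Full fence via a locked RMW on the stack: the old, conservative choice. */
#define rmb_locked()    asm volatile("lock; addl $0,0(%%esp)" ::: "memory")

/* lfence: the SSE2-era alternative read fence. */
#define rmb_lfence()    asm volatile("lfence" ::: "memory")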

Here is a post by Nick Piggin from 2007 with links to Intel _and_ AMD
documents asserting that reads to cacheable memory are in program order:

http://lkml.org/lkml/2007/9/28/212
Subject: [patch] x86: improved memory barrier implementation

Links to documents:

http://developer.intel.com/products/processor/manuals/318147.pdf

http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/24593.pdf

The Intel link doesn't work any more, but the AMD one does.

Nick asserts "both manufacturers are committed to in-order loads from
cacheable memory for the x86 architecture".

I have just read the AMD document, and it is in there (but not
completely obviously), in section 7.2.  The implicit load-load and
store-store barriers are only guaranteed for "normal cacheable
accesses on naturally aligned boundaries to WB [write-back cacheable]
memory".  There are also implicit load-store barriers but not
store-load.
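To make the missing store-load case concrete: the classic pattern that still needs an explicit fence on both Intel and AMD, even with the guarantees above, is storing one flag and then loading the other (Dekker-style mutual exclusion).  A hedged sketch:

/* Sketch: store-load ordering is NOT implicit on x86, so this pattern
 * needs mfence (or a locked op) between the store and the load. */
#include <stdint.h>

volatile uint32_t flag0, flag1;   /* both in write-back cacheable memory */

int cpu0_enter(void)
{
    flag0 = 1;                                /* store */
    asm volatile("mfence" ::: "memory");      /* without this, the load below may
                                                 be satisfied before the store is
                                                 globally visible */
    return flag1 == 0;                        /* load */
}

int cpu1_enter(void)
{
    flag1 = 1;
    asm volatile("mfence" ::: "memory");
    return flag0 == 0;
}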

Note that the document covers AMD64; it does not say anything about
their (now old) 32-bit processors.

> [*] The most recent docs I have handy. Up to and including Core-2 Duo.

Are you sure the read ordering applies to 32-bit Intel and AMD CPUs too?

Many years ago, before 64-bit x86 existed, I recall discussions on
LKML where it was made clear that stores were performed in program
order.  If it were known at the time that loads were performed in
program order on 32-bit x86s, I would have expected that to have been
mentioned by someone.

-- Jamie


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Paul Brook
> > You're much better off using a bulk-data transfer API that relaxes
> > coherency requirements.  IOW, shared memory doesn't make sense for TCG
> 
> Rather, tcg doesn't make sense for shared memory smp.  But we knew that
> already.

I think TCG SMP is a hard, but soluble problem, especially when you're 
running guests used to coping with NUMA.

TCG interacting with third parties via shared memory is probably never going 
to make sense.

Paul


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Avi Kivity

On 03/10/2010 07:13 PM, Anthony Liguori wrote:
> On 03/10/2010 03:25 AM, Avi Kivity wrote:
> > On 03/09/2010 11:44 PM, Anthony Liguori wrote:
> > > > Ah yes.  For cross tcg environments you can map the memory using 
> > > > mmio callbacks instead of directly, and issue the appropriate 
> > > > barriers there.
> > >
> > > Not good enough unless you want to severely restrict the use of 
> > > shared memory within the guest.
> > >
> > > For instance, it's going to be useful to assume that your atomic 
> > > instructions remain atomic.  Crossing architecture boundaries here 
> > > makes these assumptions invalid.  A barrier is not enough.
> >
> > You could make the mmio callbacks flow to the shared memory server 
> > over the unix-domain socket, which would then serialize them.  Still 
> > need to keep RMWs as single operations.  When the host supports it, 
> > implement the operation locally (you can't render cmpxchg16b on i386, 
> > for example).
>
> But now you have a requirement that the shmem server runs in lock-step 
> with the guest VCPU which has to happen for every single word of data 
> transferred.

Alternative implementation: expose a futex in a shared memory object and 
use that to serialize access.  Now all accesses happen from vcpu 
context, and as long as there is no contention, should be fast, at least 
relative to tcg.

> You're much better off using a bulk-data transfer API that relaxes 
> coherency requirements.  IOW, shared memory doesn't make sense for TCG 
> :-)

Rather, tcg doesn't make sense for shared memory smp.  But we knew that 
already.

--
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Anthony Liguori

On 03/10/2010 03:25 AM, Avi Kivity wrote:
> On 03/09/2010 11:44 PM, Anthony Liguori wrote:
> > > Ah yes.  For cross tcg environments you can map the memory using 
> > > mmio callbacks instead of directly, and issue the appropriate 
> > > barriers there.
> >
> > Not good enough unless you want to severely restrict the use of 
> > shared memory within the guest.
> >
> > For instance, it's going to be useful to assume that your atomic 
> > instructions remain atomic.  Crossing architecture boundaries here 
> > makes these assumptions invalid.  A barrier is not enough.
>
> You could make the mmio callbacks flow to the shared memory server 
> over the unix-domain socket, which would then serialize them.  Still 
> need to keep RMWs as single operations.  When the host supports it, 
> implement the operation locally (you can't render cmpxchg16b on i386, 
> for example).

But now you have a requirement that the shmem server runs in lock-step 
with the guest VCPU which has to happen for every single word of data 
transferred.

You're much better off using a bulk-data transfer API that relaxes 
coherency requirements.  IOW, shared memory doesn't make sense for TCG :-)


Regards,

Anthony Liguori



Re: [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Cam Macdonell
On Wed, Mar 10, 2010 at 2:21 AM, Avi Kivity  wrote:
> On 03/09/2010 08:34 PM, Cam Macdonell wrote:
>>
>> On Tue, Mar 9, 2010 at 10:28 AM, Avi Kivity  wrote:
>>
>>>
>>> On 03/09/2010 05:27 PM, Cam Macdonell wrote:
>>>
>>>>>>
>>>>>>  Registers are used
>>>>>> for synchronization between guests sharing the same memory object when
>>>>>> interrupts are supported (this requires using the shared memory
>>>>>> server).
>>>>>
>>>>> How does the driver detect whether interrupts are supported or not?
>>>>
>>>> At the moment, the VM ID is set to -1 if interrupts aren't supported,
>>>> but that may not be the clearest way to do things.  With UIO is there
>>>> a way to detect if the interrupt pin is on?
>>>
>>> I suggest not designing the device to uio.  Make it a good
>>> guest-independent
>>> device, and if uio doesn't fit it, change it.
>>>
>>> Why not support interrupts unconditionally?  Is the device useful without
>>> interrupts?
>>>
>>
>> Currently my patch works with or without the shared memory server.  If
>> you give the parameter
>>
>> -ivshmem 256,foo
>>
>> then this will create (if necessary) and map /dev/shm/foo as the
>> shared region without interrupt support.  Some users of shared memory
>> are using it this way.
>>
>> Going forward we can require the shared memory server and always have
>> interrupts enabled.
>>
>
> Can you explain how they synchronize?  Polling?  Using the network?  Using
> it as a shared cache?
>
> If it's a reasonable use case it makes sense to keep it.
>

Do you mean how they synchronize without interrupts?  One project I've
been contacted about uses the shared region directly for
synchronization for simulations running in different VMs that share
data in the memory region.  In my tests spinlocks in the shared region
work between guests.
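A sketch of the kind of spinlock that works in such a shared region, built on GCC's atomic builtins around a word inside the mapping.  This is purely illustrative; the projects mentioned presumably have their own implementations.

/* Sketch: a word-sized spinlock placed inside the shared mapping.
 * Works between guests because the lock word itself is shared and
 * x86 atomics operate on the underlying physical memory. */
#include <stdint.h>

typedef volatile uint32_t shared_spinlock_t;   /* 0 = free, 1 = held */

static void shared_spin_lock(shared_spinlock_t *lock)
{
    while (__sync_lock_test_and_set(lock, 1))  /* atomic exchange, acquire */
        while (*lock)                          /* spin read-only while held */
            asm volatile("pause" ::: "memory");
}

static void shared_spin_unlock(shared_spinlock_t *lock)
{
    __sync_lock_release(lock);                 /* store 0 with release semantics */
}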

If we want to keep the serverless implementation, do we need to
support shm_open with -chardev somehow? Something like -chardev
shm,name=foo.  Right now my qdev implementation just passes the name
to the -device option and opens it.

> Another thing comes to mind - a shared memory ID, in case a guest has
> multiple cards.

Sure, a number that can be passed on the command-line and stored in a register?

Cam


Re: [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Arnd Bergmann
On Tuesday 09 March 2010, Cam Macdonell wrote:
> >
> > We could make the masking in RAM, not in registers, like virtio, which would
> > require no exits.  It would then be part of the application specific
> > protocol and out of scope of this spec.
> >
> 
> This kind of implementation would be possible now since with UIO it's
> up to the application whether to mask interrupts or not and what
> interrupts mean.  We could leave the interrupt mask register for those
> who want that behaviour.  Arnd's idea would remove the need for the
> Doorbell and Mask, but we will always need at least one MMIO register
> to send whatever interrupts we do send.

You'd also have to be very careful if the notification is in RAM to
avoid races between one guest triggering an interrupt and another
guest clearing its interrupt mask.

A totally different option that avoids this whole problem would
be to separate the signalling from the shared memory, making the
PCI shared memory device a trivial device with a single memory BAR,
and using a higher-level concept like a virtio-based
serial line for the actual signalling.

Arnd


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Paul Brook
> >> As of March 2009[1] Intel guarantees that memory reads occur in order
> >> (they may only be reordered relative to writes). It appears AMD do not
> >> provide this guarantee, which could be an interesting problem for
> >> heterogeneous migration..
> >
> > Interesting, but what ordering would cause problems that AMD would do
> > but Intel wouldn't?  Wouldn't that ordering cause the same problems
> > for POSIX shared memory in general (regardless of Qemu) on AMD?
> 
> If some code was written for the Intel guarantees it would break if
> migrated to AMD.  Of course, it would also break if run on AMD in the
> first place.

Right. This is independent of shared memory, and is a case where reporting an 
Intel CPUID on an AMD host might get you into trouble.

Paul


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Avi Kivity

On 03/10/2010 06:38 AM, Cam Macdonell wrote:
> On Tue, Mar 9, 2010 at 5:03 PM, Paul Brook  wrote:
> > > > In a cross environment that becomes extremely hairy.  For example the x86
> > > > architecture effectively has an implicit write barrier before every
> > > > store, and an implicit read barrier before every load.
> > >
> > > Btw, x86 doesn't have any implicit barriers due to ordinary loads.
> > > Only stores and atomics have implicit barriers, afaik.
> >
> > As of March 2009[1] Intel guarantees that memory reads occur in order (they
> > may only be reordered relative to writes). It appears AMD do not provide this
> > guarantee, which could be an interesting problem for heterogeneous migration..
> >
> > Paul
> >
> > [*] The most recent docs I have handy. Up to and including Core-2 Duo.
>
> Interesting, but what ordering would cause problems that AMD would do
> but Intel wouldn't?  Wouldn't that ordering cause the same problems
> for POSIX shared memory in general (regardless of Qemu) on AMD?

If some code was written for the Intel guarantees it would break if 
migrated to AMD.  Of course, it would also break if run on AMD in the 
first place.

> I think shared memory breaks migration anyway.

Until someone implements distributed shared memory.

--
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Avi Kivity

On 03/09/2010 11:44 PM, Anthony Liguori wrote:
> > Ah yes.  For cross tcg environments you can map the memory using mmio 
> > callbacks instead of directly, and issue the appropriate barriers there.
>
> Not good enough unless you want to severely restrict the use of shared 
> memory within the guest.
>
> For instance, it's going to be useful to assume that your atomic 
> instructions remain atomic.  Crossing architecture boundaries here 
> makes these assumptions invalid.  A barrier is not enough.

You could make the mmio callbacks flow to the shared memory server over 
the unix-domain socket, which would then serialize them.  Still need to 
keep RMWs as single operations.  When the host supports it, implement 
the operation locally (you can't render cmpxchg16b on i386, for example).

> Shared memory only makes sense when using KVM.  In fact, we should 
> actively disable the shared memory device when not using KVM.

Looks like that's the only practical choice.
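For reference, the "map it via mmio callbacks and add barriers" idea would look roughly like the sketch below: instead of mapping the BAR as RAM, every read and write goes through a callback that brackets the access with a host memory barrier.  The function names are made up for illustration; this is not the actual patch nor QEMU's MMIO API.

/* Sketch: MMIO-style accessors for the shared BAR that insert host
 * barriers around every access.  Names are illustrative only. */
#include <stdint.h>

static void *shm_base;   /* host mapping of the shared memory object */

static uint32_t shm_mmio_readl(uint64_t offset)
{
    __sync_synchronize();                              /* full barrier before the load */
    uint32_t val = *(volatile uint32_t *)((char *)shm_base + offset);
    __sync_synchronize();                              /* ...and after it */
    return val;
}

static void shm_mmio_writel(uint64_t offset, uint32_t val)
{
    __sync_synchronize();                              /* order against earlier accesses */
    *(volatile uint32_t *)((char *)shm_base + offset) = val;
    __sync_synchronize();                              /* make the store visible */
}

As noted in the thread, this fixes ordering but not atomicity of guest RMW instructions, which is why the discussion moves on to serializing through the shared memory server or a futex.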

--
error compiling committee.c: too many arguments to function



Re: [PATCH] Inter-VM shared memory PCI device

2010-03-10 Thread Avi Kivity

On 03/09/2010 08:34 PM, Cam Macdonell wrote:
> On Tue, Mar 9, 2010 at 10:28 AM, Avi Kivity  wrote:
> > On 03/09/2010 05:27 PM, Cam Macdonell wrote:
> > > > >  Registers are used
> > > > > for synchronization between guests sharing the same memory object when
> > > > > interrupts are supported (this requires using the shared memory server).
> > > >
> > > > How does the driver detect whether interrupts are supported or not?
> > >
> > > At the moment, the VM ID is set to -1 if interrupts aren't supported,
> > > but that may not be the clearest way to do things.  With UIO is there
> > > a way to detect if the interrupt pin is on?
> >
> > I suggest not designing the device to uio.  Make it a good guest-independent
> > device, and if uio doesn't fit it, change it.
> >
> > Why not support interrupts unconditionally?  Is the device useful without
> > interrupts?
>
> Currently my patch works with or without the shared memory server.  If
> you give the parameter
>
> -ivshmem 256,foo
>
> then this will create (if necessary) and map /dev/shm/foo as the
> shared region without interrupt support.  Some users of shared memory
> are using it this way.
>
> Going forward we can require the shared memory server and always have
> interrupts enabled.

Can you explain how they synchronize?  Polling?  Using the network?  
Using it as a shared cache?

If it's a reasonable use case it makes sense to keep it.

Another thing comes to mind - a shared memory ID, in case a guest has 
multiple cards.


--
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Cam Macdonell
On Tue, Mar 9, 2010 at 5:03 PM, Paul Brook  wrote:
>> > In a cross environment that becomes extremely hairy.  For example the x86
>> > architecture effectively has an implicit write barrier before every
>> > store, and an implicit read barrier before every load.
>>
>> Btw, x86 doesn't have any implicit barriers due to ordinary loads.
>> Only stores and atomics have implicit barriers, afaik.
>
> As of March 2009[1] Intel guarantees that memory reads occur in order (they
> may only be reordered relative to writes). It appears AMD do not provide this
> guarantee, which could be an interesting problem for heterogeneous migration..
>
> Paul
>
> [*] The most recent docs I have handy. Up to and including Core-2 Duo.
>

Interesting, but what ordering would cause problems that AMD would do
but Intel wouldn't?  Wouldn't that ordering cause the same problems
for POSIX shared memory in general (regardless of Qemu) on AMD?

I think shared memory breaks migration anyway.

Cam


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Paul Brook
> > In a cross environment that becomes extremely hairy.  For example the x86
> > architecture effectively has an implicit write barrier before every
> > store, and an implicit read barrier before every load.
> 
> Btw, x86 doesn't have any implicit barriers due to ordinary loads.
> Only stores and atomics have implicit barriers, afaik.

As of March 2009[1] Intel guarantees that memory reads occur in order (they 
may only be reordered relative to writes). It appears AMD do not provide this 
guarantee, which could be an interesting problem for heterogeneous migration..

Paul

[*] The most recent docs I have handy. Up to and including Core-2 Duo.


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Anthony Liguori

On 03/08/2010 07:16 AM, Avi Kivity wrote:
> On 03/08/2010 03:03 PM, Paul Brook wrote:
> > > On 03/08/2010 12:53 AM, Paul Brook wrote:
> > > > > Support an inter-vm shared memory device that maps a shared-memory
> > > > > object as a PCI device in the guest.  This patch also supports
> > > > > interrupts between guests by communicating over a unix domain socket.
> > > > > This patch applies to the qemu-kvm repository.
> > > >
> > > > No. All new devices should be fully qdev based.
> > > >
> > > > I suspect you've also ignored a load of coherency issues, especially when
> > > > not using KVM. As soon as you have shared memory in more than one host
> > > > thread/process you have to worry about memory barriers.
> > >
> > > Shouldn't it be sufficient to require the guest to issue barriers (and
> > > to ensure tcg honours the barriers, if someone wants this with tcg)?
> >
> > In a cross environment that becomes extremely hairy.  For example the x86
> > architecture effectively has an implicit write barrier before every store, and
> > an implicit read barrier before every load.
>
> Ah yes.  For cross tcg environments you can map the memory using mmio 
> callbacks instead of directly, and issue the appropriate barriers there.

Not good enough unless you want to severely restrict the use of shared 
memory within the guest.

For instance, it's going to be useful to assume that your atomic 
instructions remain atomic.  Crossing architecture boundaries here makes 
these assumptions invalid.  A barrier is not enough.

Shared memory only makes sense when using KVM.  In fact, we should 
actively disable the shared memory device when not using KVM.




Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Anthony Liguori

On 03/08/2010 03:54 AM, Jamie Lokier wrote:
> Alexander Graf wrote:
> > Or we could put in some code that tells the guest the host shm
> > architecture and only accept x86 on x86 for now. If anyone cares for
> > other combinations, they're free to implement them.
> >
> > Seriously, we're looking at an interface designed for kvm here. Let's
> > please keep it as simple and fast as possible for the actual use case,
> > not some theoretically possible ones.
>
> The concern is that a perfectly working guest image running on kvm,
> the guest being some OS or app that uses this facility (_not_ a
> kvm-only guest driver), is later run on qemu on a different host, and
> then mostly works except for some silent data corruption.
>
> That is not a theoretical scenario.

Hint: no matter what you do, shared memory is a hack that's going to 
lead to subtle failures one way or another.

It's useful to support because it has some interesting academic uses but 
it's not a mechanism that can ever be used for real world purposes.

It's impossible to support save/restore correctly.  It can never be made 
to work with TCG in a safe way.  That's why I've been advocating keeping 
this as simple as humanly possible.  It's just not worth trying to make 
this fancier than it needs to be because it will never be fully correct.

Regards,

Anthony Liguori

> Well, the bit with this driver is theoretical, obviously :-)
> But not the bit about moving to a different host.
>
> -- Jamie




Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Jamie Lokier
Paul Brook wrote:
> > On 03/08/2010 12:53 AM, Paul Brook wrote:
> > >> Support an inter-vm shared memory device that maps a shared-memory
> > >> object as a PCI device in the guest.  This patch also supports
> > >> interrupts between guests by communicating over a unix domain socket. 
> > >> This patch applies to the qemu-kvm repository.
> > >
> > > No. All new devices should be fully qdev based.
> > >
> > > I suspect you've also ignored a load of coherency issues, especially when
> > > not using KVM. As soon as you have shared memory in more than one host
> > > thread/process you have to worry about memory barriers.
> > 
> > Shouldn't it be sufficient to require the guest to issue barriers (and
> > to ensure tcg honours the barriers, if someone wants this with tcg)?.
> 
> In a cross environment that becomes extremely hairy.  For example the x86 
> architecture effectively has an implicit write barrier before every store, 
> and 
> an implicit read barrier before every load.

Btw, x86 doesn't have any implicit barriers due to ordinary loads.
Only stores and atomics have implicit barriers, afaik.

-- Jamie


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Jamie Lokier
Avi Kivity wrote:
> On 03/08/2010 03:03 PM, Paul Brook wrote:
> >>On 03/08/2010 12:53 AM, Paul Brook wrote:
> >> 
> Support an inter-vm shared memory device that maps a shared-memory
> object as a PCI device in the guest.  This patch also supports
> interrupts between guests by communicating over a unix domain socket.
> This patch applies to the qemu-kvm repository.
>  
> >>>No. All new devices should be fully qdev based.
> >>>
> >>>I suspect you've also ignored a load of coherency issues, especially when
> >>>not using KVM. As soon as you have shared memory in more than one host
> >>>thread/process you have to worry about memory barriers.
> >>>   
> >>Shouldn't it be sufficient to require the guest to issue barriers (and
> >>to ensure tcg honours the barriers, if someone wants this with tcg)?.
> >> 
> >In a cross environment that becomes extremely hairy.  For example the x86
> >architecture effectively has an implicit write barrier before every store, 
> >and
> >an implicit read barrier before every load.
> >   
> 
> Ah yes.  For cross tcg environments you can map the memory using mmio 
> callbacks instead of directly, and issue the appropriate barriers there.

That makes sense.  It will force an mmio callback for every access to
the shared memory, which is ok for correctness but vastly slower when
running in TCG compared with KVM.

But it's hard to see what else could be done - those implicit write
barriers on x86 have to be emulated somehow.  For TCG without inter-vm
shared memory, those barriers aren't a problem.

Non-random-corruption guest behaviour is paramount, so I hope the
inter-vm device will add those mmio callbacks for the cross-arch case
before it sees much action.  (Strictly, it isn't cross-arch, but
host-has-more-relaxed-implicit-memory-model-than-guest.  I'm assuming
TCG doesn't reorder memory instructions).

-- Jamie


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Jamie Lokier
Paul Brook wrote:
> > However, coherence could be made host-type-independent by the host
> > mapping and unmapping pages, so that each page is only mapped into one
> > guest (or guest CPU) at a time.  Just like some clustering filesystems
> > do to maintain coherence.
> 
> You're assuming that a TLB flush implies a write barrier, and a TLB miss 
> implies a read barrier.  I'd be surprised if this were true in general.

The host driver itself can issue full barriers at the same time as it
maps pages on TLB miss, and would probably have to interrupt the
guest's SMP KVM threads to insert a full barrier when broadcasting a
TLB flush on unmap.

-- Jamie



Re: [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Cam Macdonell
On Tue, Mar 9, 2010 at 10:28 AM, Avi Kivity  wrote:
> On 03/09/2010 05:27 PM, Cam Macdonell wrote:
>>
>>>
>>>>  Registers are used
>>>> for synchronization between guests sharing the same memory object when
>>>> interrupts are supported (this requires using the shared memory server).
>>>>
>>>
>>> How does the driver detect whether interrupts are supported or not?
>>>
>>
>> At the moment, the VM ID is set to -1 if interrupts aren't supported,
>> but that may not be the clearest way to do things.  With UIO is there
>> a way to detect if the interrupt pin is on?
>>
>
> I suggest not designing the device to uio.  Make it a good guest-independent
> device, and if uio doesn't fit it, change it.
>
> Why not support interrupts unconditionally?  Is the device useful without
> interrupts?

Currently my patch works with or without the shared memory server.  If
you give the parameter

-ivshmem 256,foo

then this will create (if necessary) and map /dev/shm/foo as the
shared region without interrupt support.  Some users of shared memory
are using it this way.

Going forward we can require the shared memory server and always have
interrupts enabled.
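For readers unfamiliar with the serverless mode: "-ivshmem 256,foo" boils down to ordinary POSIX shared memory on the host, so a host-side peer ends up sharing the same pages that back the guest's BAR.  A hedged sketch of how a host process would attach to the same object (standard shm_open() usage, not code from the patch, and assuming the 256 is in megabytes):

/* Sketch: attaching to the POSIX shared memory object ("foo") that
 * -ivshmem 256,foo creates under /dev/shm.  Not patch code. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    size_t size = 256 << 20;                      /* 256 MB, matching the example */
    int fd = shm_open("/foo", O_RDWR | O_CREAT, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* 'mem' now aliases the contents the guest sees through its BAR. */
    munmap(mem, size);
    close(fd);
    return 0;
}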

>
>>>> The Doorbell register is 16-bits, but is treated as two 8-bit values.  The
>>>> upper 8-bits are used for the destination VM ID.  The lower 8-bits are the
>>>> value which will be written to the destination VM and what the guest status
>>>> register will be set to when the interrupt is triggered in the destination
>>>> guest.
>>>>
>>>
>>> What happens when two interrupts are sent back-to-back to the same guest?
>>>  Will the first status value be lost?
>>>
>>
>> Right now, it would be.  I believe that eventfd has a counting
>> semaphore option, that could prevent loss of status (but limits what
>> the status could be).
>>
>
> It only counts the number of interrupts (and kvm will coalesce them anyway).

Right.

>
>> My understanding of uio_pci interrupt handling
>> is fairly new, but we could have the uio driver store the interrupt
>> statuses to avoid losing them.
>>
>
> There's nowhere to store them if we use ioeventfd/irqfd.  I think it's both
> easier and more efficient to leave this to the application (to store into
> shared memory).

Agreed.

Cam


Re: [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Anthony Liguori

On 03/09/2010 11:28 AM, Avi Kivity wrote:
> On 03/09/2010 05:27 PM, Cam Macdonell wrote:
> > > >  Registers are used
> > > > for synchronization between guests sharing the same memory object when
> > > > interrupts are supported (this requires using the shared memory 
> > > > server).
> > >
> > > How does the driver detect whether interrupts are supported or not?
> >
> > At the moment, the VM ID is set to -1 if interrupts aren't supported,
> > but that may not be the clearest way to do things.  With UIO is there
> > a way to detect if the interrupt pin is on?
>
> I suggest not designing the device to uio.  Make it a good 
> guest-independent device, and if uio doesn't fit it, change it.

You can always fall back to reading the config space directly.  It's not 
strictly required that you stick to the UIO interface.

> Why not support interrupts unconditionally?  Is the device useful 
> without interrupts?

You can always just have interrupts enabled and not use them if that's 
desired.
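To make the config-space suggestion concrete: with UIO the PCI device's configuration space is exposed through sysfs, so a userspace driver can read the standard Interrupt Pin register (offset 0x3D) itself.  A hedged sketch; the uio0 path and the way the result is used are illustrative only.

/* Sketch: detect whether the device advertises a legacy interrupt pin by
 * reading PCI config space via sysfs (offset 0x3D = Interrupt Pin). */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* /sys/class/uio/uio0/device is a symlink to the PCI device directory. */
    int fd = open("/sys/class/uio/uio0/device/config", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char pin = 0;
    pread(fd, &pin, 1, 0x3D);        /* 0 = no interrupt pin, 1-4 = INTA-INTD */
    printf("interrupt pin: %u\n", pin);

    close(fd);
    return pin == 0;                 /* nonzero exit if interrupts unsupported */
}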


Regards,

Anthony Liguori


Re: [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Avi Kivity

On 03/09/2010 05:27 PM, Cam Macdonell wrote:
> > >  Registers are used
> > > for synchronization between guests sharing the same memory object when
> > > interrupts are supported (this requires using the shared memory server).
> >
> > How does the driver detect whether interrupts are supported or not?
>
> At the moment, the VM ID is set to -1 if interrupts aren't supported,
> but that may not be the clearest way to do things.  With UIO is there
> a way to detect if the interrupt pin is on?

I suggest not designing the device to uio.  Make it a good 
guest-independent device, and if uio doesn't fit it, change it.

Why not support interrupts unconditionally?  Is the device useful 
without interrupts?

> > > The Doorbell register is 16-bits, but is treated as two 8-bit values.  The
> > > upper 8-bits are used for the destination VM ID.  The lower 8-bits are the
> > > value which will be written to the destination VM and what the guest status
> > > register will be set to when the interrupt is triggered in the destination
> > > guest.
> >
> > What happens when two interrupts are sent back-to-back to the same guest?
> > Will the first status value be lost?
>
> Right now, it would be.  I believe that eventfd has a counting
> semaphore option, that could prevent loss of status (but limits what
> the status could be).

It only counts the number of interrupts (and kvm will coalesce them anyway).

> My understanding of uio_pci interrupt handling
> is fairly new, but we could have the uio driver store the interrupt
> statuses to avoid losing them.

There's nowhere to store them if we use ioeventfd/irqfd.  I think it's 
both easier and more efficient to leave this to the application (to 
store into shared memory).


--

error compiling committee.c: too many arguments to function



Re: [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Cam Macdonell
On Tue, Mar 9, 2010 at 6:03 AM, Avi Kivity  wrote:
> On 03/09/2010 02:49 PM, Arnd Bergmann wrote:
>>
>> On Monday 08 March 2010, Cam Macdonell wrote:
>>
>>>
>>> enum ivshmem_registers {
>>>     IntrMask = 0,
>>>     IntrStatus = 2,
>>>     Doorbell = 4,
>>>     IVPosition = 6,
>>>     IVLiveList = 8
>>> };
>>>
>>> The first two registers are the interrupt mask and status registers.
>>> Interrupts are triggered when a message is received on the guest's
>>> eventfd from
>>> another VM.  Writing to the 'Doorbell' register is how synchronization
>>> messages
>>> are sent to other VMs.
>>>
>>> The IVPosition register is read-only and reports the guest's ID number.
>>>  The
>>> IVLiveList register is also read-only and reports a bit vector of
>>> currently
>>> live VM IDs.
>>>
>>> The Doorbell register is 16-bits, but is treated as two 8-bit values.
>>>  The
>>> upper 8-bits are used for the destination VM ID.  The lower 8-bits are
>>> the
>>> value which will be written to the destination VM and what the guest
>>> status
>>> register will be set to when the interrupt is triggered in the destination
>>> guest.
>>> A value of 255 in the upper 8-bits will trigger a broadcast where the
>>> message
>>> will be sent to all other guests.
>>>
>>
>> This means you have at least two intercepts for each message:
>>
>> 1. Sender writes to doorbell
>> 2. Receiver gets interrupted
>>
>> With optionally two more intercepts in order to avoid interrupting the
>> receiver every time:
>>
>> 3. Receiver masks interrupt in order to process data
>> 4. Receiver unmasks interrupt when it's done and status is no longer
>> pending
>>
>> I believe you can do much better than this, you combine status and mask
>> bits, making this level triggered, and move to a bitmask of all guests:
>>
>> In order to send an interrupt to another guest, the sender first checks
>> the bit for the receiver. If it's '1', no need for any intercept, the
>> receiver will come back anyway. If it's zero, write a '1' bit, which
>> gets OR'd into the bitmask by the host. The receiver gets interrupted
>> at a rising edge and just leaves the bit on, until it's done processing,
>> then turns the bit off by writing a '1' into its own location in the mask.
>>
>
> We could make the masking in RAM, not in registers, like virtio, which would
> require no exits.  It would then be part of the application specific
> protocol and out of scope of this spec.
>

This kind of implementation would be possible now since with UIO it's
up to the application whether to mask interrupts or not and what
interrupts mean.  We could leave the interrupt mask register for those
who want that behaviour.  Arnd's idea would remove the need for the
Doorbell and Mask, but we will always need at least one MMIO register
to send whatever interrupts we do send.

Cam


Re: [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Avi Kivity

On 03/09/2010 02:49 PM, Arnd Bergmann wrote:
> On Monday 08 March 2010, Cam Macdonell wrote:
> > enum ivshmem_registers {
> >     IntrMask = 0,
> >     IntrStatus = 2,
> >     Doorbell = 4,
> >     IVPosition = 6,
> >     IVLiveList = 8
> > };
> >
> > The first two registers are the interrupt mask and status registers.
> > Interrupts are triggered when a message is received on the guest's eventfd from
> > another VM.  Writing to the 'Doorbell' register is how synchronization messages
> > are sent to other VMs.
> >
> > The IVPosition register is read-only and reports the guest's ID number.  The
> > IVLiveList register is also read-only and reports a bit vector of currently
> > live VM IDs.
> >
> > The Doorbell register is 16-bits, but is treated as two 8-bit values.  The
> > upper 8-bits are used for the destination VM ID.  The lower 8-bits are the
> > value which will be written to the destination VM and what the guest status
> > register will be set to when the interrupt is triggered in the destination guest.
> > A value of 255 in the upper 8-bits will trigger a broadcast where the message
> > will be sent to all other guests.
>
> This means you have at least two intercepts for each message:
>
> 1. Sender writes to doorbell
> 2. Receiver gets interrupted
>
> With optionally two more intercepts in order to avoid interrupting the
> receiver every time:
>
> 3. Receiver masks interrupt in order to process data
> 4. Receiver unmasks interrupt when it's done and status is no longer pending
>
> I believe you can do much better than this, you combine status and mask
> bits, making this level triggered, and move to a bitmask of all guests:
>
> In order to send an interrupt to another guest, the sender first checks
> the bit for the receiver. If it's '1', no need for any intercept, the
> receiver will come back anyway. If it's zero, write a '1' bit, which
> gets OR'd into the bitmask by the host. The receiver gets interrupted
> at a rising edge and just leaves the bit on, until it's done processing,
> then turns the bit off by writing a '1' into its own location in the mask.

We could make the masking in RAM, not in registers, like virtio, which 
would require no exits.  It would then be part of the application 
specific protocol and out of scope of this spec.
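A sketch of what "masking in RAM, like virtio" could mean in practice: the receiver publishes a flag in the shared region and senders consult it before ringing the doorbell, so masking and unmasking become plain memory writes with no exit.  Everything here (layout, names) is made up to illustrate the idea, and the race noted elsewhere in the thread (a sender checking the flag just as the receiver clears it) still has to be handled by the application protocol.

/* Sketch: application-level interrupt masking kept in the shared region
 * itself, virtio-style.  Layout and names are illustrative only. */
#include <stdint.h>

struct shm_hdr {
    volatile uint32_t intr_enabled;   /* receiver sets/clears this: no exit */
    /* ... application data follows ... */
};

/* Receiver side: plain stores, no register access. */
static void recv_mask(struct shm_hdr *hdr)   { hdr->intr_enabled = 0; }
static void recv_unmask(struct shm_hdr *hdr) { hdr->intr_enabled = 1; }

/* Sender side: only ring the doorbell (one exit) if the peer wants it. */
static void notify_peer(struct shm_hdr *hdr, void (*ring_doorbell)(void))
{
    __sync_synchronize();             /* publish data before checking the flag */
    if (hdr->intr_enabled)
        ring_doorbell();              /* MMIO write -> eventfd -> peer interrupt */
}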


--
error compiling committee.c: too many arguments to function



[PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Cam Macdonell
On Tue, Mar 9, 2010 at 3:29 AM, Avi Kivity  wrote:
> On 03/08/2010 07:57 PM, Cam Macdonell wrote:
>>
>>> Can you provide a spec that describes the device?  This would be useful
>>> for
>>> maintaining the code, writing guest drivers, and as a framework for
>>> review.
>>>
>>
>> I'm not sure if you want the Qemu command-line part as part of the
>> spec here, but I've included for completeness.
>>
>
> I meant something from the guest's point of view, so command line syntax is
> less important.  It should be equally applicable to a real PCI card that
> works with the same driver.
>
> See http://ozlabs.org/~rusty/virtio-spec/ for an example.
>
>> The Inter-VM Shared Memory PCI device
>> ---
>>
>> BARs
>>
>> The device supports two BARs.  BAR0 is a 256-byte MMIO region to
>> support registers
>>
>
> (but might be extended in the future)
>
>> and BAR1 is used to map the shared memory object from the host.  The size
>> of
>> BAR1 is specified on the command-line and must be a power of 2 in size.
>>
>> Registers
>>
>> BAR0 currently supports 5 registers of 16-bits each.
>
> Suggest making registers 32-bits, friendlier towards non-x86.
>
>>  Registers are used
>> for synchronization between guests sharing the same memory object when
>> interrupts are supported (this requires using the shared memory server).
>>
>
> How does the driver detect whether interrupts are supported or not?

At the moment, the VM ID is set to -1 if interrupts aren't supported,
but that may not be the clearest way to do things.  With UIO is there
a way to detect if the interrupt pin is on?

>
>> When using interrupts, VMs communicate with a shared memory server that
>> passes
>> the shared memory object file descriptor using SCM_RIGHTS.  The server
>> assigns
>> each VM an ID number and sends this ID number to the Qemu process along
>> with a
>> series of eventfd file descriptors, one per guest using the shared memory
>> server.  These eventfds will be used to send interrupts between guests.
>>  Each
>> guest listens on the eventfd corresponding to their ID and may use the
>> others
>> for sending interrupts to other guests.
>>
>> enum ivshmem_registers {
>>     IntrMask = 0,
>>     IntrStatus = 2,
>>     Doorbell = 4,
>>     IVPosition = 6,
>>     IVLiveList = 8
>> };
>>
>> The first two registers are the interrupt mask and status registers.
>> Interrupts are triggered when a message is received on the guest's eventfd
>> from
>> another VM.  Writing to the 'Doorbell' register is how synchronization
>> messages
>> are sent to other VMs.
>>
>> The IVPosition register is read-only and reports the guest's ID number.
>>  The
>> IVLiveList register is also read-only and reports a bit vector of
>> currently
>> live VM IDs.
>>
>
> That limits the number of guests to 16.

True, it could grow to 32 or 64 without difficulty.  We could leave
'liveness' to the user (could be implemented using the shared memory
region) or via the interrupts that arrive on guest attach/detach as
you suggest below..

>
>> The Doorbell register is 16-bits, but is treated as two 8-bit values.  The
>> upper 8-bits are used for the destination VM ID.  The lower 8-bits are the
>> value which will be written to the destination VM and what the guest
>> status
>> register will be set to when the interrupt is triggered in the destination
>> guest.
>>
>
> What happens when two interrupts are sent back-to-back to the same guest?
>  Will the first status value be lost?

Right now, it would be.  I believe that eventfd has a counting
semaphore option, which could prevent loss of status (but limits what
the status could be).  My understanding of uio_pci interrupt handling
is fairly new, but we could have the uio driver store the interrupt
statuses to avoid losing them.
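
For reference, that mode is eventfd's EFD_SEMAPHORE flag.  A minimal sketch of
the trade-off (purely illustrative, not code from the patch):

#include <sys/eventfd.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    /* With EFD_SEMAPHORE each read() returns 1 and decrements the counter,
     * so N back-to-back notifications arrive as N reads instead of being
     * collapsed into one value. */
    int efd = eventfd(0, EFD_SEMAPHORE);
    uint64_t one = 1, got;

    write(efd, &one, sizeof(one));   /* two "interrupts" sent back to back */
    write(efd, &one, sizeof(one));

    read(efd, &got, sizeof(got));    /* got == 1 */
    read(efd, &got, sizeof(got));    /* got == 1 again: nothing is lost, but
                                        the payload is only a count, not a
                                        status value */
    close(efd);
    return 0;
}

The count survives, but any per-message status would still have to live
somewhere else, e.g. in the shared memory region.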

>
> Also, reading the status register requires a vmexit.  I suggest dropping it
> and requiring the application to manage this information in the shared
> memory area (where it could do proper queueing of multiple messages).
>
>> A value of 255 in the upper 8-bits will trigger a broadcast where the
>> message
>> will be sent to all other guests.
>>
>
> Please consider adding:
>
> - MSI support

Sure, I'll look into it.

> - interrupt on a guest attaching/detaching to the shared memory device

Sure.

>
> With MSI you could also have the doorbell specify both guest ID and vector
> number, which may be useful.
>
> Thanks for this - it definitely makes reviewing easier.


Re: [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Arnd Bergmann
On Monday 08 March 2010, Cam Macdonell wrote:
> enum ivshmem_registers {
> IntrMask = 0,
> IntrStatus = 2,
> Doorbell = 4,
> IVPosition = 6,
> IVLiveList = 8
> };
> 
> The first two registers are the interrupt mask and status registers.
> Interrupts are triggered when a message is received on the guest's eventfd 
> from
> another VM.  Writing to the 'Doorbell' register is how synchronization 
> messages
> are sent to other VMs.
> 
> The IVPosition register is read-only and reports the guest's ID number.  The
> IVLiveList register is also read-only and reports a bit vector of currently
> live VM IDs.
> 
> The Doorbell register is 16-bits, but is treated as two 8-bit values.  The
> upper 8-bits are used for the destination VM ID.  The lower 8-bits are the
> value which will be written to the destination VM and what the guest status
> register will be set to when the interrupt is triggered in the destination
> guest.
> A value of 255 in the upper 8-bits will trigger a broadcast where the message
> will be sent to all other guests.

This means you have at least two intercepts for each message:

1. Sender writes to doorbell
2. Receiver gets interrupted

With optionally two more intercepts in order to avoid interrupting the
receiver every time:

3. Receiver masks interrupt in order to process data
4. Receiver unmasks interrupt when it's done and status is no longer pending

I believe you can do much better than this: combine status and mask
bits, making this level-triggered, and move to a bitmask of all guests:

In order to send an interrupt to another guest, the sender first checks
the bit for the receiver. If it's '1', there is no need for any intercept; the
receiver will come back anyway. If it's zero, write a '1' bit, which
gets OR'd into the bitmask by the host. The receiver gets interrupted
at a rising edge and just leaves the bit on until it's done processing,
then turns the bit off by writing a '1' into its own location in the mask.
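
A rough guest-side sketch of that scheme (illustration only; whether the set
and clear operations go through one register or two, and whether the bitmask
lives in a register or in RAM, is left open here):

#include <stdint.h>

/* One word, one bit per guest: bit i set == guest i has work pending. */

static void send_to(volatile uint32_t *pending, volatile uint32_t *set_reg,
                    int receiver_id)
{
    uint32_t bit = 1u << receiver_id;

    if (*pending & bit)
        return;                 /* already set: the receiver will come back
                                   anyway, no intercept needed */
    *set_reg = bit;             /* host ORs this into the bitmask and interrupts
                                   the receiver on the 0 -> 1 edge */
}

static void done_processing(volatile uint32_t *clear_reg, int my_id)
{
    *clear_reg = 1u << my_id;   /* write '1' to own position to turn the bit off */
}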

Arnd


Re: [PATCH] Inter-VM shared memory PCI device

2010-03-09 Thread Avi Kivity

On 03/08/2010 07:57 PM, Cam Macdonell wrote:



Can you provide a spec that describes the device?  This would be useful for
maintaining the code, writing guest drivers, and as a framework for review.
 

I'm not sure if you want the Qemu command-line part as part of the
spec here, but I've included it for completeness.
   


I meant something from the guest's point of view, so command line syntax 
is less important.  It should be equally applicable to a real PCI card 
that works with the same driver.


See http://ozlabs.org/~rusty/virtio-spec/ for an example.


The Inter-VM Shared Memory PCI device
---

BARs

The device supports two BARs.  BAR0 is a 256-byte MMIO region to
support registers
   


(but might be extended in the future)


and BAR1 is used to map the shared memory object from the host.  The size of
BAR1 is specified on the command-line and must be a power of 2 in size.

Registers

BAR0 currently supports 5 registers of 16-bits each.


Suggest making registers 32-bits, friendlier towards non-x86.


  Registers are used
for synchronization between guests sharing the same memory object when
interrupts are supported (this requires using the shared memory server).
   


How does the driver detect whether interrupts are supported or not?


When using interrupts, VMs communicate with a shared memory server that passes
the shared memory object file descriptor using SCM_RIGHTS.  The server assigns
each VM an ID number and sends this ID number to the Qemu process along with a
series of eventfd file descriptors, one per guest using the shared memory
server.  These eventfds will be used to send interrupts between guests.  Each
guest listens on the eventfd corresponding to their ID and may use the others
for sending interrupts to other guests.

enum ivshmem_registers {
 IntrMask = 0,
 IntrStatus = 2,
 Doorbell = 4,
 IVPosition = 6,
 IVLiveList = 8
};

The first two registers are the interrupt mask and status registers.
Interrupts are triggered when a message is received on the guest's eventfd from
another VM.  Writing to the 'Doorbell' register is how synchronization messages
are sent to other VMs.

The IVPosition register is read-only and reports the guest's ID number.  The
IVLiveList register is also read-only and reports a bit vector of currently
live VM IDs.
   


That limits the number of guests to 16.


The Doorbell register is 16-bits, but is treated as two 8-bit values.  The
upper 8-bits are used for the destination VM ID.  The lower 8-bits are the
value which will be written to the destination VM and what the guest status
register will be set to when the interrupt is triggered in the destination guest.
   


What happens when two interrupts are sent back-to-back to the same 
guest?  Will the first status value be lost?


Also, reading the status register requires a vmexit.  I suggest dropping 
it and requiring the application to manage this information in the 
shared memory area (where it could do proper queueing of multiple messages).
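
As an illustration of what such queueing in the shared area could look like, a
minimal single-producer/single-consumer sketch (the layout, names and sizes are
made up here, not part of the proposed device):

#include <stdint.h>

#define RING_SIZE 256                       /* power of two, illustration only */

struct msg_ring {
    volatile uint32_t head;                 /* advanced by the sender   */
    volatile uint32_t tail;                 /* advanced by the receiver */
    uint8_t  msg[RING_SIZE];
};

/* Sender: queue one status byte, then ring the doorbell once. */
static int ring_put(struct msg_ring *r, uint8_t m)
{
    uint32_t h = r->head;
    if (h - r->tail == RING_SIZE)
        return -1;                          /* full; caller decides what to do */
    r->msg[h % RING_SIZE] = m;
    __sync_synchronize();                   /* publish the data before the index */
    r->head = h + 1;
    return 0;
}

/* Receiver: drain everything on each interrupt, so nothing is lost even if
 * several doorbell writes were coalesced into a single interrupt. */
static int ring_get(struct msg_ring *r, uint8_t *m)
{
    uint32_t t = r->tail;
    if (t == r->head)
        return -1;                          /* empty */
    *m = r->msg[t % RING_SIZE];
    __sync_synchronize();
    r->tail = t + 1;
    return 0;
}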



A value of 255 in the upper 8-bits will trigger a broadcast where the message
will be sent to all other guests.
   


Please consider adding:

- MSI support
- interrupt on a guest attaching/detaching to the shared memory device

With MSI you could also have the doorbell specify both guest ID and 
vector number, which may be useful.


Thanks for this - it definitely makes reviewing easier.

--
error compiling committee.c: too many arguments to function



Re: [PATCH] Inter-VM shared memory PCI device

2010-03-08 Thread Cam Macdonell
On Mon, Mar 8, 2010 at 2:56 AM, Avi Kivity  wrote:
> On 03/06/2010 01:52 AM, Cam Macdonell wrote:
>>
>> Support an inter-vm shared memory device that maps a shared-memory object
>> as a PCI device in the guest.  This patch also supports interrupts between
>> guests by communicating over a unix domain socket.  This patch applies to
>> the
>> qemu-kvm repository.
>>
>> This device now creates a qemu character device and sends 1-byte messages
>> to
>> trigger interrupts.  Writes are triggered by writing to the "Doorbell"
>> register
>> on the shared memory PCI device.  The lower 8-bits of the value written to
>> this
>> register are sent as the 1-byte message so different meanings of
>> interrupts can
>> be supported.
>>
>> Interrupts are supported between multiple VMs by using a shared memory
>> server
>>
>> -ivshmem <size>,[unix:][file]
>>
>> Interrupts can also be used between host and guest by implementing
>> a
>> listener on the host that talks to the shared memory server.  The shared
>> memory
>> server passes file descriptors for the shared memory object and eventfds
>> (our
>> interrupt mechanism) to the respective qemu instances.
>>
>>
>
> Can you provide a spec that describes the device?  This would be useful for
> maintaining the code, writing guest drivers, and as a framework for review.

I'm not sure if you want the Qemu command-line part as part of the
spec here, but I've included it for completeness.

Device Specification for Inter-VM shared memory device
---

Qemu Command-line
---

The command-line for inter-vm shared memory is as follows

-ivshmem <size>,[unix:]name

the <size> argument specifies the size of the shared memory object.  The second
option specifies either a unix domain socket (when using the unix: prefix) or a
name for the shared memory object.

If a unix domain socket is specified, the guest will receive the shared object
from the shared memory server listening on that socket and will support
interrupts with the other guests using that server.  Each server only serves
one memory object.

If a name is specified on the command line (without 'unix:'), then the guest
will open the POSIX shared memory object with that name (in /dev/shm) and the
specified size.  The guest will NOT support interrupts but the shared memory
object can be shared between multiple guests.
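
For example (the values here are only illustrative):

  -ivshmem 64,unix:/tmp/ivshmem_socket
      a 64 MB region obtained from a shared memory server listening on
      /tmp/ivshmem_socket, with interrupt support

  -ivshmem 64,ivshmem_shm
      a 64 MB region backed by the POSIX object /dev/shm/ivshmem_shm,
      shareable between guests but without interrupts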

The Inter-VM Shared Memory PCI device
---

BARs

The device supports two BARs.  BAR0 is a 256-byte MMIO region to
support registers
and BAR1 is used to map the shared memory object from the host.  The size of
BAR1 is specified on the command-line and must be a power of 2 in size.

Registers

BAR0 currently supports 5 registers of 16-bits each.  Registers are used
for synchronization between guests sharing the same memory object when
interrupts are supported (this requires using the shared memory server).

When using interrupts, VMs communicate with a shared memory server that passes
the shared memory object file descriptor using SCM_RIGHTS.  The server assigns
each VM an ID number and sends this ID number to the Qemu process along with a
series of eventfd file descriptors, one per guest using the shared memory
server.  These eventfds will be used to send interrupts between guests.  Each
guest listens on the eventfd corresponding to their ID and may use the others
for sending interrupts to other guests.
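
For reference, a minimal sketch of the SCM_RIGHTS receive side (standard POSIX
ancillary-data handling; this is not the server's actual wire protocol, only
the fd-passing mechanics):

#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

/* Receive one byte of data plus one file descriptor (shm object or eventfd). */
static int recv_fd(int sock, char *data, int *fd_out)
{
    struct msghdr msg;
    struct iovec iov = { .iov_base = data, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;

    cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg && cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS)
        memcpy(fd_out, CMSG_DATA(cmsg), sizeof(int));
    return 0;
}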

enum ivshmem_registers {
IntrMask = 0,
IntrStatus = 2,
Doorbell = 4,
IVPosition = 6,
IVLiveList = 8
};

The first two registers are the interrupt mask and status registers.
Interrupts are triggered when a message is received on the guest's eventfd from
another VM.  Writing to the 'Doorbell' register is how synchronization messages
are sent to other VMs.

The IVPosition register is read-only and reports the guest's ID number.  The
IVLiveList register is also read-only and reports a bit vector of currently
live VM IDs.

The Doorbell register is 16-bits, but is treated as two 8-bit values.  The
upper 8-bits are used for the destination VM ID.  The lower 8-bits are the
value which will be written to the destination VM and what the guest status
register will be set to when the interrupt is triggered in the destination guest.
A value of 255 in the upper 8-bits will trigger a broadcast where the message
will be sent to all other guests.
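
To make the register usage concrete, a guest-side sketch (it assumes BAR0 has
already been mapped into the driver, e.g. through UIO; the helper names are
hypothetical):

#include <stdint.h>

/* register offsets as defined above */
enum ivshmem_registers {
    IntrMask = 0,
    IntrStatus = 2,
    Doorbell = 4,
    IVPosition = 6,
    IVLiveList = 8
};

static inline uint16_t reg_read(volatile uint8_t *bar0, int off)
{
    return *(volatile uint16_t *)(bar0 + off);
}

static inline void reg_write(volatile uint8_t *bar0, int off, uint16_t val)
{
    *(volatile uint16_t *)(bar0 + off) = val;
}

/* Send message byte 'msg' to guest 'dest'; dest == 255 broadcasts to all
 * other guests. */
static void ring_doorbell(volatile uint8_t *bar0, uint8_t dest, uint8_t msg)
{
    reg_write(bar0, Doorbell, ((uint16_t)dest << 8) | msg);
}

static uint16_t my_id(volatile uint8_t *bar0)    { return reg_read(bar0, IVPosition); }
static uint16_t live_vms(volatile uint8_t *bar0) { return reg_read(bar0, IVLiveList); }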

Cheers,
Cam

>
> --
> error compiling committee.c: too many arguments to function
>
>


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-08 Thread Avi Kivity

On 03/08/2010 03:03 PM, Paul Brook wrote:

On 03/08/2010 12:53 AM, Paul Brook wrote:
 

Support an inter-vm shared memory device that maps a shared-memory
object as a PCI device in the guest.  This patch also supports
interrupts between guests by communicating over a unix domain socket.
This patch applies to the qemu-kvm repository.
 

No. All new devices should be fully qdev based.

I suspect you've also ignored a load of coherency issues, especially when
not using KVM. As soon as you have shared memory in more than one host
thread/process you have to worry about memory barriers.
   

Shouldn't it be sufficient to require the guest to issue barriers (and
to ensure tcg honours the barriers, if someone wants this with tcg)?
 

In a cross environment that becomes extremely hairy.  For example the x86
architecture effectively has an implicit write barrier before every store, and
an implicit read barrier before every load.
   


Ah yes.  For cross tcg environments you can map the memory using mmio 
callbacks instead of directly, and issue the appropriate barriers there.
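
Roughly like the following sketch; the callback shapes are illustrative, not
qemu's actual mmio API, and a full barrier is used as the conservative choice:

#include <stdint.h>

static uint32_t ivshmem_backing_readl(void *opaque, uint64_t addr)
{
    uint8_t *shm = opaque;
    __sync_synchronize();            /* order the load after everything the
                                        guest did before this access */
    return *(uint32_t *)(shm + addr);
}

static void ivshmem_backing_writel(void *opaque, uint64_t addr, uint32_t val)
{
    uint8_t *shm = opaque;
    *(uint32_t *)(shm + addr) = val;
    __sync_synchronize();            /* order the store before anything the
                                        guest does next */
}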


--
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-08 Thread Paul Brook
> However, coherence could be made host-type-independent by the host
> mapping and unmapping pages, so that each page is only mapped into one
> guest (or guest CPU) at a time.  Just like some clustering filesystems
> do to maintain coherence.

You're assuming that a TLB flush implies a write barrier, and a TLB miss 
implies a read barrier.  I'd be surprised if this were true in general.

Paul


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-08 Thread Paul Brook
> On 03/08/2010 12:53 AM, Paul Brook wrote:
> >> Support an inter-vm shared memory device that maps a shared-memory
> >> object as a PCI device in the guest.  This patch also supports
> >> interrupts between guests by communicating over a unix domain socket.
> >> This patch applies to the qemu-kvm repository.
> >
> > No. All new devices should be fully qdev based.
> >
> > I suspect you've also ignored a load of coherency issues, especially when
> > not using KVM. As soon as you have shared memory in more than one host
> > thread/process you have to worry about memory barriers.
> 
> Shouldn't it be sufficient to require the guest to issue barriers (and
> to ensure tcg honours the barriers, if someone wants this with tcg)?

In a cross environment that becomes extremely hairy.  For example the x86 
architecture effectively has an implicit write barrier before every store, and 
an implicit read barrier before every load.

Paul


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-08 Thread Alexander Graf
Jamie Lokier wrote:
> Alexander Graf wrote:
>   
>> Or we could put in some code that tells the guest the host shm  
>> architecture and only accept x86 on x86 for now. If anyone cares for  
>> other combinations, they're free to implement them.
>>
>> Seriously, we're looking at an interface designed for kvm here. Let's  
>> please keep it as simple and fast as possible for the actual use case,  
>> not some theoretically possible ones.
>> 
>
> The concern is that a perfectly working guest image running on kvm,
> the guest being some OS or app that uses this facility (_not_ a
> kvm-only guest driver), is later run on qemu on a different host, and
> then mostly works except for some silent data corruption.
>
> That is not a theoretical scenario.
>
> Well, the bit with this driver is theoretical, obviously :-)
> But not the bit about moving to a different host.
>   

I agree. Hence there should be a safety check so people can't corrupt
their data silently.

Alex


Re: [PATCH] Inter-VM shared memory PCI device

2010-03-08 Thread Avi Kivity

On 03/06/2010 01:52 AM, Cam Macdonell wrote:

Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest.  This patch also supports interrupts between
guests by communicating over a unix domain socket.  This patch applies to the
qemu-kvm repository.

This device now creates a qemu character device and sends 1-byte messages to
trigger interrupts.  Writes are triggered by writing to the "Doorbell" register
on the shared memory PCI device.  The lower 8-bits of the value written to this
register are sent as the 1-byte message so different meanings of interrupts can
be supported.

Interrupts are supported between multiple VMs by using a shared memory server

-ivshmem <size>,[unix:][file]

Interrupts can also be used between host and guest by implementing a
listener on the host that talks to the shared memory server.  The shared memory
server passes file descriptors for the shared memory object and eventfds (our
interrupt mechanism) to the respective qemu instances.

   


Can you provide a spec that describes the device?  This would be useful 
for maintaining the code, writing guest drivers, and as a framework for 
review.


--
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-08 Thread Jamie Lokier
Alexander Graf wrote:
> Or we could put in some code that tells the guest the host shm  
> architecture and only accept x86 on x86 for now. If anyone cares for  
> other combinations, they're free to implement them.
> 
> Seriously, we're looking at an interface designed for kvm here. Let's  
> please keep it as simple and fast as possible for the actual use case,  
> not some theoretically possible ones.

The concern is that a perfectly working guest image running on kvm,
the guest being some OS or app that uses this facility (_not_ a
kvm-only guest driver), is later run on qemu on a different host, and
then mostly works except for some silent data corruption.

That is not a theoretical scenario.

Well, the bit with this driver is theoretical, obviously :-)
But not the bit about moving to a different host.

-- Jamie


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-08 Thread Avi Kivity

On 03/08/2010 12:53 AM, Paul Brook wrote:

Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest.  This patch also supports interrupts between
guests by communicating over a unix domain socket.  This patch applies to
the qemu-kvm repository.
 

No. All new devices should be fully qdev based.

I suspect you've also ignored a load of coherency issues, especially when not
using KVM. As soon as you have shared memory in more than one host
thread/process you have to worry about memory barriers.
   


Shouldn't it be sufficient to require the guest to issue barriers (and 
to ensure tcg honours the barriers, if someone wants this with tcg)?


--
error compiling committee.c: too many arguments to function



Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-08 Thread Alexander Graf


On 08.03.2010 at 02:45, Jamie Lokier wrote:


Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest.  This patch also supports interrupts between
guests by communicating over a unix domain socket.  This patch applies to
the qemu-kvm repository.


No. All new devices should be fully qdev based.

I suspect you've also ignored a load of coherency issues, especially when not
using KVM. As soon as you have shared memory in more than one host
thread/process you have to worry about memory barriers.


Yes. Guest-observable behaviour is likely to be quite different on
different hosts, especially between x86 and non-x86 hosts, which is not
good at all for emulation.

Memory barriers performed by the guest would help, but would not
remove the fact that behaviour would vary between different host types
if a guest doesn't call them.  I.e. you could accidentally have some
guests working fine for years on x86 hosts, which gain subtle
memory corruption as soon as you run them on a different host.

This is acceptable when recompiling code for different architectures,
but it's asking for trouble with binary guest images which aren't
supposed to depend on host architecture.

However, coherence could be made host-type-independent by the host
mapping and unmapping pages, so that each page is only mapped into one
guest (or guest CPU) at a time.  Just like some clustering filesystems
do to maintain coherence.


Or we could put in some code that tells the guest the host shm  
architecture and only accept x86 on x86 for now. If anyone cares for  
other combinations, they're free to implement them.


Seriously, we're looking at an interface designed for kvm here. Let's  
please keep it as simple and fast as possible for the actual use case,  
not some theoretically possible ones.



Alex


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-07 Thread Jamie Lokier
Paul Brook wrote:
> > Support an inter-vm shared memory device that maps a shared-memory object
> > as a PCI device in the guest.  This patch also supports interrupts between
> > guests by communicating over a unix domain socket.  This patch applies to
> >  the qemu-kvm repository.
> 
> No. All new devices should be fully qdev based.
> 
> I suspect you've also ignored a load of coherency issues, especially when not 
> using KVM. As soon as you have shared memory in more than one host 
> thread/process you have to worry about memory barriers.

Yes. Guest-observable behaviour is likely to be quite different on
different hosts, especially between x86 and non-x86 hosts, which is not
good at all for emulation.

Memory barriers performed by the guest would help, but would not
remove the fact that behaviour would vary between different host types
if a guest doesn't call them.  I.e. you could accidentally have some
guests working fine for years on x86 hosts, which gain subtle
memory corruption as soon as you run them on a different host.

This is acceptable when recompiling code for different architectures,
but it's asking for trouble with binary guest images which aren't
supposed to depend on host architecture.

However, coherence could be made host-type-independent by the host
mapping and unmapping pages, so that each page is only mapped into one
guest (or guest CPU) at a time.  Just like some clustering filesystems
do to maintain coherence.

-- Jamie


Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device

2010-03-07 Thread Paul Brook
> Support an inter-vm shared memory device that maps a shared-memory object
> as a PCI device in the guest.  This patch also supports interrupts between
> guests by communicating over a unix domain socket.  This patch applies to
>  the qemu-kvm repository.

No. All new devices should be fully qdev based.

I suspect you've also ignored a load of coherency issues, especially when not 
using KVM. As soon as you have shared memory in more than one host 
thread/process you have to worry about memory barriers.

Paul


[PATCH] Inter-VM shared memory PCI device

2010-03-05 Thread Cam Macdonell
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest.  This patch also supports interrupts between
guests by communicating over a unix domain socket.  This patch applies to the
qemu-kvm repository.

This device now creates a qemu character device and sends 1-byte messages to
trigger interrupts.  Writes are triggered by writing to the "Doorbell" register
on the shared memory PCI device.  The lower 8-bits of the value written to this
register are sent as the 1-byte message so different meanings of interrupts can
be supported.

Interrupts are supported between multiple VMs by using a shared memory server

-ivshmem <size>,[unix:][file]

Interrupts can also be used between host and guest by implementing a
listener on the host that talks to the shared memory server.  The shared memory
server passes file descriptors for the shared memory object and eventfds (our
interrupt mechanism) to the respective qemu instances.

Sample programs, init scripts and the shared memory server are available in a
git repo here:

www.gitorious.org/nahanni
---
 Makefile.target |3 +
 hw/ivshmem.c|  561 +++
 hw/pc.c |6 +
 hw/pc.h |3 +
 qemu-char.c |6 +
 qemu-char.h |3 +
 qemu-options.hx |   12 ++
 sysemu.h|8 +
 vl.c|   13 ++
 9 files changed, 615 insertions(+), 0 deletions(-)
 create mode 100644 hw/ivshmem.c

diff --git a/Makefile.target b/Makefile.target
index 82caf20..921dc74 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -217,6 +217,9 @@ obj-y += pcnet.o
 obj-y += rtl8139.o
 obj-y += e1000.o
 
+# Inter-VM PCI shared memory
+obj-y += ivshmem.o
+
 # Hardware support
 obj-i386-y = ide/core.o ide/qdev.o ide/isa.o ide/pci.o ide/piix.o
 obj-i386-y += pckbd.o $(sound-obj-y) dma.o
diff --git a/hw/ivshmem.c b/hw/ivshmem.c
new file mode 100644
index 000..aa88c07
--- /dev/null
+++ b/hw/ivshmem.c
@@ -0,0 +1,561 @@
+/*
+ * Inter-VM Shared Memory PCI device.
+ *
+ * Author:
+ *  Cam Macdonell 
+ *
+ * Based On: cirrus_vga.c and rtl8139.c
+ *
+ * This code is licensed under the GNU GPL v2.
+ */
+
+#include "hw.h"
+#include "console.h"
+#include "pc.h"
+#include "pci.h"
+#include "sysemu.h"
+
+#include "qemu-common.h"
+#include 
+#include 
+
+#define PCI_COMMAND_IOACCESS0x0001
+#define PCI_COMMAND_MEMACCESS   0x0002
+#define PCI_COMMAND_BUSMASTER   0x0004
+
+#define DEBUG_IVSHMEM
+#define MAX_EVENT_FDS 16
+
+#ifdef DEBUG_IVSHMEM
+#define IVSHMEM_DPRINTF(fmt, args...)\
+do {printf("IVSHMEM: " fmt, ##args); } while (0)
+#else
+#define IVSHMEM_DPRINTF(fmt, args...)
+#endif
+
+#define BROADCAST_VAL ((1 << 8) - 1)
+
+typedef struct IVShmemState {
+uint16_t intrmask;
+uint16_t intrstatus;
+uint16_t doorbell;
+
+PCIDevice *pci_dev;
+CharDriverState * chr;
+CharDriverState * eventfd_chr;
+int ivshmem_mmio_io_addr;
+
+uint8_t *ivshmem_ptr;
+unsigned long ivshmem_offset;
+unsigned int ivshmem_size;
+int shm_fd; /* shared memory file descriptor */
+
+int eventfds[16]; /* for now we have a limit of 16 inter-connected guests */
+int eventfd_posn;
+uint16_t eventfd_bitvec;
+int num_eventfds;
+} IVShmemState;
+
+typedef struct PCI_IVShmemState {
+PCIDevice dev;
+IVShmemState ivshmem_state;
+} PCI_IVShmemState;
+
+typedef struct IVShmemDesc {
+char * chrdev;
+int size;
+} IVShmemDesc;
+
+/* registers for the Inter-VM shared memory device */
+enum ivshmem_registers {
+IntrMask = 0,
+IntrStatus = 2,
+Doorbell = 4,
+IVPosition = 6,
+IVLiveList = 8
+};
+
+static int num_ivshmem_devices = 0;
+static IVShmemDesc ivshmem_desc;
+
+static void ivshmem_map(PCIDevice *pci_dev, int region_num,
+pcibus_t addr, pcibus_t size, int type)
+{
+PCI_IVShmemState *d = (PCI_IVShmemState *)pci_dev;
+IVShmemState *s = &d->ivshmem_state;
+
+IVSHMEM_DPRINTF("addr = %u size = %u\n", (uint32_t)addr, (uint32_t)size);
+cpu_register_physical_memory(addr, s->ivshmem_size, s->ivshmem_offset);
+
+}
+
+void ivshmem_init(const char * optarg) {
+
+char * temp;
+char * ivshmem_sz;
+int size;
+
+num_ivshmem_devices++;
+
+/* currently we only support 1 device */
+if (num_ivshmem_devices > MAX_IVSHMEM_DEVICES) {
+return;
+}
+
+temp = strdup(optarg);
+
+ivshmem_sz=strsep(&temp,",");
+
+if (ivshmem_sz != NULL) {
+size = atol(ivshmem_sz);
+} else {
+size = -1;
+}
+
+ivshmem_desc.chrdev = strsep(&temp,"\0");
+
+if ( size == -1) {
+ivshmem_desc.size = TARGET_PAGE_SIZE;
+} else {
+ivshmem_desc.size = size*1024*1024;
+}
+
+}
+
+int ivshmem_get_size(void) {
+return ivshmem_desc.size;
+}
+
+static void broadcast_eventfds(int val, IVShmemState *s)
+{
+
+int dest = val >> 4;
+u_int64_t writelong = val & 0xff;
+
+for (dest = 1; dest < s