Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-08 Thread Avi Kivity

On 10/07/2009 02:48 PM, Gregory Haskins wrote:



If f() can cause another agent to write to p (by freeing
it to a global list, for example), then it is its responsibility to
issue the smp_rmb(), otherwise no calculation that took place before f()
and accessed p is safe.

 

IOW: David is right.  You need a cpu-barrier one way or the other.  We
can either allow ->release() to imply one (and probably document it that
way, like we did for slow-work), or we can be explicit.


No, ->release() must do it or it becomes impossible to program.  And in
fact it will: to place the freed structure on a global list it must take
a lock, which implies an smp_rmb().
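
(To make that concrete, a hypothetical release path; foo_release() and
the pool names below are invented for the sketch, not taken from the
patch:)

#include <linux/list.h>
#include <linux/spinlock.h>

static LIST_HEAD(foo_pool);
static DEFINE_SPINLOCK(foo_pool_lock);

struct foo {
	struct kvm_xinterface intf;
	struct list_head pool_node;
};

/* Sketch: a ->release() that recycles the object onto a global free
 * list.  The lock it must take to touch the shared list is the
 * implied ordering referred to above, so callers of ->release()
 * need no barrier of their own. */
static void foo_release(struct kvm_xinterface *intf)
{
	struct foo *f = container_of(intf, struct foo, intf);

	spin_lock(&foo_pool_lock);
	list_add(&f->pool_node, &foo_pool);
	spin_unlock(&foo_pool_lock);
}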



I chose to be
explicit since it is kind of self-documenting, and there is no need to
be worried about performance since the release is slow-path.
   


It's so self-documenting I had no idea what it was there for.

--
error compiling committee.c: too many arguments to function



Re: [Alacrityvm-devel] [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-07 Thread Avi Kivity

On 10/06/2009 08:18 PM, Ira W. Snyder wrote:


The limitation I have is that memory made available from the host system
(PCI card) as PCI BAR1 must not be migrated around in memory. I can only
change the address decoding to hit a specific physical address. AFAIK,
this means it cannot be userspace memory (since the underlying physical
page could change, or it could be in swap), and must be allocated with
something like __get_free_pages() or dma_alloc_coherent().
   


Expose it as /dev/something (/dev/mem, /sys/.../pci/...) and mmap() it, 
and it becomes non-pageable user memory.


Not sure about dma_alloc_coherent(), that is meaningless on x86.
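
(For illustration, a minimal userspace sketch of the sysfs route; the
device path is made up, and resourceN corresponds to BAR N:)

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	/* Example path only; substitute the real device's sysfs node. */
	const char *path = "/sys/bus/pci/devices/0000:01:00.0/resource1";
	struct stat st;
	void *bar;

	int fd = open(path, O_RDWR | O_SYNC);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}

	/* mmap() of the resource file maps the BAR itself; the pages
	 * are device memory, never paged out or migrated. */
	bar = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
		   MAP_SHARED, fd, 0);
	if (bar == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* ... access the BAR through 'bar' ... */
	munmap(bar, st.st_size);
	close(fd);
	return 0;
}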

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-07 Thread Avi Kivity

On 10/06/2009 09:40 PM, Gregory Haskins wrote:

Thinking about this some more over lunch, I think we (Avi and I) might
both be wrong (and David is right).  Avi is right that we don't need
rmb() or barrier() for the reasons already stated, but I think David is
right that we need an smp_mb() to ensure the cpu doesn't do the
reordering.  Otherwise a different cpu could invalidate the memory if it
reuses the freed memory in the meantime, iiuc.  IOW: it's not a compiler
issue but a cpu issue.

Or am I still confused?

   


The sequence of operations is:

v = p->v;
f();
// rmb() ?
g(v);

You are worried that the compiler or cpu will fetch p->v after f() has 
executed?  The compiler may not, since it can't tell whether f() might 
change p->v.  If f() can cause another agent to write to p (by freeing 
it to a global list, for example), then it is its responsibility to 
issue the smp_rmb(), otherwise no calculation that took place before f() 
and accessed p is safe.


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-07 Thread Gregory Haskins
Avi Kivity wrote:
 On 10/06/2009 09:40 PM, Gregory Haskins wrote:
 Thinking about this some more over lunch, I think we (Avi and I) might
 both be wrong (and David is right).  Avi is right that we don't need
 rmb() or barrier() for the reasons already stated, but I think David is
 right that we need an smp_mb() to ensure the cpu doesn't do the
 reordering.  Otherwise a different cpu could invalidate the memory if it
 reuses the freed memory in the meantime, iiuc.  IOW: it's not a compiler
 issue but a cpu issue.

 Or am I still confused?


 
 The sequence of operations is:
 
 v = p->v;
 f();
 // rmb() ?
 g(v);
 
 You are worried that the compiler

No

 or cpu will fetch p->v after f() has executed?

Yes.

 The compiler may not, since it can't tell whether f() might
 change p->v.

Right, you were correct to say my barrier() suggestion was wrong.

 If f() can cause another agent to write to p (by freeing
 it to a global list, for example), then it is its responsibility to
 issue the smp_rmb(), otherwise no calculation that took place before f()
 and accessed p is safe.
 

IOW: David is right.  You need a cpu-barrier one way or the other.  We
can either allow ->release() to imply one (and probably document it that
way, like we did for slow-work), or we can be explicit.  I chose to be
explicit since it is kind of self-documenting, and there is no need to
be worried about performance since the release is slow-path.

OTOH: If you feel strongly about it, we can take it out, knowing that
most anything that properly invalidates the memory will likely include an
implicit barrier of some kind.

Kind Regards,
-Greg






Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Avi Kivity

On 10/06/2009 01:57 AM, Gregory Haskins wrote:

Avi Kivity wrote:
   

On 10/02/2009 10:19 PM, Gregory Haskins wrote:
 

What: xinterface is a mechanism that allows kernel modules external to
the kvm.ko proper to interface with a running guest.  It accomplishes
this by creating an abstracted interface which does not expose any
private details of the guest or its related KVM structures, and provides
a mechanism to find and bind to this interface at run-time.

   

If this is needed, it should be done as a virt_address_space to which
kvm and other modules bind, instead of as something that kvm exports and
other modules import.  The virt_address_space can be identified by an fd
and passed around to kvm and other modules.
 

IIUC, what you are proposing is something similar to generalizing the
vbus::memctx object.  I had considered doing something like that in the
early design phase of vbus, but then decided it would be a hard-sell to
the mm crowd, and difficult to generalize.

What do you propose as the interface to program the object?
   


Something like the current kvm interfaces, de-warted.  It will be a hard 
sell indeed, for good reasons.



So, under my suggestion above, you'd call
sys_create_virt_address_space(), populate it, and pass the result to kvm
and to foo.  This allows the use of virt_address_space without kvm and
doesn't require foo to interact with kvm.
 

The problem I see here is that the only way I can think to implement
this generally is something that looks very kvm-esque (slots-to-pages
kind of translation).  Is there a way you can think of that does not
involve a kvm.ko originated vtable that is also not kvm centric?
   


slots would be one implementation, if you can think of others then you'd 
add them.


If you can't, I think it indicates that the whole thing isn't necessary 
and we're better off with slots and virtual memory.  The only thing 
missing is dma, which you don't deal with anyway.



+struct kvm_xinterface_ops {
+unsigned long (*copy_to)(struct kvm_xinterface *intf,
+ unsigned long gpa, const void *src,
+ unsigned long len);
+unsigned long (*copy_from)(struct kvm_xinterface *intf, void *dst,
+   unsigned long gpa, unsigned long len);
+struct kvm_xvmap* (*vmap)(struct kvm_xinterface *intf,
+  unsigned long gpa,
+  unsigned long len);

   

How would vmap() work with live migration?
 

vmap represents shmem regions, and is a per-guest-instance resource.  So
my plan there is that the new and old guest instance would each have the
vmap region instated at the same GPA location (assumption: gpas are
stable across migration), and any state relevant data local to the shmem
(like ring head/tail position) is conveyed in the serialized stream for
the device model.
   


You'd have to copy the entire range since you don't know what the guest 
might put there.  I guess it's acceptable for small areas.



+
+static inline void
+_kvm_xinterface_release(struct kref *kref)
+{
+struct kvm_xinterface *intf;
+struct module *owner;
+
+intf = container_of(kref, struct kvm_xinterface, kref);
+
+owner = intf->owner;
+rmb();

   

Why rmb?
 

the intf->ops->release() line may invalidate the intf pointer, so we
want to ensure that the read completes before the release() is called.

TBH: I'm not 100% sure it's needed, but I was being conservative.
   


rmb()s are only needed if an external agent can issue writes, otherwise 
you'd need one after every statement.







A simple per-vcpu cache (in struct kvm_vcpu) is likely to give better
results.
 

per-vcpu will not work well here, unfortunately, since this is an
external interface mechanism.  The callers will generally be from a
kthread or some other non-vcpu related context.  Even if we could figure
out a vcpu to use as a basis, we would require some kind of
heavier-weight synchronization which would not be as desirable.

Therefore, I opted to go per-cpu and use the presumably lighter-weight
get_cpu/put_cpu() instead.
   


This just assumes a low context switch rate.

How about a gfn_to_pfn_cached(..., struct gfn_to_pfn_cache *cache)?  
Each user can place it in a natural place.
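
(Something along these lines, presumably; purely a sketch, in which the
struct, the helper, and the generation lookup are all hypothetical:)

/* Hypothetical per-caller translation cache.  Assumes some
 * generation counter that is bumped whenever the memslots change,
 * so a stale entry is never trusted. */
struct gfn_to_pfn_cache {
	u64 generation;
	gfn_t gfn;
	pfn_t pfn;
};

static pfn_t gfn_to_pfn_cached(struct kvm *kvm, gfn_t gfn,
			       struct gfn_to_pfn_cache *cache)
{
	u64 gen = kvm_memslot_generation(kvm);	/* hypothetical helper */

	if (cache->generation == gen && cache->gfn == gfn)
		return cache->pfn;		/* fast path: cache hit */

	cache->pfn = gfn_to_pfn(kvm, gfn);	/* slow path: miss */
	cache->gfn = gfn;
	cache->generation = gen;
	return cache->pfn;
}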



+static unsigned long
+xinterface_copy_to(struct kvm_xinterface *intf, unsigned long gpa,
+   const void *src, unsigned long n)
+{
+struct _xinterface *_intf = to_intf(intf);
+unsigned long dst;
+bool kthread = !current->mm;
+
+down_read(&_intf->kvm->slots_lock);
+
+dst = gpa_to_hva(_intf, gpa);
+if (!dst)
+goto out;
+
+if (kthread)
+use_mm(_intf->mm);
+
+if (kthread || _intf->mm == current->mm)
+n = copy_to_user((void *)dst, src, n);
+else
+n = _slow_copy_to_user(_intf, dst, src, n);

   

Can't you switch the mm temporarily instead of this?
 

That's actually what I do for the fast-path (use_mm() does a switch_to()
internally).
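
(For reference, a minimal sketch of that kthread fast-path, under the
stated assumption that current->mm == NULL on entry; the wrapper name
is invented:)

#include <linux/mmu_context.h>
#include <linux/uaccess.h>

/* Sketch: a kernel thread temporarily adopts the guest's mm so that
 * copy_to_user() resolves against the guest's address space. */
static unsigned long copy_to_guest_mm(struct mm_struct *mm,
				      void __user *dst,
				      const void *src, unsigned long n)
{
	use_mm(mm);			/* switch to the target mm */
	n = copy_to_user(dst, src, n);
	unuse_mm(mm);			/* restore kthread lazy-mm state */
	return n;			/* bytes not copied, as usual */
}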

The slow-path is only 

Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Gregory Haskins
Avi Kivity wrote:
 On 10/06/2009 01:57 AM, Gregory Haskins wrote:
 Avi Kivity wrote:
   
 On 10/02/2009 10:19 PM, Gregory Haskins wrote:
 
 What: xinterface is a mechanism that allows kernel modules external to
 the kvm.ko proper to interface with a running guest.  It accomplishes
 this by creating an abstracted interface which does not expose any
 private details of the guest or its related KVM structures, and
 provides
 a mechanism to find and bind to this interface at run-time.


 If this is needed, it should be done as a virt_address_space to which
 kvm and other modules bind, instead of as something that kvm exports and
 other modules import.  The virt_address_space can be identified by an fd
 and passed around to kvm and other modules.
  
 IIUC, what you are proposing is something similar to generalizing the
 vbus::memctx object.  I had considered doing something like that in the
 early design phase of vbus, but then decided it would be a hard-sell to
 the mm crowd, and difficult to generalize.

 What do you propose as the interface to program the object?

 
 Something like the current kvm interfaces, de-warted.  It will be a hard
 sell indeed, for good reasons.

I am not convinced generalizing this at this point is the best step
forward, since we only have a KVM client.  Let me put some more thought
into it.


 
 So, under my suggestion above, you'd call
 sys_create_virt_address_space(), populate it, and pass the result to kvm
 and to foo.  This allows the use of virt_address_space without kvm and
 doesn't require foo to interact with kvm.
  
 The problem I see here is that the only way I can think to implement
 this generally is something that looks very kvm-esque (slots-to-pages
 kind of translation).  Is there a way you can think of that does not
 involve a kvm.ko originated vtable that is also not kvm centric?

 
 slots would be one implementation, if you can think of others then you'd
 add them.

I'm more interested in *how* you'd add them more than if we would add
them.  What I am getting at are the logistics of such a beast.

For instance, would I have /dev/slots-vas with ioctls for adding slots,
and /dev/foo-vas for adding foos?  And each one would instantiate a
different vas_struct object with its own vas_struct->ops?  Or were you
thinking of something different?

 
 If you can't, I think it indicates that the whole thing isn't necessary
 and we're better off with slots and virtual memory.

I'm not sure if we are talking about the same thing yet, but if we are,
there are uses of a generalized interface outside of slots/virtual
memory (Ira's physical box being a good example).

In any case, I think the best approach is what I already proposed.
KVM's arrangement of memory is going to tend to be KVM specific, and
what better place to implement the interface than close to the kvm.ko core.

 The only thing missing is dma, which you don't deal with anyway.
 

Afaict I do support dma in the generalized vbus::memctx, though I do not
use it on anything related to KVM or xinterface.  Can you elaborate on
the problem here?  Does the SG interface in 4/4 help get us closer to
what you envision?

 +struct kvm_xinterface_ops {
 +unsigned long (*copy_to)(struct kvm_xinterface *intf,
 + unsigned long gpa, const void *src,
 + unsigned long len);
 +unsigned long (*copy_from)(struct kvm_xinterface *intf, void *dst,
 +   unsigned long gpa, unsigned long len);
 +struct kvm_xvmap* (*vmap)(struct kvm_xinterface *intf,
 +  unsigned long gpa,
 +  unsigned long len);


 How would vmap() work with live migration?
  
 vmap represents shmem regions, and is a per-guest-instance resource.  So
 my plan there is that the new and old guest instance would each have the
 vmap region instated at the same GPA location (assumption: gpas are
 stable across migration), and any state relevant data local to the shmem
 (like ring head/tail position) is conveyed in the serialized stream for
 the device model.

 
 You'd have to copy the entire range since you don't know what the guest
 might put there.  I guess it's acceptable for small areas.

?  The vmap is presumably part of an ABI between guest and host, so the
host should always know what structure is present within the region, and
what is relevant from within that structure to migrate once that state
is frozen.

These regions (for vbus, anyway) equate to things like virtqueue
metadata, and presumably the same problem exists for virtio-net in
userspace as it does here, since that is another form of a vmap.  So
whatever solution works for virtio-net migrating its virtqueues in
userspace should be close to what will work here.  The primary
difference is the location of the serializer.

 
 +
 +static inline void
 +_kvm_xinterface_release(struct kref *kref)
 +{
 +struct kvm_xinterface *intf;
 +struct module *owner;
 +
 +intf = 

Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Gregory Haskins
Gregory Haskins wrote:
 Avi Kivity wrote:
 On 10/06/2009 01:57 AM, Gregory Haskins wrote:
 Avi Kivity wrote:
   
 On 10/02/2009 10:19 PM, Gregory Haskins wrote:

 +
 +static inline void
 +_kvm_xinterface_release(struct kref *kref)
 +{
 +struct kvm_xinterface *intf;
 +struct module *owner;
 +
 +intf = container_of(kref, struct kvm_xinterface, kref);
 +
 +owner = intf->owner;
 +rmb();


 Why rmb?
  
 the intf->ops->release() line may invalidate the intf pointer, so we
 want to ensure that the read completes before the release() is called.

 TBH: I'm not 100% sure it's needed, but I was being conservative.

 rmb()s are only needed if an external agent can issue writes, otherwise
 you'd need one after every statement.
 
 I was following lessons learned here:
 
 http://lkml.org/lkml/2009/7/7/175
 
 Perhaps mb() or barrier() are more appropriate than rmb()?  I'm CC'ing
 David Howells in case he has more insight.

BTW: In case it is not clear, the rationale as I understand it is we
worry about the case where one cpu reorders the read to be after the
->release(), and another cpu might grab the memory that was kfree()'d
within the ->release() and scribble something else on it before the read
completes.

I know rmb() typically needs to be paired with wmb() to be correct, so
you are probably right to say that the rmb() itself is not appropriate.
 This problem in general makes my head hurt, which is why I said I am
not 100% sure of what is required.  As David mentions, perhaps
smp_mb() is more appropriate for this application.  I also speculate
barrier() may be all that we need.

Kind Regards,
-Greg





Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Avi Kivity

On 10/06/2009 03:31 PM, Gregory Haskins wrote:



slots would be one implementation, if you can think of others then you'd
add them.
 

I'm more interested in *how* you'd add them more than if we would add
them.  What I am getting at are the logistics of such a beast.
   


Add alternative ioctls, or have one ioctl with a 'type' field.
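
(A sketch of the single-ioctl variant; every name below is invented
for illustration:)

#include <linux/ioctl.h>
#include <linux/types.h>

/* Illustrative only: one "add region" ioctl whose 'type' field
 * selects the backing implementation. */
enum vas_region_type {
	VAS_REGION_SLOT,	/* kvm-style, userspace-virtual backed */
	VAS_REGION_FOO,		/* some other backing */
};

struct vas_region {
	__u32 type;		/* enum vas_region_type */
	__u32 flags;
	__u64 guest_addr;	/* where it appears in the guest */
	__u64 size;
	__u64 backing;		/* type-specific: hva, fd, ... */
};

#define VAS_ADD_REGION	_IOW('V', 0x00, struct vas_region)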


For instance, would I have /dev/slots-vas with ioctls for adding slots,
and /dev/foo-vas for adding foos?  And each one would instantiate a
different vas_struct object with its own vas_struct->ops?  Or were you
thinking of something different?
   


I think a single /dev/foo is sufficient, unless some of those address 
spaces are backed by real devices.



If you can't, I think it indicates that the whole thing isn't necessary
and we're better off with slots and virtual memory.
 

I'm not sure if we are talking about the same thing yet, but if we are,
there are uses of a generalized interface outside of slots/virtual
memory (Ira's physical box being a good example).
   


I'm not sure Ira's case is not best supported by virtual memory.


In any case, I think the best approach is what I already proposed.
KVM's arrangement of memory is going to tend to be KVM specific, and
what better place to implement the interface than close to the kvm.ko core.

   

The only thing missing is dma, which you don't deal with anyway.

 

Afaict I do support dma in the generalized vbus::memctx, though I do not
use it on anything related to KVM or xinterface.  Can you elaborate on
the problem here?  Does the SG interface in 4/4 help get us closer to
what you envision?
   


There's no dma method in there.  Mapping to physical addresses is 
equivalent to get_user_pages(), so it doesn't add anything over virtual 
memory slots.



You'd have to copy the entire range since you don't know what the guest
might put there.  I guess it's acceptable for small areas.
 

?  The vmap is presumably part of an ABI between guest and host, so the
host should always know what structure is present within the region, and
what is relevant from within that structure to migrate once that state
is frozen.
   


I was thinking of the case where the page is shared with some other 
(guest-private) data.  But dirtying that data would be tracked by kvm, 
so it isn't a problem.



These regions (for vbus, anyway) equate to things like virtqueue
metadata, and presumably the same problem exists for virtio-net in
userspace as it does here, since that is another form of a vmap.  So
whatever solution works for virtio-net migrating its virtqueues in
userspace should be close to what will work here.  The primary
difference is the location of the serializer.
   


Actually, virtio doesn't serialize this data, instead it marks the page 
dirty and lets normal RAM migration move it.




rmb()s are only needed if an external agent can issue writes, otherwise
you'd need one after every statement.
 

I was following lessons learned here:

http://lkml.org/lkml/2009/7/7/175

Perhaps mb() or barrier() are more appropriate than rmb()?  I'm CC'ing
David Howells in case he has more insight.
   


I'm not sure I understand the reference.  Callers and callees don't need 
memory barriers since they're guaranteed program order.  You only need 
memory barriers when you have an external agent (a device, or another 
cpu).  What external agent can touch things during ->release()?



A simple per-vcpu cache (in struct kvm_vcpu) is likely to give better
results.

 

per-vcpu will not work well here, unfortunately, since this is an
external interface mechanism.  The callers will generally be from a
kthread or some other non-vcpu related context.  Even if we could figure
out a vcpu to use as a basis, we would require some kind of
heavier-weight synchronization which would not be as desirable.

Therefore, I opted to go per-cpu and use the presumably lighter-weight
get_cpu/put_cpu() instead.

   

This just assumes a low context switch rate.
 

It primarily assumes a low _migration_ rate, since you do not typically
have two contexts on the same cpu pounding on the memslots.


Why not?  If you're overcommitted, you will, especially if you have 
multiple guests doing request/reply interaction (perhaps with each other).



And even if
you did, there's a good chance for locality between the threads, since
the IO activity is likely related.  For the odd times where locality
fails to yield a hit, the time-slice or migration rate should be
sufficiently infrequent enough to still yield high 90s hit rates for
when it matters.  For low-volume, the lookup is in the noise so I am not
as concerned.

IOW: Where the lookup hurts the most is trying to walk an SG list, since
each pointer is usually within the same slot.  For this case, at least,
this cache helps immensely, at least according to profiles.
   


I'm wary of adding per-cpu data where it's easier to have a per-caller 
or per-vcpu cache.



How about a gfn_to_pfn_cached(..., 

Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Avi Kivity

On 10/06/2009 04:22 PM, Gregory Haskins wrote:

+
+static inline void
+_kvm_xinterface_release(struct kref *kref)
+{
+struct kvm_xinterface *intf;
+struct module *owner;
+
+intf = container_of(kref, struct kvm_xinterface, kref);
+
+owner = intf->owner;
+rmb();


 

Why rmb?

   

the intf->ops->release() line may invalidate the intf pointer, so we
want to ensure that the read completes before the release() is called.

TBH: I'm not 100% sure it's needed, but I was being conservative.

 

rmb()s are only needed if an external agent can issue writes, otherwise
you'd need one after every statement.
   

I was following lessons learned here:

http://lkml.org/lkml/2009/7/7/175

Perhaps mb() or barrier() are more appropriate than rmb()?  I'm CC'ing
David Howells in case he has more insight.
 

BTW: In case it is not clear, the rationale as I understand it is we
worry about the case where one cpu reorders the read to be after the
->release(), and another cpu might grab the memory that was kfree()'d
within the ->release() and scribble something else on it before the read
completes.

I know rmb() typically needs to be paired with wmb() to be correct, so
you are probably right to say that the rmb() itself is not appropriate.
  This problem in general makes my head hurt, which is why I said I am
not 100% sure of what is required.  As David mentions, perhaps
smp_mb() is more appropriate for this application.  I also speculate
barrier() may be all that we need.
   


barrier() is the operation for a compiler barrier.  But it's unneeded 
here - unless the compiler can prove that ->release(intf) will not 
modify intf->owner it is not allowed to move the access afterwards.  An 
indirect function call is generally a barrier() since the compiler can't 
assume memory has not been modified.

--
error compiling committee.c: too many arguments to function



Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Gregory Haskins
Avi Kivity wrote:
 On 10/06/2009 03:31 PM, Gregory Haskins wrote:

 slots would be one implementation, if you can think of others then you'd
 add them.
  
 I'm more interested in *how* you'd add them more than if we would add
 them.  What I am getting at are the logistics of such a beast.

 
 Add alternative ioctls, or have one ioctl with a 'type' field.
 
 For instance, would I have /dev/slots-vas with ioctls for adding slots,
 and /dev/foo-vas for adding foos?  And each one would instantiate a
 different vas_struct object with its own vas_struct->ops?  Or were you
 thinking of something different?

 
 I think a single /dev/foo is sufficient, unless some of those address
 spaces are backed by real devices.
 
 If you can't, I think it indicates that the whole thing isn't necessary
 and we're better off with slots and virtual memory.
  
 I'm not sure if we are talking about the same thing yet, but if we are,
 there are uses of a generalized interface outside of slots/virtual
 memory (Ira's physical box being a good example).

 
 I'm not sure Ira's case is not best supported by virtual memory.

Perhaps, but there are surely some cases where the memory is not
pageable, but is accessible indirectly through some DMA controller.  So
if we require it to be pageable we will limit the utility of the
interface, though admittedly it will probably cover most cases.

 
 In any case, I think the best approach is what I already proposed.
 KVM's arrangement of memory is going to tend to be KVM specific, and
 what better place to implement the interface than close to the kvm.ko
 core.

   
 The only thing missing is dma, which you don't deal with anyway.

  
 Afaict I do support dma in the generalized vbus::memctx, though I do not
 use it on anything related to KVM or xinterface.  Can you elaborate on
 the problem here?  Does the SG interface in 4/4 help get us closer to
 what you envision?

 
 There's no dma method in there.  Mapping to physical addresses is
 equivalent to get_user_pages(), so it doesn't add anything over virtual
 memory slots.

I am not following you at all.  What kind of interface do we need for
DMA?  Since the KVM guest is pageable, the support for DMA would come
from building a scatterlist (patch 4/4) and passing the resulting pages
out using standard sg mechanisms, like sg_dma_address().  This is what I
do today to support zero-copy in AlacrityVM.

What more do we need?
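
(Roughly, under the assumption that the pages were already pinned,
e.g. via the patch 4/4 interface or get_user_pages(); the function
name here is invented:)

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Sketch: turn a set of pinned guest pages into a DMA-able SG list.
 * 'dev' and 'pages' are assumed to come from the caller. */
static int map_guest_pages_for_dma(struct device *dev,
				   struct page **pages, int npages,
				   struct scatterlist *sgl)
{
	int i, count;

	sg_init_table(sgl, npages);
	for (i = 0; i < npages; i++)
		sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

	count = dma_map_sg(dev, sgl, npages, DMA_TO_DEVICE);
	if (!count)
		return -EIO;

	/* The device can now be programmed with sg_dma_address(&sgl[i])
	 * and sg_dma_len(&sgl[i]) for each mapped entry. */
	return count;
}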

 
 You'd have to copy the entire range since you don't know what the guest
 might put there.  I guess it's acceptable for small areas.
  
 ?  The vmap is presumably part of an ABI between guest and host, so the
 host should always know what structure is present within the region, and
 what is relevant from within that structure to migrate once that state
 is frozen.

 
 I was thinking of the case where the page is shared with some other
 (guest-private) data.  But dirtying that data would be tracked by kvm,
 so it isn't a problem.

Ok.

 
 These regions (for vbus, anyway) equate to things like virtqueue
 metadata, and presumably the same problem exists for virtio-net in
 userspace as it does here, since that is another form of a vmap.  So
 whatever solution works for virtio-net migrating its virtqueues in
 userspace should be close to what will work here.  The primary
 difference is the location of the serializer.

 
 Actually, virtio doesn't serialize this data, instead it marks the page
 dirty and lets normal RAM migration move it.

Ok, so its effectively serializing the entire region indirectly.  That
works too.

 
 
 rmb()s are only needed if an external agent can issue writes, otherwise
 you'd need one after every statement.
  
 I was following lessons learned here:

 http://lkml.org/lkml/2009/7/7/175

 Perhaps mb() or barrier() are more appropriate than rmb()?  I'm CC'ing
 David Howells in case he has more insight.

 
 I'm not sure I understand the reference.  Callers and callees don't need
 memory barriers since they're guaranteed program order.  You only need
 memory barriers when you have an external agent (a device, or another
 cpu).  What external agent can touch things during ->release()?

We already have a different subthread started on this, so I'll pick this
up there.

 
 A simple per-vcpu cache (in struct kvm_vcpu) is likely to give better
 results.

  
 per-vcpu will not work well here, unfortunately, since this is an
 external interface mechanism.  The callers will generally be from a
 kthread or some other non-vcpu related context.  Even if we could
 figure
 out a vcpu to use as a basis, we would require some kind of
 heavier-weight synchronization which would not be as desirable.

 Therefore, I opted to go per-cpu and use the presumably lighter-weight
 get_cpu/put_cpu() instead.


 This just assumes a low context switch rate.
  
 It primarily assumes a low _migration_ rate, since you do not typically
 have two contexts on the same cpu pounding on the 

Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Gregory Haskins
Avi Kivity wrote:
 On 10/06/2009 04:22 PM, Gregory Haskins wrote:
 +
 +static inline void
 +_kvm_xinterface_release(struct kref *kref)
 +{
 +struct kvm_xinterface *intf;
 +struct module *owner;
 +
 +intf = container_of(kref, struct kvm_xinterface, kref);
 +
 +owner = intf->owner;
 +rmb();


  
 Why rmb?


 the intf->ops->release() line may invalidate the intf pointer, so we
 want to ensure that the read completes before the release() is called.

 TBH: I'm not 100% sure it's needed, but I was being conservative.

  
 rmb()s are only needed if an external agent can issue writes, otherwise
 you'd need one after every statement.

 I was following lessons learned here:

 http://lkml.org/lkml/2009/7/7/175

 Perhaps mb() or barrier() are more appropriate than rmb()?  I'm CC'ing
 David Howells in case he has more insight.
  
 BTW: In case it is not clear, the rationale as I understand it is we
 worry about the case where one cpu reorders the read to be after the
 ->release(), and another cpu might grab the memory that was kfree()'d
 within the ->release() and scribble something else on it before the read
 completes.

 I know rmb() typically needs to be paired with wmb() to be correct, so
 you are probably right to say that the rmb() itself is not appropriate.
   This problem in general makes my head hurt, which is why I said I am
 not 100% sure of what is required.  As David mentions, perhaps
 smp_mb() is more appropriate for this application.  I also speculate
 barrier() may be all that we need.

 
 barrier() is the operation for a compiler barrier.  But it's unneeded
 here - unless the compiler can prove that ->release(intf) will not
 modify intf->owner it is not allowed to move the access afterwards.  An
 indirect function call is generally a barrier() since the compiler can't
 assume memory has not been modified.
 

You're logic seems reasonable to me.  I will defer to David, since he
brought up the issue with the similar logic originally.

Kind Regards,
-Greg





Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Gregory Haskins
Gregory Haskins wrote:
 Avi Kivity wrote:
 On 10/06/2009 04:22 PM, Gregory Haskins wrote:
 +
 +static inline void
 +_kvm_xinterface_release(struct kref *kref)
 +{
 +struct kvm_xinterface *intf;
 +struct module *owner;
 +
 +intf = container_of(kref, struct kvm_xinterface, kref);
 +
 +owner = intf->owner;
 +rmb();


  
 Why rmb?


 the intf->ops->release() line may invalidate the intf pointer, so we
 want to ensure that the read completes before the release() is called.

 TBH: I'm not 100% sure it's needed, but I was being conservative.

  
 rmb()s are only needed if an external agent can issue writes, otherwise
 you'd need one after every statement.

 I was following lessons learned here:

 http://lkml.org/lkml/2009/7/7/175

 Perhaps mb() or barrier() are more appropriate than rmb()?  I'm CC'ing
 David Howells in case he has more insight.
  
 BTW: In case it is not clear, the rationale as I understand it is we
 worry about the case where one cpu reorders the read to be after the
 ->release(), and another cpu might grab the memory that was kfree()'d
 within the ->release() and scribble something else on it before the read
 completes.

 I know rmb() typically needs to be paired with wmb() to be correct, so
 you are probably right to say that the rmb() itself is not appropriate.
   This problem in general makes my head hurt, which is why I said I am
 not 100% sure of what is required.  As David mentions, perhaps
 smp_mb() is more appropriate for this application.  I also speculate
 barrier() may be all that we need.

 barrier() is the operation for a compiler barrier.  But it's unneeded
 here - unless the compiler can prove that ->release(intf) will not
 modify intf->owner it is not allowed to move the access afterwards.  An
 indirect function call is generally a barrier() since the compiler can't
 assume memory has not been modified.

 
 You're logic

gak.  or your logic even.

 seems reasonable to me.  I will defer to David, since he
 brought up the issue with the similar logic originally.
 
 Kind Regards,
 -Greg
 






Re: [Alacrityvm-devel] [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Ira W. Snyder
On Tue, Oct 06, 2009 at 12:58:06PM -0400, Gregory Haskins wrote:
 Avi Kivity wrote:
  On 10/06/2009 03:31 PM, Gregory Haskins wrote:
 
  slots would be one implementation, if you can think of others then you'd
  add them.
   
  I'm more interested in *how* you'd add them more than if we would add
  them.  What I am getting at are the logistics of such a beast.
 
  
  Add alternative ioctls, or have one ioctl with a 'type' field.
  
  For instance, would I have /dev/slots-vas with ioctls for adding slots,
  and /dev/foo-vas for adding foos?  And each one would instantiate a
  different vas_struct object with its own vas_struct->ops?  Or were you
  thinking of something different?
 
  
  I think a single /dev/foo is sufficient, unless some of those address
  spaces are backed by real devices.
  
  If you can't, I think it indicates that the whole thing isn't necessary
  and we're better off with slots and virtual memory.
   
  I'm not sure if we are talking about the same thing yet, but if we are,
  there are uses of a generalized interface outside of slots/virtual
  memory (Ira's physical box being a good example).
 
  
  I'm not sure Ira's case is not best supported by virtual memory.
 
 Perhaps, but there are surely some cases where the memory is not
 pageable, but is accessible indirectly through some DMA controller.  So
 if we require it to be pageable we will limit the utility of the
 interface, though admittedly it will probably cover most cases.
 

The limitation I have is that memory made available from the host system
(PCI card) as PCI BAR1 must not be migrated around in memory. I can only
change the address decoding to hit a specific physical address. AFAIK,
this means it cannot be userspace memory (since the underlying physical
page could change, or it could be in swap), and must be allocated with
something like __get_free_pages() or dma_alloc_coherent().

This is how all 83xx powerpc boards work, and I'd bet that the 85xx and
86xx boards work almost exactly the same. I can't say anything about
non-powerpc boards.

Ira


Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Gregory Haskins
Gregory Haskins wrote:
 Gregory Haskins wrote:
 Avi Kivity wrote:
 On 10/06/2009 04:22 PM, Gregory Haskins wrote:
 +
 +static inline void
 +_kvm_xinterface_release(struct kref *kref)
 +{
 +struct kvm_xinterface *intf;
 +struct module *owner;
 +
 +intf = container_of(kref, struct kvm_xinterface, kref);
 +
 +owner = intf->owner;
 +rmb();


  
 Why rmb?


 the intf->ops->release() line may invalidate the intf pointer, so we
 want to ensure that the read completes before the release() is called.

 TBH: I'm not 100% sure it's needed, but I was being conservative.

  
 rmb()s are only needed if an external agent can issue writes, otherwise
 you'd need one after every statement.

 I was following lessons learned here:

 http://lkml.org/lkml/2009/7/7/175

 Perhaps mb() or barrier() are more appropriate than rmb()?  I'm CC'ing
 David Howells in case he has more insight.
  
 BTW: In case it is not clear, the rationale as I understand it is we
 worry about the case where one cpu reorders the read to be after the
 ->release(), and another cpu might grab the memory that was kfree()'d
 within the ->release() and scribble something else on it before the read
 completes.

 I know rmb() typically needs to be paired with wmb() to be correct, so
 you are probably right to say that the rmb() itself is not appropriate.
   This problem in general makes my head hurt, which is why I said I am
 not 100% sure of what is required.  As David mentions, perhaps
 smp_mb() is more appropriate for this application.  I also speculate
 barrier() may be all that we need.

 barrier() is the operation for a compiler barrier.  But it's unneeded
 here - unless the compiler can prove that ->release(intf) will not
 modify intf->owner it is not allowed to move the access afterwards.  An
 indirect function call is generally a barrier() since the compiler can't
 assume memory has not been modified.

 You're logic
 
 gak.  or your logic even.
 
 seems reasonable to me.  I will defer to David, since he
 brought up the issue with the similar logic originally.

 Kind Regards,
 -Greg

 
 

Thinking about this some more over lunch, I think we (Avi and I) might
both be wrong (and David is right).  Avi is right that we don't need
rmb() or barrier() for the reasons already stated, but I think David is
right that we need an smp_mb() to ensure the cpu doesn't do the
reordering.  Otherwise a different cpu could invalidate the memory if it
reuses the freed memory in the meantime, iiuc.  IOW: it's not a compiler
issue but a cpu issue.

Or am I still confused?

-Greg





Re: [Alacrityvm-devel] [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-06 Thread Amit Shah
On (Tue) Oct 06 2009 [11:18:59], Ira W. Snyder wrote:
 
 The limitation I have is that memory made available from the host system
 (PCI card) as PCI BAR1 must not be migrated around in memory. I can only
 change the address decoding to hit a specific physical address. AFAIK,
 this means it cannot be userspace memory (since the underlying physical
 page could change, or it could be in swap), and must be allocated with
 something like __get_free_pages() or dma_alloc_coherent().

If the dma area is just one page in size, that page can be pinned by the
host (via a hypercall by the guest). If it's more, there is a
reserve-ram patch by Andrea. That'll reserve some RAM area for the guest
that's 1-1 mapped in the host address space. There was a patch posted on
the list sometime back.

Amit


Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-05 Thread Gregory Haskins
Hi Marcelo!

Marcelo Tosatti wrote:
 On Fri, Oct 02, 2009 at 04:19:27PM -0400, Gregory Haskins wrote:
 What: xinterface is a mechanism that allows kernel modules external to
 the kvm.ko proper to interface with a running guest.  It accomplishes
 this by creating an abstracted interface which does not expose any
 private details of the guest or its related KVM structures, and provides
 a mechanism to find and bind to this interface at run-time.

 Why: There are various subsystems that would like to interact with a KVM
 guest which are ideally suited to exist outside the domain of the kvm.ko
 core logic. For instance, external pci-passthrough, virtual-bus, and
 virtio-net modules are currently under development.  In order for these
 modules to successfully interact with the guest, they need, at the very
 least, various interfaces for signaling IO events, pointer translation,
 and possibly memory mapping.

 The signaling case is covered by the recent introduction of the
 irqfd/ioeventfd mechanisms.  This patch provides a mechanism to cover the
 other cases.  Note that today we only expose pointer-translation related
 functions, but more could be added at a future date as needs arise.

 Example usage: QEMU instantiates a guest, and an external module foo
 that desires the ability to interface with the guest (say via
 open("/dev/foo")).  QEMU may then pass the kvmfd to foo via an
 ioctl, such as: ioctl(foofd, FOO_SET_VMID, kvmfd).  Upon receipt, the
 foo module can issue kvm_xinterface_bind(kvmfd) to acquire
 the proper context.  Internally, the struct kvm* and associated
 struct module* will remain pinned at least until the foo module calls
 kvm_xinterface_put().
 
 --- /dev/null
 +++ b/virt/kvm/xinterface.c
 @@ -0,0 +1,409 @@
 +/*
 + * KVM module interface - Allows external modules to interface with a guest
 + *
 + * Copyright 2009 Novell.  All Rights Reserved.
 + *
 + * Author:
 + *  Gregory Haskins ghask...@novell.com
 + *
 + * This file is free software; you can redistribute it and/or modify
 + * it under the terms of version 2 of the GNU General Public License
 + * as published by the Free Software Foundation.
 + *
 + * This program is distributed in the hope that it will be useful,
 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 + * GNU General Public License for more details.
 + *
 + * You should have received a copy of the GNU General Public License
 + * along with this program; if not, write to the Free Software Foundation,
 + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 + */
 +
 +#include <linux/mm.h>
 +#include <linux/vmalloc.h>
 +#include <linux/highmem.h>
 +#include <linux/module.h>
 +#include <linux/mmu_context.h>
 +#include <linux/kvm_host.h>
 +#include <linux/kvm_xinterface.h>
 +
 +struct _xinterface {
 +struct kvm *kvm;
 +struct task_struct *task;
 +struct mm_struct   *mm;
 +struct kvm_xinterface   intf;
 +struct kvm_memory_slot *slotcache[NR_CPUS];
 +};
 +
 +struct _xvmap {
 +struct kvm_memory_slot*memslot;
 +unsigned long  npages;
 +struct kvm_xvmap   vmap;
 +};
 +
 +static struct _xinterface *
 +to_intf(struct kvm_xinterface *intf)
 +{
 +return container_of(intf, struct _xinterface, intf);
 +}
 +
 +#define _gfn_to_hva(gfn, memslot) \
 +(memslot->userspace_addr + (gfn - memslot->base_gfn) * PAGE_SIZE)
 +
 +/*
 + * gpa_to_hva() - translate a guest-physical to host-virtual using
 + * a per-cpu cache of the memslot.
 + *
 + * The gfn_to_memslot() call is relatively expensive, and the gpa access
 + * patterns exhibit a high degree of locality.  Therefore, lets cache
 + * the last slot used on a per-cpu basis to optimize the lookup
 + *
 + * assumes slots_lock held for read
 + */
 +static unsigned long
 +gpa_to_hva(struct _xinterface *_intf, unsigned long gpa)
 +{
 +int cpu = get_cpu();
 +unsigned long   gfn = gpa >> PAGE_SHIFT;
 +struct kvm_memory_slot *memslot = _intf->slotcache[cpu];
 +unsigned long   addr= 0;
 +
 +if (!memslot
 +|| gfn < memslot->base_gfn
 +|| gfn >= memslot->base_gfn + memslot->npages) {
 +
 +memslot = gfn_to_memslot(_intf->kvm, gfn);
 +if (!memslot)
 +goto out;
 +
 +_intf->slotcache[cpu] = memslot;
 +}
 +
 +addr = _gfn_to_hva(gfn, memslot) + offset_in_page(gpa);
 +
 +out:
 +put_cpu();
 +
 +return addr;
 
 Please optimize gfn_to_memslot() instead, so everybody benefits. It
 shows very often on profiles.

Yeah, it's not a bad idea.  The reason why I did it here is because the
requirements for sync (kvm-vcpu) vs async (xinterface) access is
slightly different.  Sync is probably optimal with per-vcpu caching,
whereas async is optimal with per-cpu.

That said, we could probably build the entire algorithm to be per-cpu as
a compromise and still gain benefits.  

Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-05 Thread Gregory Haskins
Avi Kivity wrote:
 On 10/02/2009 10:19 PM, Gregory Haskins wrote:
 What: xinterface is a mechanism that allows kernel modules external to
 the kvm.ko proper to interface with a running guest.  It accomplishes
 this by creating an abstracted interface which does not expose any
 private details of the guest or its related KVM structures, and provides
 a mechanism to find and bind to this interface at run-time.

 
 If this is needed, it should be done as a virt_address_space to which
 kvm and other modules bind, instead of as something that kvm exports and
 other modules import.  The virt_address_space can be identified by an fd
 and passed around to kvm and other modules.

IIUC, what you are proposing is something similar to generalizing the
vbus::memctx object.  I had considered doing something like that in the
early design phase of vbus, but then decided it would be a hard-sell to
the mm crowd, and difficult to generalize.

What do you propose as the interface to program the object?

 
 Why: There are various subsystems that would like to interact with a KVM
 guest which are ideally suited to exist outside the domain of the kvm.ko
 core logic. For instance, external pci-passthrough, virtual-bus, and
 virtio-net modules are currently under development.  In order for these
 modules to successfully interact with the guest, they need, at the very
 least, various interfaces for signaling IO events, pointer translation,
 and possibly memory mapping.

 The signaling case is covered by the recent introduction of the
 irqfd/ioeventfd mechanisms.  This patch provides a mechanism to cover the
 other cases.  Note that today we only expose pointer-translation related
 functions, but more could be added at a future date as needs arise.

 Example usage: QEMU instantiates a guest, and an external module foo
 that desires the ability to interface with the guest (say via
open("/dev/foo")).  QEMU may then pass the kvmfd to foo via an
ioctl, such as: ioctl(foofd, FOO_SET_VMID, kvmfd).  Upon receipt, the
 foo module can issue kvm_xinterface_bind(kvmfd) to acquire
 the proper context.  Internally, the struct kvm* and associated
 struct module* will remain pinned at least until the foo module calls
 kvm_xinterface_put().


 
 So, under my suggestion above, you'd call
 sys_create_virt_address_space(), populate it, and pass the result to kvm
 and to foo.  This allows the use of virt_address_space without kvm and
 doesn't require foo to interact with kvm.

The problem I see here is that the only way I can think to implement
this generally is something that looks very kvm-esque (slots-to-pages
kind of translation).  Is there a way you can think of that does not
involve a kvm.ko originated vtable that is also not kvm centric?

 
 +struct kvm_xinterface_ops {
 +unsigned long (*copy_to)(struct kvm_xinterface *intf,
 + unsigned long gpa, const void *src,
 + unsigned long len);
 +unsigned long (*copy_from)(struct kvm_xinterface *intf, void *dst,
 +   unsigned long gpa, unsigned long len);
 +struct kvm_xvmap* (*vmap)(struct kvm_xinterface *intf,
 +  unsigned long gpa,
 +  unsigned long len);

 
 How would vmap() work with live migration?

vmap represents shmem regions, and is a per-guest-instance resource.  So
my plan there is that the new and old guest instance would each have the
vmap region instated at the same GPA location (assumption: gpas are
stable across migration), and any state relevant data local to the shmem
(like ring head/tail position) is conveyed in the serialized stream for
the device model.

 
 +
 +static inline void
 +_kvm_xinterface_release(struct kref *kref)
 +{
 +struct kvm_xinterface *intf;
 +struct module *owner;
 +
 +intf = container_of(kref, struct kvm_xinterface, kref);
 +
 +owner = intf->owner;
 +rmb();

 
 Why rmb?

the intf->ops->release() line may invalidate the intf pointer, so we
want to ensure that the read completes before the release() is called.

TBH: I'm not 100% sure it's needed, but I was being conservative.

 
 +
 +intf->ops->release(intf);
 +module_put(owner);
 +}
 +

 +
 +/*
 + * gpa_to_hva() - translate a guest-physical to host-virtual using
 + * a per-cpu cache of the memslot.
 + *
 + * The gfn_to_memslot() call is relatively expensive, and the gpa access
 + * patterns exhibit a high degree of locality.  Therefore, lets cache
 + * the last slot used on a per-cpu basis to optimize the lookup
 + *
 + * assumes slots_lock held for read
 + */
 +static unsigned long
 +gpa_to_hva(struct _xinterface *_intf, unsigned long gpa)
 +{
 +int cpu = get_cpu();
 +unsigned long   gfn = gpa >> PAGE_SHIFT;
 +struct kvm_memory_slot *memslot = _intf->slotcache[cpu];
 +unsigned long   addr= 0;
 +
 +if (!memslot
 +|| gfn < memslot->base_gfn
 +|| gfn >= memslot->base_gfn + memslot->npages) {
 +
 +memslot = 

Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-04 Thread Avi Kivity

On 10/02/2009 10:19 PM, Gregory Haskins wrote:

What: xinterface is a mechanism that allows kernel modules external to
the kvm.ko proper to interface with a running guest.  It accomplishes
this by creating an abstracted interface which does not expose any
private details of the guest or its related KVM structures, and provides
a mechanism to find and bind to this interface at run-time.
   


If this is needed, it should be done as a virt_address_space to which 
kvm and other modules bind, instead of as something that kvm exports and 
other modules import.  The virt_address_space can be identified by an fd 
and passed around to kvm and other modules.



Why: There are various subsystems that would like to interact with a KVM
guest which are ideally suited to exist outside the domain of the kvm.ko
core logic. For instance, external pci-passthrough, virtual-bus, and
virtio-net modules are currently under development.  In order for these
modules to successfully interact with the guest, they need, at the very
least, various interfaces for signaling IO events, pointer translation,
and possibly memory mapping.

The signaling case is covered by the recent introduction of the
irqfd/ioeventfd mechanisms.  This patch provides a mechanism to cover the
other cases.  Note that today we only expose pointer-translation related
functions, but more could be added at a future date as needs arise.

Example usage: QEMU instantiates a guest, and an external module foo
that desires the ability to interface with the guest (say via
open("/dev/foo")).  QEMU may then pass the kvmfd to foo via an
ioctl, such as: ioctl(foofd, FOO_SET_VMID, kvmfd).  Upon receipt, the
foo module can issue kvm_xinterface_bind(kvmfd) to acquire
the proper context.  Internally, the struct kvm* and associated
struct module* will remain pinned at least until the foo module calls
kvm_xinterface_put().

   


So, under my suggestion above, you'd call 
sys_create_virt_address_space(), populate it, and pass the result to kvm 
and to foo.  This allows the use of virt_address_space without kvm and 
doesn't require foo to interact with kvm.
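
(Purely an illustrative sketch of that flow; the syscall and every
ioctl name below are hypothetical, matching the proposal rather than
any existing API:)

/* Hypothetical userspace flow: create an address space object,
 * populate it, then hand the same fd to kvm and to the external
 * "foo" module. */
int vasfd = create_virt_address_space();	/* proposed syscall */

struct vas_region region = { /* type, guest_addr, size, ... */ };
ioctl(vasfd, VAS_ADD_REGION, &region);		/* populate it */

ioctl(kvmfd, KVM_SET_VAS, vasfd);		/* bind kvm to it */
ioctl(foofd, FOO_SET_VAS, vasfd);		/* bind foo to it */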



+struct kvm_xinterface_ops {
+   unsigned long (*copy_to)(struct kvm_xinterface *intf,
+unsigned long gpa, const void *src,
+unsigned long len);
+   unsigned long (*copy_from)(struct kvm_xinterface *intf, void *dst,
+  unsigned long gpa, unsigned long len);
+   struct kvm_xvmap* (*vmap)(struct kvm_xinterface *intf,
+ unsigned long gpa,
+ unsigned long len);
   


How would vmap() work with live migration?


+
+static inline void
+_kvm_xinterface_release(struct kref *kref)
+{
+   struct kvm_xinterface *intf;
+   struct module *owner;
+
+   intf = container_of(kref, struct kvm_xinterface, kref);
+
+   owner = intf->owner;
+   rmb();
   


Why rmb?


+
+   intf->ops->release(intf);
+   module_put(owner);
+}
+

+
+/*
+ * gpa_to_hva() - translate a guest-physical to host-virtual using
+ * a per-cpu cache of the memslot.
+ *
+ * The gfn_to_memslot() call is relatively expensive, and the gpa access
+ * patterns exhibit a high degree of locality.  Therefore, lets cache
+ * the last slot used on a per-cpu basis to optimize the lookup
+ *
+ * assumes slots_lock held for read
+ */
+static unsigned long
+gpa_to_hva(struct _xinterface *_intf, unsigned long gpa)
+{
+   int cpu = get_cpu();
+   unsigned long   gfn = gpa >> PAGE_SHIFT;
+   struct kvm_memory_slot *memslot = _intf->slotcache[cpu];
+   unsigned long   addr= 0;
+
+   if (!memslot
+   || gfn < memslot->base_gfn
+   || gfn >= memslot->base_gfn + memslot->npages) {
+
+   memslot = gfn_to_memslot(_intf->kvm, gfn);
+   if (!memslot)
+   goto out;
+
+   _intf->slotcache[cpu] = memslot;
+   }
+
+   addr = _gfn_to_hva(gfn, memslot) + offset_in_page(gpa);
+
+out:
+   put_cpu();
+
+   return addr;
+}
   



A simple per-vcpu cache (in struct kvm_vcpu) is likely to give better 
results.



+static unsigned long
+xinterface_copy_to(struct kvm_xinterface *intf, unsigned long gpa,
+  const void *src, unsigned long n)
+{
+   struct _xinterface *_intf = to_intf(intf);
+   unsigned long dst;
+   bool kthread = !current->mm;
+
+   down_read(&_intf->kvm->slots_lock);
+
+   dst = gpa_to_hva(_intf, gpa);
+   if (!dst)
+   goto out;
+
+   if (kthread)
+   use_mm(_intf->mm);
+
+   if (kthread || _intf->mm == current->mm)
+   n = copy_to_user((void *)dst, src, n);
+   else
+   n = _slow_copy_to_user(_intf, dst, src, n);
   


Can't you switch the mm temporarily instead of this?


--
error compiling committee.c: too many arguments to function


Re: [PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-03 Thread Marcelo Tosatti
On Fri, Oct 02, 2009 at 04:19:27PM -0400, Gregory Haskins wrote:
 What: xinterface is a mechanism that allows kernel modules external to
 the kvm.ko proper to interface with a running guest.  It accomplishes
 this by creating an abstracted interface which does not expose any
 private details of the guest or its related KVM structures, and provides
 a mechanism to find and bind to this interface at run-time.
 
 Why: There are various subsystems that would like to interact with a KVM
 guest which are ideally suited to exist outside the domain of the kvm.ko
 core logic. For instance, external pci-passthrough, virtual-bus, and
 virtio-net modules are currently under development.  In order for these
 modules to successfully interact with the guest, they need, at the very
 least, various interfaces for signaling IO events, pointer translation,
 and possibly memory mapping.
 
 The signaling case is covered by the recent introduction of the
 irqfd/ioeventfd mechanisms.  This patch provides a mechanism to cover the
 other cases.  Note that today we only expose pointer-translation related
 functions, but more could be added at a future date as needs arise.
 
 Example usage: QEMU instantiates a guest, and an external module foo
 that desires the ability to interface with the guest (say via
 open("/dev/foo")).  QEMU may then pass the kvmfd to foo via an
 ioctl, such as: ioctl(foofd, FOO_SET_VMID, kvmfd).  Upon receipt, the
 foo module can issue kvm_xinterface_bind(kvmfd) to acquire
 the proper context.  Internally, the struct kvm* and associated
 struct module* will remain pinned at least until the foo module calls
 kvm_xinterface_put().

 --- /dev/null
 +++ b/virt/kvm/xinterface.c
 @@ -0,0 +1,409 @@
 +/*
 + * KVM module interface - Allows external modules to interface with a guest
 + *
 + * Copyright 2009 Novell.  All Rights Reserved.
 + *
 + * Author:
 + *  Gregory Haskins ghask...@novell.com
 + *
 + * This file is free software; you can redistribute it and/or modify
 + * it under the terms of version 2 of the GNU General Public License
 + * as published by the Free Software Foundation.
 + *
 + * This program is distributed in the hope that it will be useful,
 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 + * GNU General Public License for more details.
 + *
 + * You should have received a copy of the GNU General Public License
 + * along with this program; if not, write to the Free Software Foundation,
 + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 + */
 +
 +#include <linux/mm.h>
 +#include <linux/vmalloc.h>
 +#include <linux/highmem.h>
 +#include <linux/module.h>
 +#include <linux/mmu_context.h>
 +#include <linux/kvm_host.h>
 +#include <linux/kvm_xinterface.h>
 +
 +struct _xinterface {
 + struct kvm *kvm;
 + struct task_struct *task;
 + struct mm_struct   *mm;
 + struct kvm_xinterface   intf;
 + struct kvm_memory_slot *slotcache[NR_CPUS];
 +};
 +
 +struct _xvmap {
 + struct kvm_memory_slot*memslot;
 + unsigned long  npages;
 + struct kvm_xvmap   vmap;
 +};
 +
 +static struct _xinterface *
 +to_intf(struct kvm_xinterface *intf)
 +{
 + return container_of(intf, struct _xinterface, intf);
 +}
 +
 +#define _gfn_to_hva(gfn, memslot) \
 + (memslot->userspace_addr + (gfn - memslot->base_gfn) * PAGE_SIZE)
 +
 +/*
 + * gpa_to_hva() - translate a guest-physical to host-virtual using
 + * a per-cpu cache of the memslot.
 + *
 + * The gfn_to_memslot() call is relatively expensive, and the gpa access
 + * patterns exhibit a high degree of locality.  Therefore, lets cache
 + * the last slot used on a per-cpu basis to optimize the lookup
 + *
 + * assumes slots_lock held for read
 + */
 +static unsigned long
 +gpa_to_hva(struct _xinterface *_intf, unsigned long gpa)
 +{
 + int cpu = get_cpu();
 + unsigned long   gfn = gpa >> PAGE_SHIFT;
 + struct kvm_memory_slot *memslot = _intf->slotcache[cpu];
 + unsigned long   addr= 0;
 +
 + if (!memslot
 + || gfn < memslot->base_gfn
 + || gfn >= memslot->base_gfn + memslot->npages) {
 +
 + memslot = gfn_to_memslot(_intf->kvm, gfn);
 + if (!memslot)
 + goto out;
 +
 + _intf->slotcache[cpu] = memslot;
 + }
 +
 + addr = _gfn_to_hva(gfn, memslot) + offset_in_page(gpa);
 +
 +out:
 + put_cpu();
 +
 + return addr;

Please optimize gfn_to_memslot() instead, so everybody benefits. It
shows very often on profiles.

 +
 + page_list = (struct page **) __get_free_page(GFP_KERNEL);
 + if (!page_list)
 + return NULL;
 +
 + down_write(&mm->mmap_sem);
 +
 + ret = get_user_pages(p, mm, addr, npages, 1, 0, page_list, NULL);
 + if (ret < 0)
 + goto out;
 +
 + ptr = vmap(page_list, npages, VM_MAP, PAGE_KERNEL);
 + if (ptr)
 + 

[PATCH v2 2/4] KVM: introduce xinterface API for external interaction with guests

2009-10-02 Thread Gregory Haskins
What: xinterface is a mechanism that allows kernel modules external to
the kvm.ko proper to interface with a running guest.  It accomplishes
this by creating an abstracted interface which does not expose any
private details of the guest or its related KVM structures, and provides
a mechanism to find and bind to this interface at run-time.

Why: There are various subsystems that would like to interact with a KVM
guest which are ideally suited to exist outside the domain of the kvm.ko
core logic. For instance, external pci-passthrough, virtual-bus, and
virtio-net modules are currently under development.  In order for these
modules to successfully interact with the guest, they need, at the very
least, various interfaces for signaling IO events, pointer translation,
and possibly memory mapping.

The signaling case is covered by the recent introduction of the
irqfd/ioeventfd mechanisms.  This patch provides a mechanism to cover the
other cases.  Note that today we only expose pointer-translation related
functions, but more could be added at a future date as needs arise.

Example usage: QEMU instantiates a guest, and an external module foo
that desires the ability to interface with the guest (say via
open("/dev/foo")).  QEMU may then pass the kvmfd to foo via an
ioctl, such as: ioctl(foofd, FOO_SET_VMID, kvmfd).  Upon receipt, the
foo module can issue kvm_xinterface_bind(kvmfd) to acquire
the proper context.  Internally, the struct kvm* and associated
struct module* will remain pinned at least until the foo module calls
kvm_xinterface_put().

Signed-off-by: Gregory Haskins ghask...@novell.com
---

 arch/x86/kvm/Makefile  |2 
 include/linux/kvm_host.h   |3 
 include/linux/kvm_xinterface.h |  114 +++
 kernel/fork.c  |1 
 virt/kvm/kvm_main.c|   24 ++
 virt/kvm/xinterface.c  |  409 
 6 files changed, 552 insertions(+), 1 deletions(-)
 create mode 100644 include/linux/kvm_xinterface.h
 create mode 100644 virt/kvm/xinterface.c

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 31a7035..0449d6e 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -7,7 +7,7 @@ CFLAGS_vmx.o := -I.
 
 kvm-y  += $(addprefix ../../../virt/kvm/, kvm_main.o ioapic.o \
coalesced_mmio.o irq_comm.o eventfd.o \
-   assigned-dev.o)
+   assigned-dev.o xinterface.o)
 kvm-$(CONFIG_IOMMU_API)+= $(addprefix ../../../virt/kvm/, iommu.o)
 
 kvm-y  += x86.o mmu.o emulate.o i8259.o irq.o lapic.o \
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index b985a29..7cc1afb 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -362,6 +362,9 @@ void kvm_arch_sync_events(struct kvm *kvm);
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
 void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
 
+struct kvm_xinterface *
+kvm_xinterface_alloc(struct kvm *kvm, struct module *owner);
+
 int kvm_is_mmio_pfn(pfn_t pfn);
 
 struct kvm_irq_ack_notifier {
diff --git a/include/linux/kvm_xinterface.h b/include/linux/kvm_xinterface.h
new file mode 100644
index 000..01f092b
--- /dev/null
+++ b/include/linux/kvm_xinterface.h
@@ -0,0 +1,114 @@
+#ifndef __KVM_XINTERFACE_H
+#define __KVM_XINTERFACE_H
+
+/*
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/kref.h>
+#include <linux/module.h>
+#include <linux/file.h>
+
+struct kvm_xinterface;
+struct kvm_xvmap;
+
+struct kvm_xinterface_ops {
+   unsigned long (*copy_to)(struct kvm_xinterface *intf,
+unsigned long gpa, const void *src,
+unsigned long len);
+   unsigned long (*copy_from)(struct kvm_xinterface *intf, void *dst,
+  unsigned long gpa, unsigned long len);
+   struct kvm_xvmap* (*vmap)(struct kvm_xinterface *intf,
+ unsigned long gpa,
+ unsigned long len);
+   void (*release)(struct kvm_xinterface *);
+};
+
+struct kvm_xinterface {
+   struct module   *owner;
+   struct kref  kref;
+   const struct kvm_xinterface_ops *ops;
+};
+
+static inline void
+kvm_xinterface_get(struct kvm_xinterface *intf)
+{
+   kref_get(&intf->kref);
+}
+
+static inline void
+_kvm_xinterface_release(struct kref *kref)
+{
+   struct kvm_xinterface *intf;
+   struct module *owner;
+
+   intf = container_of(kref, struct kvm_xinterface, kref);
+
+   owner = intf->owner;
+   rmb();
+
+   intf->ops->release(intf);
+   module_put(owner);
+}
+
+static inline void
+kvm_xinterface_put(struct kvm_xinterface *intf)
+{
+   kref_put(&intf->kref, _kvm_xinterface_release);
+}
+
+struct kvm_xvmap_ops {
+   void (*release)(struct