(2010/04/20 20:10), Alexander Graf wrote:

On 20.04.2010, at 13:02, Takuya Yoshikawa wrote:

We move dirty bitmaps to user space.

- Allocation and destruction: we use do_mmap() and do_munmap().
   The new bitmap space is twice as long as the original one, and we
   use the additional space for double buffering: this makes it
   possible to update the active bitmap while letting user space
   read the other one safely.

- Bitmap manipulations: we replace all functions which access dirty
   bitmaps with *_user() functions. Note that some of these should be
   optimized later.

- For ia64: moving the dirty bitmaps of memory slots does not affect
   ia64 much because it uses a separate space to store bitmaps, which
   is updated directly: all we have to change are sync and get of the
   dirty log, so we don't need a set_bit_user-like function for ia64.

Signed-off-by: Takuya Yoshikawa <yoshikawa.tak...@oss.ntt.co.jp>
Signed-off-by: Fernando Luis Vazquez Cao <ferna...@oss.ntt.co.jp>
---
arch/ia64/kvm/kvm-ia64.c  |   12 ++++-
arch/powerpc/kvm/book3s.c |    2 +-
arch/x86/kvm/x86.c        |   24 +++++-----
include/linux/kvm_host.h  |    5 +-
virt/kvm/kvm_main.c       |  101 ++++++++++++++++++++++++++++++++++++++++-----
5 files changed, 116 insertions(+), 28 deletions(-)
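The double-buffering scheme described above can be modeled in a few lines of userspace C. This is only an illustrative sketch, not code from the patch: the names (struct dirty_log, active_bitmap, switch_bitmap) and the fixed bitmap size are invented here. The idea is that one allocation holds two halves; writers mark bits in the active half while the reader safely consumes the other.

```c
#include <assert.h>
#include <string.h>

#define BITMAP_LONGS 4  /* arbitrary size for this sketch */

/* One allocation twice the bitmap size: an active half for writers
 * and a shadow half that user space may read. */
struct dirty_log {
	unsigned long buf[2 * BITMAP_LONGS];
	int active;  /* 0 or 1: which half writers currently use */
};

static unsigned long *active_bitmap(struct dirty_log *d)
{
	return &d->buf[d->active * BITMAP_LONGS];
}

/* Swap halves: the previously active half becomes a stable snapshot
 * for the reader, and writers continue in a freshly cleared half. */
static unsigned long *switch_bitmap(struct dirty_log *d)
{
	unsigned long *old = active_bitmap(d);

	d->active ^= 1;
	memset(active_bitmap(d), 0, BITMAP_LONGS * sizeof(unsigned long));
	return old;
}
```

In the real patch the two halves live in user-space memory obtained with do_mmap(), but the swap logic follows the same shape.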



[...]

@@ -778,13 +803,45 @@ int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
}

int kvm_copy_dirty_bitmap(unsigned long __user *to,
-                         const unsigned long *from,
+                         const unsigned long __user *from,
                          unsigned long bytes)
{
-       if (copy_to_user(to, from, bytes))
+#if defined(CONFIG_X86_64) || defined(CONFIG_PPC64) || defined(CONFIG_IA64)

Wouldn't it be better to have a define __HAS_COPY_IN_USER or so? I really 
dislike writing out all architectures that have this explicitly.

Yes, I completely agree with you: actually, I have not decided whether
to keep this approach in the final version.
The best way would be to make a generic copy_in_user().

Anyway, yes, next time I will define a macro as you suggest!
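A guard along the lines Alexander suggests might look like the sketch below. The macro name __KVM_HAVE_COPY_IN_USER is invented here for illustration; in a real patch each architecture header would set it, rather than the generic code testing CONFIG_* symbols directly.

```c
#include <assert.h>

/* Each arch that provides a real copy_in_user() would define one
 * feature macro (hypothetical name) in its own header, so the generic
 * code never has to enumerate architectures. */
#if defined(CONFIG_X86_64) || defined(CONFIG_PPC64) || defined(CONFIG_IA64)
#define __KVM_HAVE_COPY_IN_USER
#endif

/* Returns 1 when the direct copy_in_user() path would be compiled in,
 * 0 when the bounce-buffer fallback would be used. */
static int have_copy_in_user(void)
{
#ifdef __KVM_HAVE_COPY_IN_USER
	return 1;
#else
	return 0;
#endif
}
```

Built in userspace, where none of the CONFIG_* symbols are defined, the fallback path is selected.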


+       if (copy_in_user(to, from, bytes)) {
+               printk(KERN_WARNING "%s: copy_in_user failed.\n", __func__);
                return -EFAULT;
+       }
+       return 0;
+#else

Worst case I'd at least like to see all the others written out here, so when 
someone adds a new arch he can at least check if it's supported or not.

+       int num, bufbytes;
+       unsigned long buf[32];

+       if (!access_ok(VERIFY_READ, from, bytes) ||
+           !access_ok(VERIFY_WRITE, to, bytes)) {
+               goto out_fault;
+       }
+
+       bufbytes = sizeof(buf);
+       num = bufbytes / sizeof(buf[0]);

ARRAY_SIZE?

OK, thanks!


+
+       for (; bytes > bufbytes; bytes -= bufbytes, to += num, from += num) {
+               if (__copy_from_user(buf, from, bufbytes))
+                       goto out_fault;
+               if (__copy_to_user(to, buf, bufbytes))
+                       goto out_fault;
+       }
+       if (bytes > 0) {
+               if (__copy_from_user(buf, from, bytes))
+                       goto out_fault;
+               if (__copy_to_user(to, buf, bytes))
+                       goto out_fault;
+       }
        return 0;
+
+out_fault:
+       printk(KERN_WARNING "%s: copy to(from) user failed.\n", __func__);
+       return -EFAULT;
+#endif
}
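The fallback path above can be exercised in userspace with plain memcpy() standing in for __copy_from_user()/__copy_to_user(). This is a sketch of the same chunked bounce-buffer pattern, not the kernel code itself; real kernel code must also perform the access_ok() checks and report -EFAULT on a partial copy.

```c
#include <string.h>

/* Copy `bytes` bytes from `from` to `to` through a fixed bounce
 * buffer: full buffer-sized chunks first, then one tail copy.
 * memcpy() models __copy_from_user()/__copy_to_user(). */
static int bounce_copy(unsigned long *to, const unsigned long *from,
		       unsigned long bytes)
{
	unsigned long buf[32];
	const unsigned long bufbytes = sizeof(buf);
	/* ARRAY_SIZE(buf), as suggested in the review */
	const unsigned long num = sizeof(buf) / sizeof(buf[0]);

	for (; bytes > bufbytes; bytes -= bufbytes, to += num, from += num) {
		memcpy(buf, from, bufbytes);
		memcpy(to, buf, bufbytes);
	}
	if (bytes > 0) {
		memcpy(buf, from, bytes);
		memcpy(to, buf, bytes);
	}
	return 0;
}
```

Note the tail copy handles both a final partial chunk and the case where the whole request fits in one buffer.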

int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
@@ -1194,13 +1251,35 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, 
unsigned long len)
}
EXPORT_SYMBOL_GPL(kvm_clear_guest);

+/*
+ * Please use generic *_user bitops once they become available.
+ * Be careful: setting the bit is not done atomically.
+ */
static int __mark_page_dirty(unsigned long nr,
-                            unsigned long *dirty_bitmap)
+                            unsigned long __user *dirty_bitmap)
{
+       unsigned long user_addr;
+       u8 val;
+
#ifdef __BIG_ENDIAN
        nr = nr ^ BITOP_LE_SWIZZLE;
#endif
-       __set_bit(nr, dirty_bitmap);
+       user_addr = (unsigned long)dirty_bitmap + nr / 8;

BITS_PER_BYTE

+       if (!access_ok(VERIFY_WRITE, user_addr, 1))
+               goto out_fault;
+
+       if (__get_user(val, (u8 __user *)user_addr))
+               goto out_fault;
+
+       val |= 1U << (nr % 8);

see above


Alex
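For reference, the read-modify-write that __mark_page_dirty() performs on a user-space byte can be modeled in userspace as below. Plain loads and stores stand in for __get_user()/__put_user(), and BITS_PER_BYTE replaces the literal 8 as the review suggests. As the patch comment warns, this sequence is not atomic: a concurrent update to the same byte between the load and the store can lose a bit.

```c
#include <stdint.h>

#define BITS_PER_BYTE 8

/* Set bit `nr` of a little-endian bitmap one byte at a time:
 * locate the byte, load it, OR in the bit, store it back.
 * The load and store model __get_user()/__put_user(). */
static void set_bit_byte(unsigned long nr, uint8_t *bitmap)
{
	uint8_t val = bitmap[nr / BITS_PER_BYTE];	/* __get_user() */

	val |= 1U << (nr % BITS_PER_BYTE);
	bitmap[nr / BITS_PER_BYTE] = val;		/* __put_user() */
}
```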

