(2011/12/26 8:35), Paul Mackerras wrote:
On Fri, Dec 23, 2011 at 02:23:30PM +0100, Alexander Graf wrote:
So if I read things correctly, this is the only case you're setting
pages as dirty. What if you have the following:
guest adds HTAB entry x
guest writes to page mapped by x
guest
User allocated bitmaps have the advantage of reducing pinned memory.
However we have plenty more pinned memory allocated in memory slots, so
by itself, user allocated bitmaps don't justify this change.
In that sense, what do you think about the question I sent last week?
=== REPOST 1 ===
One alternative would be:
KVM_SWITCH_DIRTY_LOG passing the address of a bitmap. If the active
bitmap was clean, it returns 0, no switch performed. If the active
bitmap was dirty, the kernel switches to the new bitmap and returns 1.
And the responsibility of cleaning the new bitmap could also
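The proposed switch semantics can be modeled in plain C (a userspace sketch only; the struct and function names here are hypothetical illustrations, not the actual KVM implementation): if the active bitmap is clean, return 0 and do nothing; if it is dirty, adopt the caller's fresh bitmap and return 1, leaving the old, dirty bitmap for the caller to read and clean.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace model of the proposed KVM_SWITCH_DIRTY_LOG semantics
 * (hypothetical sketch, not kernel code). */
struct dirty_log {
    uint64_t *active;   /* bitmap currently written to by the logger */
    size_t nwords;
    int dirty;          /* set when any bit has been marked */
};

static void mark_dirty(struct dirty_log *log, unsigned int nr)
{
    log->active[nr / 64] |= 1ULL << (nr % 64);
    log->dirty = 1;
}

/* Returns 0 if the active bitmap was clean (no switch performed),
 * 1 if it was dirty and the fresh bitmap was switched in; the caller
 * then owns the old bitmap and is responsible for cleaning it. */
static int switch_dirty_log(struct dirty_log *log, uint64_t *fresh)
{
    if (!log->dirty)
        return 0;
    log->active = fresh;
    log->dirty = 0;
    return 1;
}
```

In this model a clean iteration costs only a flag test, which is where the "gain for relatively clean cases" mentioned below would come from.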
r = 0;
@@ -1195,11 +1232,16 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
gfn = unalias_gfn(kvm, gfn);
memslot = gfn_to_memslot_unaliased(kvm, gfn);
if (memslot && memslot->dirty_bitmap) {
- unsigned long rel_gfn = gfn - memslot->base_gfn;
+
[To ppc people]
Hi, Benjamin, Paul, Alex,
Please see patches 6,7/12. I must first apologize that I have not tested these
yet, so they may not be of the quality needed for a precise review. But I
will be happy if you give me any comments.
Alex, could you help me? Though I have a
+static inline int set_bit_user_non_atomic(int nr, void __user *addr)
+{
+ u8 __user *p;
+ u8 val;
+
+ p = (u8 __user *)((unsigned long)addr + nr / BITS_PER_BYTE);
Does C do the + or the / first? Either way, I'd like to see brackets here :)
OK, I'll change like that! I
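For reference, `/` binds tighter than `+` in C, so the expression already computes the byte offset before the addition; the requested brackets are purely for readability. The byte/bit arithmetic can be checked with a plain userspace analogue (a sketch without the kernel's `__user` accessors; the function name here is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define BITS_PER_BYTE 8

/* Userspace analogue of the non-atomic set-bit helper: find the byte
 * that holds bit 'nr' and OR in the bit within that byte. Brackets
 * around (nr / BITS_PER_BYTE) are added for readability, as requested
 * in the review; they do not change the result since '/' binds
 * tighter than '+'. */
static void set_bit_non_atomic(unsigned int nr, uint8_t *addr)
{
    uint8_t *p = addr + (nr / BITS_PER_BYTE);
    *p |= 1u << (nr % BITS_PER_BYTE);
}
```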
In a usual workload, the number of dirty pages varies a lot from iteration
to iteration, and we should gain a lot for relatively clean cases.
Can you post such a test, for an idle large guest?
OK, I'll do!
Result of low workload test (running top during migration) first,
4GB guest
picked up
Yes, I'm just using in kernel space: qemu has its own endian related helpers.
So if you allow us to place this macro in asm-generic/bitops/* it will help us.
No problem at all then. Thanks for the explanation.
Acked-by: Arnd Bergmann a...@arndb.de
Thank you both. I will add your Acked-by
(2010/05/06 22:38), Arnd Bergmann wrote:
On Wednesday 05 May 2010, Takuya Yoshikawa wrote:
That's why the bitmaps are defined as little endian u64 aligned, even on
big endian 32-bit systems. Little endian bitmaps are wordsize agnostic,
and u64 alignment ensures we can
                     get.org   get.opt   switch.opt
slots[7].len=32768    278379     66398        64024
slots[8].len=32768    181246       270          160
slots[7].len=32768    263961     64673        64494
slots[8].len=32768    181655       265          160
slots[7].len=32768    263736     64701        64610
slots[8].len=32768    182785       267          160
slots[7].len=32768    260925     65360        65042
(2010/05/11 12:43), Marcelo Tosatti wrote:
On Tue, May 04, 2010 at 10:08:21PM +0900, Takuya Yoshikawa wrote:
+How to Get
+
+Before calling this, you have to set the slot member of kvm_user_dirty_log
+to indicate the target memory slot.
+
+struct kvm_user_dirty_log {
+ __u32 slot
Hi, sorry for sending from my personal account.
The following series are all from me:
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
The 3rd version of moving dirty bitmaps to user space.
From this version, we add x86 and ppc and asm-generic people to CC lists.
[To KVM people
expect easily, the time needed to
allocate a bitmap is completely eliminated. Furthermore, we can avoid the
TLB flush triggered by vmalloc() and get some other good effects. In my test,
the improved ioctl was about 4 to 10 times faster than the original one
for clean slots.
Signed-off-by: Takuya Yoshikawa
before the get_dirty_log(). So we use this
timing to update is_dirty.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
CC: Avi Kivity a...@redhat.com
CC: Alexander Graf ag...@suse.de
---
arch/ia64/kvm/kvm-ia64.c | 11
We will change the vmalloc() and vfree() to do_mmap() and do_munmap() later.
This patch makes that easy and cleans up the code.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
virt/kvm/kvm_main.c | 27
During the work on KVM's dirty page logging optimization, we encountered
the need for copy_in_user() on 32-bit x86 and ppc: it will be used for
manipulating dirty bitmaps in user space.
So we implement copy_in_user() for 32-bit with the existing generic
copy-user helpers.
Signed-off-by: Takuya
: there is one restriction to this macro: bitmaps must be 64-bit
aligned (see the comment in this patch).
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
CC: Avi Kivity a...@redhat.com
Cc: Thomas Gleixner t
During the work on KVM's dirty page logging optimization, we encountered
the need for copy_in_user() on 32-bit ppc and x86: it will be used for
manipulating dirty bitmaps in user space.
So we implement copy_in_user() for 32-bit with __copy_tofrom_user().
Signed-off-by: Takuya Yoshikawa
in which the author
implemented set_bit_to_user() locally using inefficient functions: see the
TODO at the top of that file.
This kind of need is probably common in the virtualization area,
so we introduce a function set_bit_user_non_atomic().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
and user space, we want to update the bitmaps in user space directly.
To achieve this, le bit offsets used with the *_user() functions help us a lot.
So let us reuse the le bit offset calculation by defining it as a new
macro: generic_le_bit_offset().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak
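The point of the le bit offset can be demonstrated in plain C (a userspace sketch; the helper name below is illustrative, not the kernel macro). Because the bitmap is defined as little endian and u64-aligned, bit nr always lives in byte nr / 8, bit nr % 8, regardless of the word size the accessor uses, which is what makes byte-wise *_user() access safe:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of why a little-endian, u64-aligned bitmap is word-size
 * agnostic: bit 'nr' always lives in byte (nr / 8), bit (nr % 8),
 * whether the accessor works in 8-, 32-, or 64-bit units.
 * (Demonstration for a little-endian host; the kernel macro
 * additionally handles big-endian hosts by swizzling the offset.) */
static int test_le_bit(const void *bitmap, unsigned int nr)
{
    const uint8_t *p = bitmap;
    return (p[nr / 8] >> (nr % 8)) & 1;
}
```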
This is so as not to break the build for architectures other than x86 and ppc.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
arch/ia64/include/asm/kvm_host.h|5 +
arch/powerpc/include/asm/kvm_host.h |6
much because it uses a different place to store dirty logs
rather than the dirty bitmaps of memory slots: all we have to change
are the sync and get of the dirty log, so we don't need set_bit_user-like
functions for ia64.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off
the documentation in this patch for precise explanations.
About the performance improvement: the most important feature of the switch
API is its lightness. In our test, this appeared in the form of improved
response to GUI manipulations.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
We use the new API for light dirty log access if KVM supports it.
This conflicts with Marcelo's patches. So please take this as a sample patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
kvm/include/linux/kvm.h | 11 ++
qemu-kvm.c | 81
On Tue, 04 May 2010 19:08:23 +0300
Avi Kivity a...@redhat.com wrote:
On 05/04/2010 06:03 PM, Arnd Bergmann wrote:
On Tuesday 04 May 2010, Takuya Yoshikawa wrote:
...
So let us use the le bit offset calculation part by defining it as a new
macro: generic_le_bit_offset() .
Does