Hi, does anybody know about this?
Currently, dirty bitmap is updated by generic___set_le_bit().
I checked the git log and mail archives but could not find any
explanation why replacing set_bit() by generic___set_le_bit() is
safe.
Thanks,
Takuya
--
To unsubscribe from this list: send the
Avi Kivity wrote:
On 03/23/2010 08:12 AM, Takuya Yoshikawa wrote:
Hi, does anybody know about this?
Currently, dirty bitmap is updated by generic___set_le_bit().
I checked the git log and mail archives but could not find any
explanation why replacing set_bit() by generic___set_le_bit
Hi, this is the first version!
We've first implemented the x86 specific parts without introducing
new APIs: so this code works with current qemu-kvm.
Although we have many things to do, we'd like to get some comments
to see whether we are going in the right direction.
Note: we are now testing this
We will use this later in other parts.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
arch/powerpc/kvm/book3s.c |2 +-
arch/x86/kvm/x86.c|2 +-
include/linux/kvm_host.h |5 +
virt/kvm
For x86, we will change the allocation and free parts to do_mmap() and
do_munmap(). This patch makes it cleaner.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
virt/kvm/kvm_main.c | 27
-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
arch/x86/include/asm/kvm_host.h |3 +++
include/linux/kvm_host.h|6 ++
2 files changed, 9 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm
test_bit_user() to avoid extra set_bit.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
arch/x86/kvm/x86.c | 118 +
include/linux/kvm_host.h |4 ++
virt/kvm
(2010/04/12 2:08), Avi Kivity wrote:
On 04/09/2010 12:30 PM, Takuya Yoshikawa wrote:
This work is initially suggested by Avi Kivity for moving the
dirty bitmaps used by KVM to user space: This makes it possible
to manipulate the bitmaps from qemu without copying from KVM.
Note: We are now
(2010/04/12 2:12), Avi Kivity wrote:
On 04/09/2010 12:32 PM, Takuya Yoshikawa wrote:
We will use this later in other parts.
s/rapper/wrapper/...
Oh, my poor English, sorry.
+static inline int kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot)
+{
+	return ALIGN(memslot->npages
(2010/04/12 2:13), Avi Kivity wrote:
On 04/09/2010 12:34 PM, Takuya Yoshikawa wrote:
For x86, we will change the allocation and free parts to do_mmap() and
do_munmap(). This patch makes it cleaner.
Should be done for all architectures. I don't want different ways of
creating dirty bitmaps
(2010/04/12 2:15), Avi Kivity wrote:
On 04/09/2010 12:35 PM, Takuya Yoshikawa wrote:
Currently, x86 vmalloc()s a dirty bitmap every time we switch
to the next dirty bitmap. To avoid this, we use the double buffering
technique: we also move the bitmaps to userspace, so that extra
bitmaps
(2010/04/12 2:21), Avi Kivity wrote:
On 04/09/2010 12:38 PM, Takuya Yoshikawa wrote:
By this patch, bitmap allocation is replaced with do_mmap() and
bitmap manipulation is replaced with *_user() functions.
Note that this does not change the APIs between kernel and user space.
To get more
I think you can keep the bitmap in userspace, but replace the vmalloc()
with get_user_pages() and vmap() (in arch/ia64). 'dirty_bitmap' can then
be in kvm->arch.
Note: this will likely break ia64 without testing. Please copy the
patches to kvm-i...@vger.kernel.org so they can test and fix them
of a dirty bitmap.
This patch fixes this problem with the introduction of a wrapper
function to calculate the sizes of dirty bitmaps.
Note: in mark_page_dirty(), we have to consider the fact that
__set_bit() takes the offset as int, not long.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak
(2010/04/13 2:39), Marcelo Tosatti wrote:
On Mon, Apr 12, 2010 at 07:35:35PM +0900, Takuya Yoshikawa wrote:
This patch fixes a bug found by Avi during the review process
of my dirty bitmap related work.
To ppc and ia64 people:
The fix is really simple but touches all architectures using
BTW, just out of curiosity, are there any cases in which we use such a
huge
number of pages currently?
ALIGN(memslot->npages, BITS_PER_LONG) / 8;
More than G pages need really big memory!
-- We are assuming some special cases like short int size?
No, int is 32 bits, but memslot->npages is not
Hi, this is the v2 of moving dirty bitmaps to user space!
By this patch, I think everything we need becomes clear.
So we want to step forward to be ready for the final version in the
near future: of course, this is dependent on x86 and ppc asm issues.
BTW, from whom can I get an ACK for ppc and
the get_dirty_log(). So we use this timing to update
the dirtiness of a memory slot.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
arch/ia64/kvm/kvm-ia64.c | 11 +++
arch/powerpc/kvm/book3s.c |9
We will change the vmalloc() and vfree() to do_mmap() and do_munmap()
later. This patch makes that easy and cleans up the code.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
virt/kvm/kvm_main.c | 27
remove
this wrapper and use copy_in_user() directly.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
arch/x86/kvm/x86.c |4 +---
include/linux/kvm_host.h |3 +++
virt/kvm/kvm_main.c | 12
We are now using generic___set_le_bit() to make dirty bitmaps le.
Though this works well, we have to replace __set_bit() with an appropriate
uaccess function to move dirty bitmaps to user space. So this patch
splits generic___set_le_bit() and prepares for that.
Signed-off-by: Takuya Yoshikawa
a different space to store bitmaps
which is directly updated: all we have to change are sync and get
of dirty log, so we don't need set_bit_user like function for ia64.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna
the copy of the dirty
bitmap from the kernel.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
Documentation/kvm/api.txt | 23 +++
arch/ia64/kvm/kvm-ia64.c | 19 ++-
arch/powerpc/kvm
(2010/04/20 19:54), Alexander Graf wrote:
On 20.04.2010, at 12:53, Takuya Yoshikawa wrote:
Hi, this is the v2 of moving dirty bitmaps to user space!
By this patch, I think everything we need becomes clear.
So we want to step forward to be ready for the final version in the
near future
(2010/04/20 20:10), Alexander Graf wrote:
On 20.04.2010, at 13:02, Takuya Yoshikawa wrote:
We move dirty bitmaps to user space.
- Allocation and destruction: we use do_mmap() and do_munmap().
The new bitmap space is twice as long as the original one and we
use the additional space
(2010/04/20 20:15), Alexander Graf wrote:
On 20.04.2010, at 13:03, Takuya Yoshikawa wrote:
We can now export the address of the bitmap created by do_mmap()
to user space. For the sake of this, we introduce a new API:
KVM_SWITCH_DIRTY_LOG: application can use this to trigger the
switch
(2010/04/20 20:33), Alexander Graf wrote:
-#define KVM_API_VERSION 12
+#define KVM_API_VERSION 13
Is there a way to keep both interfaces around for some time at least? I'd
prefer the API version not to change if not _really_ necessary.
To enable the new dirty mapping you could for example
Fernando, sorry I have changed some part of this series and forgot to
change your Signed-off-by to Cc for some parts.
So please give me any comments(objections) as replies to this mail thread.
Thanks,
Takuya
(2010/04/20 19:53), Takuya Yoshikawa wrote:
Hi, this is the v2 of moving
: there is no
guarantee that
no one will change those functions we are using.
Signed-off-by: Takuya Yoshikawa yoshikawa...@yshtky3.kern.oss.ntt.co.jp
---
virt/kvm/kvm_main.c | 17 +
1 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm
(2010/04/21 15:07), Takuya Yoshikawa wrote:
=== not tested ===
[PATCH sample] KVM: avoid to include an asm-generic bitops header file directly
Including asm-generic bitops headers is a kind of violation: there is no
guarantee that
no one will change those functions we are using.
Signed-off
So please explain this commit to me:
1. is this really the thing you intended to do?
I think so.
2. including asm-generic/bitops/le.h directly is OK?
-- I made a sample patch to avoid this, see below.
I don't see a problem with it, it is also included from other places.
(2010/04/21 20:12), Avi Kivity wrote:
On 04/20/2010 01:59 PM, Takuya Yoshikawa wrote:
We will replace copy_to_user() with copy_in_user() when we move
the dirty bitmaps to user space.
But sadly, we have copy_in_user() only for 64-bit architectures.
So this function should work as a wrapper
(2010/04/21 20:26), Avi Kivity wrote:
r = 0;
@@ -1858,7 +1866,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
if (memslot->is_dirty) {
kvm_flush_remote_tlbs(kvm);
n = kvm_dirty_bitmap_bytes(memslot);
- memset(memslot->dirty_bitmap, 0, n);
+ clear_user(memslot->dirty_bitmap, n);
Thanks, now I know the basic rules about kvm/api.
(2010/04/21 20:46), Avi Kivity wrote:
+Type: vm ioctl
+Parameters: struct kvm_dirty_log (in/out)
+Returns: 0 on success, -1 on error
+
+/* for KVM_SWITCH_DIRTY_LOG */
+struct kvm_dirty_log {
+ __u32 slot;
+ __u32 padding;
Please put a flags
and user space, we want to update the bitmaps in user space directly.
To achieve this, the le bit offset with *_user() functions helps us a lot.
So let us reuse the le bit offset calculation by defining it as a new
macro: generic_le_bit_offset().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak
-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
virt/kvm/kvm_main.c |4 +---
1 files changed, 1 insertions(+), 3 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6dc9404..9ab1a77 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1192,9 +1192,7 @@ void
(2010/04/23 19:28), Avi Kivity wrote:
OK, I will do so in the next version. In this RFC, I would be happy to
learn whether the overall design is right or not.
Everything looks reasonable to me.
Thank you!
Do you have performance numbers? I'm interested in both measurements of
Hi Avi,
I want you to look at this patch before discussing our patch set.
This patch should be worth it by itself, I believe, and shows how much
improvement we can expect from our dirty bitmap work.
Note: this will not conflict with our future work!
Thanks,
Takuya
** Simple test **
1. What we
for the ioctl was
more stable than the original one and the average time for dirty slots
was also reduced to some extent.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/x86.c | 36 ++--
1 files changed, 22 insertions(+), 14 deletions
(2010/04/27 22:18), Avi Kivity wrote:
Furthermore, the reduced allocations seem to produce good effects for
other cases too. Actually, I observed that the time for the ioctl was
more stable than the original one and the average time for dirty slots
was also reduced to some extent.
Can you
(2010/04/27 22:46), Takuya Yoshikawa wrote:
(2010/04/27 22:18), Avi Kivity wrote:
Furthermore, the reduced allocations seem to produce good effects for
other cases too. Actually, I observed that the time for the ioctl was
more stable than the original one and the average time for dirty slots
to caches too.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/x86.c | 37 +++--
1 files changed, 23 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6b2ce1d..b95a211 100644
--- a/arch/x86/kvm/x86
Hi, sorry for sending from my personal account.
The following series are all from me:
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
The 3rd version of moving dirty bitmaps to user space.
From this version, we add x86 and ppc and asm-generic people to CC lists.
[To KVM people
expect easily, the time needed to
allocate a bitmap is completely eliminated. Furthermore, we can avoid the
tlb flush triggered by vmalloc() and get some good effects. In my test,
the improved ioctl was about 4 to 10 times faster than the original one
for clean slots.
Signed-off-by: Takuya Yoshikawa
before the get_dirty_log(). So we use this
timing to update is_dirty.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
CC: Avi Kivity a...@redhat.com
CC: Alexander Graf ag...@suse.de
---
arch/ia64/kvm/kvm-ia64.c | 11
We will change the vmalloc() and vfree() to do_mmap() and do_munmap() later.
This patch makes that easy and cleans up the code.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
virt/kvm/kvm_main.c | 27
During the work of KVM's dirty page logging optimization, we encountered
the need for copy_in_user() for 32-bit x86 and ppc: these will be used for
manipulating dirty bitmaps in user space.
So we implement copy_in_user() for 32-bit with existing generic copy user
helpers.
Signed-off-by: Takuya
: there is one restriction to this macro: bitmaps must be 64-bit
aligned (see the comment in this patch).
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
CC: Avi Kivity a...@redhat.com
Cc: Thomas Gleixner t
During the work of KVM's dirty page logging optimization, we encountered
the need for copy_in_user() for 32-bit ppc and x86: these will be used for
manipulating dirty bitmaps in user space.
So we implement copy_in_user() for 32-bit with __copy_tofrom_user().
Signed-off-by: Takuya Yoshikawa
in which the author
implemented set_bit_to_user() locally using inefficient functions: see TODO
at the top of that.
Probably, this kind of need is common in the virtualization area.
So we introduce a function set_bit_user_non_atomic().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
and user space, we want to update the bitmaps in user space directly.
To achieve this, the le bit offset with *_user() functions helps us a lot.
So let us reuse the le bit offset calculation by defining it as a new
macro: generic_le_bit_offset().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak
This is to avoid breaking the build for architectures other than x86 and ppc.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
arch/ia64/include/asm/kvm_host.h|5 +
arch/powerpc/include/asm/kvm_host.h |6
much because it's using a different place to store dirty logs
rather than the dirty bitmaps of memory slots: all we have to change
are sync and get of dirty log, so we don't need set_bit_user like
functions for ia64.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off
the documentation in this patch for precise explanations.
About the performance improvement: the most important feature of the switch API
is its lightness. In our test, this appeared in the form of improved
responses for GUI manipulations.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
We use new API for light dirty log access if KVM supports it.
This conflicts with Marcelo's patches. So please take this as a sample patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
kvm/include/linux/kvm.h | 11 ++
qemu-kvm.c | 81
On Tue, 04 May 2010 19:08:23 +0300
Avi Kivity a...@redhat.com wrote:
On 05/04/2010 06:03 PM, Arnd Bergmann wrote:
On Tuesday 04 May 2010, Takuya Yoshikawa wrote:
...
So let us use the le bit offset calculation part by defining it as a new
macro: generic_le_bit_offset() .
Does
(2010/05/06 22:38), Arnd Bergmann wrote:
On Wednesday 05 May 2010, Takuya Yoshikawa wrote:
That's why the bitmaps are defined as little endian u64 aligned, even on
big endian 32-bit systems. Little endian bitmaps are wordsize agnostic,
and u64 alignment ensures we can
Yes, I'm just using it in kernel space: qemu has its own endian related helpers.
So if you allow us to place this macro in asm-generic/bitops/* it will help us.
No problem at all then. Thanks for the explanation.
Acked-by: Arnd Bergmanna...@arndb.de
Thank you both. I will add your Acked-by
get.org get.opt switch.opt
slots[7].len=32768 278379 66398 64024
slots[8].len=32768 181246 270 160
slots[7].len=32768 263961 64673 64494
slots[8].len=32768 181655 265 160
slots[7].len=32768 263736 64701 64610
slots[8].len=32768 182785 267 160
slots[7].len=32768 260925 65360 65042
(2010/05/11 2:33), Gleb Natapov wrote:
On Mon, May 10, 2010 at 07:06:05PM +0300, Mohammed Gamal wrote:
On Mon, May 10, 2010 at 1:25 PM, Gleb Natapovg...@redhat.com wrote:
On Mon, May 10, 2010 at 11:16:56AM +0300, Gleb Natapov wrote:
Do not kill VM when instruction emulation fails. Inject #UD
(2010/05/11 12:43), Marcelo Tosatti wrote:
On Tue, May 04, 2010 at 10:08:21PM +0900, Takuya Yoshikawa wrote:
+How to Get
+
+Before calling this, you have to set the slot member of kvm_user_dirty_log
+to indicate the target memory slot.
+
+struct kvm_user_dirty_log {
+ __u32 slot
In the usual workload, the number of dirty pages varies a lot for each
iteration,
and we should gain a lot in relatively clean cases.
Can you post such a test, for an idle large guest?
OK, I'll do!
Result of low workload test (running top during migration) first,
4GB guest
picked up
One alternative would be:
KVM_SWITCH_DIRTY_LOG passing the address of a bitmap. If the active
bitmap was clean, it returns 0, no switch performed. If the active
bitmap was dirty, the kernel switches to the new bitmap and returns 1.
And the responsibility of cleaning the new bitmap could also
r = 0;
@@ -1195,11 +1232,16 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
gfn = unalias_gfn(kvm, gfn);
memslot = gfn_to_memslot_unaliased(kvm, gfn);
if (memslot && memslot->dirty_bitmap) {
- unsigned long rel_gfn = gfn - memslot->base_gfn;
+
[To ppc people]
Hi, Benjamin, Paul, Alex,
Please see patches 6,7/12. First, I'm sorry that I've not tested these
yet. In that sense, they may not be of the quality needed for precise reviews. But I
will be happy if you would give me any comments.
Alex, could you help me? Though I have a
+static inline int set_bit_user_non_atomic(int nr, void __user *addr)
+{
+ u8 __user *p;
+ u8 val;
+
+ p = (u8 __user *)((unsigned long)addr + nr / BITS_PER_BYTE);
Does C do the + or the / first? Either way, I'd like to see brackets here :)
OK, I'll change it like that! I
mark_page_dirty is called with the mmu_lock spinlock held in set_spte.
Must find a way to move it outside of the spinlock section.
Oh, it's a serious problem. I have to consider it.
Avi, Marcelo,
Sorry, but I have to say that the mmu_lock spin_lock problem was completely out of
my mind.
(2010/05/17 18:06), Takuya Yoshikawa wrote:
User allocated bitmaps have the advantage of reducing pinned memory.
However we have plenty more pinned memory allocated in memory slots, so
by itself, user allocated bitmaps don't justify this change.
Sorry for pinging several times
copy_to/from_user() returns the number of bytes that could not be copied.
So we need to check whether it is nonzero, and in that case, we should return
the error -EFAULT rather than directly returning the return value from
copy_to/from_user().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak
copy_to/from_user() returns the number of bytes that could not be copied.
So we need to check whether it is nonzero, and in that case, we should return
the error -EFAULT rather than directly returning the return value from
copy_to/from_user().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak
We need to free newmem when vhost_set_memory() fails to complete.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
drivers/vhost/vhost.c |4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 9633a3c
(2010/06/01 19:55), Marcelo Tosatti wrote:
Sorry, but I have to say that the mmu_lock spin_lock problem was completely
out of
my mind. Although I looked through the code, it does not seem easy to move the
set_bit_user to outside of the spinlock section without breaking the
semantics of
its protection.
So
This makes it easy to change the way of allocating/freeing dirty bitmaps.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
virt/kvm/kvm_main.c | 30 +++---
1 files changed, 23 insertions
ratio was
about three when guest memory was 4GB.
Note:
Though this patch introduces some ifdefs, we tried not to mix these
with other parts to keep the code as clean as possible.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Access to this page is mostly done through the regs member which holds
the address to this page. The exceptions are in vmx_vcpu_reset() and
kvm_free_lapic() and these both can easily be converted to using regs.
Signed-off-by: Takuya
This patch is the last part of a work which tries to split
x86_emulate_insn() into a few meaningful functions: removes unnecessary
goto statements based on the former two patches.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/emulate.c | 18
This patch is the second part of a work which tries to split
x86_emulate_insn() into a few meaningful functions: just encapsulates
the switch statement for the two byte instruction emulation as
emulate_twobyte_insn().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86
with rc set to X86EMUL_UNHANDLEABLE will result in
returning EMULATION_FAILED which is defined as -1.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/emulate.c | 179 +++-
1 files changed, 100 insertions(+), 79 deletions
On Thu, 10 Mar 2011 11:05:38 +0200
Avi Kivity a...@redhat.com wrote:
On 03/10/2011 09:35 AM, Takuya Yoshikawa wrote:
x86_emulate_insn() is too long and has many confusing goto statements.
This patch is the first part of a work which tries to split it into
a few meaningful functions: just
On Thu, 10 Mar 2011 11:27:30 +0200
Avi Kivity a...@redhat.com wrote:
On 03/10/2011 11:26 AM, Takuya Yoshikawa wrote:
I don't know if anyone is working on it, so feel free to send patches!
Yes, I'm interested in it. So I will take a look and try!
I was doing some live migration tests using
This work will continue until we can remove the ugly switch statements.
But I want to do this with enough care not to insert extra errors.
-- For me, this is a good opportunity to read the SDM well.
So the whole work will be done in a step by step manner!
Thanks,
Takuya
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
ADD, OR, ADC, SBB, AND, SUB, XOR, CMP are converted using a new macro
I6ALU(_f, _e).
CMPS, SCAS will be converted later.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/emulate.c | 151
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
PUSH ES/CS/SS/DS/FS/GS and POP ES/SS/DS/FS/GS are converted.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/emulate.c | 111 +++-
1 files changed, 72 insertions
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
POP is converted. RET will be converted later.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/emulate.c | 16 ++--
1 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
PUSHA and POPA are converted.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/emulate.c | 19 ---
1 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/emulate.c b/arch
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
PUSHF and POPF are converted.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/emulate.c | 32 +---
1 files changed, 21 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm
?
+ break;
case 0xc0 ... 0xc1:
emulate_grp2(ctxt);
break;
--
1.7.1
--
Gleb.
--
Takuya Yoshikawa takuya.yoshik...@gmail.com
On Tue, 15 Mar 2011 11:35:07 +0200
Gleb Natapov g...@redhat.com wrote:
Why not call em_cmp() here?
I thought that I needed to check of
c->dst.type = OP_NONE; /* Disable writeback. */
later.
I mean call em_cmp() after the c->dst.type = OP_NONE line, not replacing it.
I see the
On Tue, 22 Mar 2011 14:53:21 +0200
Avi Kivity a...@redhat.com wrote:
I prefer to have the patchset fully updated, even if it takes a while.
Good luck with the recovery!
Things already got back as usual, thanks.
I had expected it to take a much longer time.
BTW, is it better to wait until rc1 is
On Tue, 22 Mar 2011 14:55:57 +0200
Avi Kivity a...@redhat.com wrote:
@@ -2337,10 +2401,20 @@ static int em_mov(struct x86_emulate_ctxt *ctxt)
#define D6ALU(_f) D2bv((_f) | DstMem | SrcReg | ModRM), \
	D2bv(((_f) | DstReg | SrcMem | ModRM) & ~Lock), \
On Tue, 22 Mar 2011 15:03:11 +0200
Avi Kivity a...@redhat.com wrote:
+static int em_push_es(struct x86_emulate_ctxt *ctxt)
+{
+ emulate_push_sreg(ctxt, ctxt->ops, VCPU_SREG_ES);
+ return X86EMUL_CONTINUE;
+}
I thought of adding generic sreg decoding, so we can use
On Tue, 22 Mar 2011 15:06:33 +0200
Avi Kivity a...@redhat.com wrote:
POP is converted. RET will be converted later.
There is also POP r/m (8F /0); could be done later.
OK, I'll recheck.
I want to put related things into one patch if possible.
Takuya
On Tue, 22 Mar 2011 15:07:20 +0200
Avi Kivity a...@redhat.com wrote:
+static int em_pusha(struct x86_emulate_ctxt *ctxt)
+{
+ return emulate_pusha(ctxt, ctxt->ops);
+}
+
You can simply rename/update emulate_pusha/emulate_popa, since they have
no other callers.
I intentionally
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
This stops CMP r/m, reg from writing back the data into memory.
Pointed out by Avi.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/emulate.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Recently, emulate_push family functions started to call writeback()
during their emulation. This clearly shows that the usual writeback()
which is done at the end of x86_emulate_insn() cannot cover all cases.
Furthermore, suppressing
Takuya Yoshikawa takuya.yoshik...@gmail.com wrote:
@@ -1265,22 +1263,19 @@ int emulate_int_real(struct x86_emulate_ctxt *ctxt,
/* TODO: Add limit checks */
c->src.val = ctxt->eflags;
- emulate_push(ctxt, ops);
- rc = writeback(ctxt, ops);
+ rc = emulate_push(ctxt
--
Takuya Yoshikawa takuya.yoshik...@gmail.com
have a bit in the decode tables to auto-disable
writeback, but not sure it is worth it.
One more question:
Why are some functions in this file defined using
static inline, not just static?
Should I keep these inline?
Takuya
--
Takuya Yoshikawa takuya.yoshik...@gmail.com
intimately familiar with QEMU does it.)
--
Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
I may get some interest in using this tool for my debugging/testing/
self-educational purposes, but don't know what I can do/expect.
Heh, it's all pretty straight-forward. Fetch the sources from this tree:
git clone git://github.com/penberg/linux-kvm.git
Find something interesting