From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
This is needed to replace test_and_set_bit_le() in virt/kvm/kvm_main.c,
which is currently being used in place of this missing function.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Acked-by: Arnd Bergmann a...@arndb.de
---
include/asm-generic/bitops
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
This is needed to replace test_and_set_bit_le() in virt/kvm/kvm_main.c,
which is currently being used in place of this missing function.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Acked-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---
arch
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Now that we have defined the generic set_bit_le(), we do not need to use
test_and_set_bit_le() for atomically setting a bit.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Avi Kivity a...@redhat.com
Cc: Marcelo Tosatti mtosa
for big-endian
case, than the generic __set_bit_le(), it should not be a problem to
use the latter since both maintainers prefer it.
Ben Hutchings (1):
sfc: Use standard __{clear,set}_bit_le() functions
Takuya Yoshikawa (4):
drivers/net/ethernet/dec/tulip: Use standard __set_bit_le() function
From: Ben Hutchings bhutchi...@solarflare.com
There are now standard functions for dealing with little-endian bit
arrays, so use them instead of our own implementations.
Signed-off-by: Ben Hutchings bhutchi...@solarflare.com
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
To introduce generic set_bit_le() later, we remove our own definition
and use a proper non-atomic bitops function: __set_bit_le().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Acked-by: Grant Grundler grund...@parisc
Takuya Yoshikawa (3):
KVM: Stop checking rmap to see if slot is being created
KVM: MMU: Use gfn_to_rmap() instead of directly reading rmap array
KVM: Push rmap into kvm_arch_memory_slot
arch/powerpc/include/asm/kvm_host.h |1 +
arch/powerpc/kvm/book3s_64_mmu_hv.c |6 ++--
arch/powerpc
Instead, check npages consistently. This helps to make rmap
architecture-specific in a later patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm
This helps to make rmap architecture-specific in a later patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c |3 ++-
arch/x86/kvm/mmu_audit.c |4 +---
2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
Two reasons:
- x86 can integrate rmap and rmap_pde and remove heuristics in
__gfn_to_rmap().
- Some architectures do not need rmap.
Since rmap is one of the most memory-consuming structures in KVM, ppc had
better restrict the allocation to Book3S HV.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak
On Thu, 26 Jul 2012 17:35:13 +0800
Xiao Guangrong wrote:
> > Is this patch really safe for all architectures?
> >
> > IS_ERR_VALUE() casts -MAX_ERRNO to unsigned long and then does comparison.
> > Isn't it possible to conflict with valid pfns?
> >
>
> See IS_ERR_VALUE():
>
> #define
On Thu, 26 Jul 2012 11:56:15 +0300
Avi Kivity wrote:
> Since my comments are better done as a separate patch, I applied all
> three patches. Thanks!
Is this patch really safe for all architectures?
IS_ERR_VALUE() casts -MAX_ERRNO to unsigned long and then does comparison.
Isn't it possible to
On Wed, 18 Jul 2012 17:52:46 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
I can't understand; can you please explain more clearly?
I think mmu pages are not worth freeing under usual memory pressure,
especially when we have EPT/NPT on.
What's happening:
shrink_slab() vainly calls
On Thu, 5 Jul 2012 10:08:07 -0300
Marcelo Tosatti wrote:
> Neat.
>
> Andrea can you please ACK?
>
ping
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at
On Thu, 5 Jul 2012 23:05:46 +0900
Takuya Yoshikawa takuya.yoshik...@gmail.com wrote:
On Thu, 5 Jul 2012 14:50:00 +0300
Gleb Natapov g...@redhat.com wrote:
Note that if (!nr_to_scan--) check is removed since we do not try to
free mmu pages from more than one VM.
IIRC
On Thu, 12 Jul 2012 02:02:24 +0100
Vinod, Chegu chegu_vi...@hp.com wrote:
There have been some recent fixes (from Juan) that are supposed to honor the
user requested downtime. I am in the middle of redoing some of my
experiments...and will share when they are ready (in about 3-4 days).
mmu pages as before.
Note that the if (!nr_to_scan--) check is removed, since we do not try to
free mmu pages from more than one VM.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Gleb Natapov g...@redhat.com
---
arch/x86/kvm/mmu.c |5 +
1 files changed, 1 insertions(+), 4
On Thu, 5 Jul 2012 14:50:00 +0300
Gleb Natapov g...@redhat.com wrote:
Note that if (!nr_to_scan--) check is removed since we do not try to
free mmu pages from more than one VM.
IIRC this was proposed in the past that we should iterate over vm list
until freeing something eventually, but
v3-v4: Resolved trace_kvm_age_page() issue -- patch 6,7
v2-v3: Fixed intersection calculations. -- patch 3, 8
Takuya
Takuya Yoshikawa (8):
KVM: MMU: Use __gfn_to_rmap() to clean up kvm_handle_hva()
KVM: Introduce hva_to_gfn_memslot() for kvm_handle_hva()
KVM: MMU: Make
We can treat every level uniformly.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3b53d9e..d3e7e6a 100644
--- a/arch/x86/kvm
This restricts hva handling in mmu code and makes it easier to extend
kvm_handle_hva() so that it can treat a range of addresses later in this
patch series.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
is converted to a loop over rmap
which is much more cache friendly than before.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 36 ++--
arch/x86/kvm
this by using kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
---
arch/powerpc/include/asm/kvm_host.h |2 ++
arch/powerpc/kvm/book3s_64_mmu_hv.c |7 +++
arch/x86/include/asm/kvm_host.h
This makes it possible to loop over rmap_pde arrays in the same way as
we do over rmap so that we can optimize kvm_handle_hva_range() easily in
the following patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c
This is needed to push trace_kvm_age_page() into kvm_age_rmapp() in the
following patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 16 +---
1 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
This restricts the tracing to page aging and makes it possible to
optimize kvm_handle_hva_range() further in the following patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 23 ++-
1 files changed, 10 insertions(+), 13 deletions
for each rmap in the range
unmap using rmap
With the preceding patches in the patch series, this made THP page
invalidation more than 5 times faster on our x86 host: the host became
more responsive during swapping the guest's memory as a result.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak
On Mon, 02 Jul 2012 15:41:30 +0300
Avi Kivity a...@redhat.com wrote:
kvm_mmu_slot_remove_write_access: same. It's hard to continue the loop
after a lockbreak though. We can switch it to be rmap based instead.
Switching to rmap based protection was on my queue before, but I wanted
to do that
On Sun, 01 Jul 2012 10:41:05 +0300
Avi Kivity a...@redhat.com wrote:
Note: in the new code we could not use trace_kvm_age_page(), so we just
dropped the point from kvm_handle_hva_range().
Can't it be pushed to handler()?
Yes, but it will be changed to print rmap, not hva and
On Thu, 28 Jun 2012 20:39:55 +0300
Avi Kivity a...@redhat.com wrote:
Note: write_count: 4 bytes, rmap_pde: 8 bytes. So we are wasting
extra paddings by packing them into lpage_info.
The wastage is quite low since it's just 4 bytes per 2MB.
Yes.
Why not just introduce a function to get
On Thu, 28 Jun 2012 20:53:47 +0300
Avi Kivity a...@redhat.com wrote:
Note: in the new code we could not use trace_kvm_age_page(), so we just
dropped the point from kvm_handle_hva_range().
Can't it be pushed to handler()?
Yes, but it will be changed to print rmap, not hva and gfn.
I
Updated patch 3 and 6 so that the unmap handler is called with exactly the
same rmap arguments as before, even when kvm_handle_hva_range() is called
with an unaligned [start, end).
Please see the comments I added there.
Takuya
Takuya Yoshikawa (6):
KVM: MMU: Use __gfn_to_rmap() to clean up
trace_kvm_age_page(), so we just
dropped the point from kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 37 +++--
1 files changed, 19 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
On Thu, 28 Jun 2012 11:12:51 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
struct kvm_arch_memory_slot {
+ unsigned long *rmap_pde[KVM_NR_PAGE_SIZES - 1];
struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
};
It looks a little more complex than before - need
On Thu, 21 Jun 2012 17:52:38 +0900
Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp wrote:
...
+ /* Handle the first one even if idx == idx_end. */
+ do {
+ ret |= handler(kvm, rmapp++, data);
+ } while
();
...
Takuya Yoshikawa (6):
KVM: MMU: Use __gfn_to_rmap() to clean up kvm_handle_hva()
KVM: Introduce hva_to_gfn_memslot() for kvm_handle_hva()
KVM: MMU: Make kvm_handle_hva() handle range of addresses
KVM: Introduce kvm_unmap_hva_range() for
kvm_mmu_notifier_invalidate_range_start()
KVM
is converted to a loop over rmap
which is much more cache friendly than before.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 31 +---
arch/x86/kvm
trace_kvm_age_page(), so we just
dropped the point from kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 39 ---
1 files changed, 20 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
I should have read this before sending v2...
On Thu, 21 Jun 2012 11:24:59 +0300
Avi Kivity a...@redhat.com wrote:
1. Separate rmap_pde from lpage_info->write_count and
make this a simple array. (I once tried this.)
This has the potential to increase cache misses, but I don't think
On Wed, 20 Jun 2012 15:57:15 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
Introduce a common function to abstract spte write-protect to
cleanup the code
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
...
+/* Return true if the spte is dropped. */
+static
On Wed, 20 Jun 2012 17:11:06 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
Strange! Why do you think it is wrong? It is just debug code.
kvm_mmu_slot_remove_write_access() does not use rmap but the debug code says:
rmap_printk("rmap_write_protect: spte %p %llx\n", sptep,
On Wed, 20 Jun 2012 21:21:07 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
Again, rmap does not break the logic; the spte we handle in this function
must be in rmap.
I'm not saying whether this breaks some logic or not.
rmap_printk("rmap_write_protect: spte %p %llx\n", sptep,
On Thu, 21 Jun 2012 09:48:05 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
We can change the debug message later if needed.
Actually, i am going to use tracepoint instead of
these debug code.
That's very nice!
Then, please change the trace log to correspond to the
new
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
The following commit did not care about the error handling path:
commit c1a7b32a14138f908df52d7c53b5ce3415ec6b50
KVM: Avoid wasting pages for small lpage_info arrays
If memory allocation fails, vfree() will be called with the address
On Mon, 18 Jun 2012 15:11:42 +0300
Avi Kivity a...@redhat.com wrote:
Potential for improvement: don't do 512 iterations on same large page.
Something like
if ((gfn ^ prev_gfn) & mask(level))
ret |= handler(...)
with clever selection of the first prev_gfn so it always matches
On Tue, 19 Jun 2012 09:01:36 -0500
Anthony Liguori anth...@codemonkey.ws wrote:
I'm not at all convinced that postcopy is a good idea. There needs to be a
clear expression of what the value proposition is that's backed by benchmarks.
Those
benchmarks need to include latency measurements of
On Mon, 18 Jun 2012 15:11:42 +0300
Avi Kivity a...@redhat.com wrote:
kvm_for_each_memslot(memslot, slots) {
- gfn_t gfn = hva_to_gfn(hva, memslot);
+ gfn_t gfn = hva_to_gfn(start_hva, memslot);
+ gfn_t end_gfn = hva_to_gfn(end_hva, memslot);
These
On Mon, 18 Jun 2012 16:21:20 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
[not about this patch]
EPT accessed/dirty bits will be used for more things in the future.
Are there any rules for using these bits?
Same as other bits?
Do you mean hardware rules or KVM rules?
KVM
Takuya Yoshikawa (4):
KVM: MMU: Use __gfn_to_rmap() to clean up kvm_handle_hva()
KVM: Introduce hva_to_gfn() for kvm_handle_hva()
KVM: MMU: Make kvm_handle_hva() handle range of addresses
KVM: Introduce kvm_unmap_hva_range() for
kvm_mmu_notifier_invalidate_range_start()
arch/powerpc
We can treat every level uniformly.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 24dd43d..a2f3969 100644
--- a/arch/x86/kvm
kvm_handle_hva_range() which makes the loop look like this:
for each memslot
for each guest page in memslot
unmap using rmap
In this new processing, the actual work is converted to the loop over
rmap array which is much more cache friendly than before.
Signed-off-by: Takuya Yoshikawa