logging support, used by architectures that share
> >> + * comman dirty page logging implementation.
> >
> > s/comman/common/
> >
> > The approach looks sane to me, especially as it does not change other
> > architectures needlessly.
> >
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
Takuya Yoshikawa
--
To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c |2 +-
arch/powerpc/kvm/book3s_hv.c |2 +-
arch/x86/kvm/x86.c |2 +-
include/linux/kvm_host.h |1 -
virt/kvm/kvm_main.c |8
5 files changed, 3 insertions(+), 12 deletions(-)
diff --
Alex, what do you think about this?
On Thu, 23 Aug 2012 16:35:15 +0800
Gavin Shan wrote:
> On Thu, Aug 23, 2012 at 05:24:00PM +0900, Takuya Yoshikawa wrote:
> >On Thu, 23 Aug 2012 15:42:49 +0800
> >Gavin Shan wrote:
> >
> >> The build error was caused by that
On Thu, 23 Aug 2012 15:42:49 +0800
Gavin Shan wrote:
> The build error was caused by that builtin functions are calling
> the functions implemented in modules. That was introduced by the
> following commit.
>
> commit 4d8b81abc47b83a1939e59df2fdb0e98dfe0eedd
>
> The patches fix that to convert
On Thu, 9 Aug 2012 22:25:32 -0300
Marcelo Tosatti wrote:
> I'll send a patch to flush per memslot in the next days, you can work
> out the PPC details in the meantime.
Are you going to implement that using slot_bitmap?
Since I'm now converting kvm_mmu_slot_remove_write_access() to
rmap based pr
On Tue, 7 Aug 2012 12:57:13 +0200
Alexander Graf wrote:
> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
> +{
> + struct kvm_memslots *slots = kvm_memslots(kvm);
> + struct kvm_memory_slot *memslot;
> +
> + kvm_for_each_memslot(memslot, slots)
> + if
Two reasons:
- x86 can integrate rmap and rmap_pde and remove heuristics in
__gfn_to_rmap().
- Some architectures do not need rmap.
Since rmap is one of the most memory consuming stuff in KVM, ppc'd
better restrict the allocation to Book3S HV.
Signed-off-by: Takuya Yoshikawa
Cc:
This helps to make rmap architecture specific in a later patch.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |3 ++-
arch/x86/kvm/mmu_audit.c |4 +---
2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a9a2052
Instead, check npages consistently. This helps to make rmap
architecture specific in a later patch.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b6379e5..701dbd4
tions?
Takuya Yoshikawa (3):
KVM: Stop checking rmap to see if slot is being created
KVM: MMU: Use gfn_to_rmap() instead of directly reading rmap array
KVM: Push rmap into kvm_arch_memory_slot
arch/powerpc/include/asm/kvm_host.h |1 +
arch/powerpc/kvm/book3s_64_mmu_hv.c |6 ++--
arch/po
On Thu, 5 Jul 2012 10:08:07 -0300
Marcelo Tosatti wrote:
> Neat.
>
> Andrea can you please ACK?
>
ping
for each rmap in the range
unmap using rmap
With the preceding patches in the patch series, this made THP page
invalidation more than 5 times faster on our x86 host: the host became
more responsive during swapping the guest's memory as a result.
Signed-off-by: Takuya Yoshikawa
---
This restricts the tracing to page aging and makes it possible to
optimize kvm_handle_hva_range() further in the following patch.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 23 ++-
1 files changed, 10 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm
This is needed to push trace_kvm_age_page() into kvm_age_rmapp() in the
following patch.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 16 +---
1 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f2c408a..bf8d50e
This makes it possible to loop over rmap_pde arrays in the same way as
we do over rmap so that we can optimize kvm_handle_hva_range() easily in
the following patch.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c |6 +++---
arch
using kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/include/asm/kvm_host.h |2 ++
arch/powerpc/kvm/book3s_64_mmu_hv.c |7 +++
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c |5
The actual work is converted to a loop over rmap
which is much more cache friendly than before.
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 36 ++--
arch/x86/kvm/mmu.c
This restricts hva handling in mmu code and makes it easier to extend
kvm_handle_hva() so that it can treat a range of addresses later in this
patch series.
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/kvm/book3s_64_mmu_hv.c |6 +++---
arch/x86
We can treat every level uniformly.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3b53d9e..d3e7e6a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
v3->v4: Resolved trace_kvm_age_page() issue -- patch 6,7
v2->v3: Fixed intersection calculations. -- patch 3, 8
Takuya
Takuya Yoshikawa (8):
KVM: MMU: Use __gfn_to_rmap() to clean up kvm_handle_hva()
KVM: Introduce hva_to_gfn_memslot() for kvm_handle_hva()
KVM: MMU
On Sun, 01 Jul 2012 10:41:05 +0300
Avi Kivity wrote:
> >> > Note: in the new code we could not use trace_kvm_age_page(), so we just
> >> > dropped the point from kvm_handle_hva_range().
> >> >
> >>
> >> Can't it be pushed to handler()?
> >
> > Yes, but it will be changed to print rmap, not hva and gfn.
On Thu, 28 Jun 2012 20:53:47 +0300
Avi Kivity wrote:
> > Note: in the new code we could not use trace_kvm_age_page(), so we just
> > dropped the point from kvm_handle_hva_range().
> >
>
> Can't it be pushed to handler()?
Yes, but it will be changed to print rmap, not hva and gfn.
I will do in
On Thu, 28 Jun 2012 20:39:55 +0300
Avi Kivity wrote:
> > Note: write_count: 4 bytes, rmap_pde: 8 bytes. So we are wasting
> > extra paddings by packing them into lpage_info.
>
> The wastage is quite low since it's just 4 bytes per 2MB.
Yes.
> >> Why not just introduce a function to get the ne
On Thu, 28 Jun 2012 11:12:51 +0800
Xiao Guangrong wrote:
> > struct kvm_arch_memory_slot {
> > + unsigned long *rmap_pde[KVM_NR_PAGE_SIZES - 1];
> > struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
> > };
> >
>
> It looks little complex than before - need manage more alloc-ed/f
Note: in the new code we could not use trace_kvm_age_page(), so we just
dropped the point from kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 37 +++--
1 files changed, 19 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 47
This makes it possible to loop over rmap_pde arrays in the same way as
we do over rmap so that we can optimize kvm_handle_hva_range() easily in
the following patch.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c |6 +++---
arch
using kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/include/asm/kvm_host.h |2 ++
arch/powerpc/kvm/book3s_64_mmu_hv.c |7 +++
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c |5
The actual work is converted to a loop over rmap
which is much more cache friendly than before.
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 36 ++--
arch/x86/kvm/mmu.c
This restricts hva handling in mmu code and makes it easier to extend
kvm_handle_hva() so that it can treat a range of addresses later in this
patch series.
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/kvm/book3s_64_mmu_hv.c |6 +++---
arch/x86
We can treat every level uniformly.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3b53d9e..d3e7e6a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
Updated patch 3 and 6 so that the unmap handler is called with exactly the
same rmap arguments as before, even if kvm_handle_hva_range() is called
with an unaligned [start, end).
Please see the comments I added there.
Takuya
Takuya Yoshikawa (6):
KVM: MMU: Use __gfn_to_rmap() to clean up
On Thu, 21 Jun 2012 17:52:38 +0900
Takuya Yoshikawa wrote:
...
> + /* Handle the first one even if idx == idx_end. */
> + do {
> + ret |= handler(kvm, rmapp++, data);
> + } while (++idx < idx_end);
I should have read this before sending v2...
On Thu, 21 Jun 2012 11:24:59 +0300
Avi Kivity wrote:
> > 1. Separate rmap_pde from lpage_info->write_count and
> >make this a simple array. (I once tried this.)
> >
>
> This has the potential to increase cache misses, but I don't think it's
> a
Note: in the new code we could not use trace_kvm_age_page(), so we just
dropped the point from kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 39 ---
1 files changed, 20 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 30
This makes it possible to loop over rmap_pde arrays in the same way as
we do over rmap so that we can optimize kvm_handle_hva_range() easily in
the following patch.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c |6 +++---
arch
using kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/include/asm/kvm_host.h |2 ++
arch/powerpc/kvm/book3s_64_mmu_hv.c |7 +++
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c |5
The actual work is converted to a loop over rmap
which is much more cache friendly than before.
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 31 +---
arch/x86/kvm/mmu.c
This restricts hva handling in mmu code and makes it easier to extend
kvm_handle_hva() so that it can treat a range of addresses later in this
patch series.
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/kvm/book3s_64_mmu_hv.c |6 +++---
arch/x86
We can treat every level uniformly.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3b53d9e..d3e7e6a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
kvm_mmu_notifier_invalidate_range_start();
...
Takuya Yoshikawa (6):
KVM: MMU: Use __gfn_to_rmap() to clean up kvm_handle_hva()
KVM: Introduce hva_to_gfn_memslot() for kvm_handle_hva()
KVM: MMU: Make kvm_handle_hva() handle range of addresses
KVM: Introduce kvm_unmap_hva_range() for
kvm_mmu_notifier_invalidate_range_start()
On Mon, 18 Jun 2012 15:11:42 +0300
Avi Kivity wrote:
> Potential for improvement: don't do 512 iterations on same large page.
>
> Something like
>
> if ((gfn ^ prev_gfn) & mask(level))
> ret |= handler(...)
>
> with clever selection of the first prev_gfn so it always matches (~gfn
On Mon, 18 Jun 2012 15:11:42 +0300
Avi Kivity wrote:
> > kvm_for_each_memslot(memslot, slots) {
> > - gfn_t gfn = hva_to_gfn(hva, memslot);
> > + gfn_t gfn = hva_to_gfn(start_hva, memslot);
> > + gfn_t end_gfn = hva_to_gfn(end_hva, memslot);
>
> These will retu
On Fri, 15 Jun 2012 20:31:44 +0900
Takuya Yoshikawa wrote:
...
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index d03eb6f..53716dd 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>
using kvm_handle_hva_range().
On our x86 host, with a minimum configuration for the guest, the
invalidation became 40% faster on average and the worst case was also
improved to the same degree.
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/include/asm
This restricts hva handling in mmu code and makes it easier to extend
kvm_handle_hva() so that it can treat a range of addresses later in this
patch series.
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 12 +---
arch
We can treat every level uniformly.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 24dd43d..a2f3969 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
Takuya Yoshikawa (4):
KVM: MMU: Use __gfn_to_rmap() to clean up kvm_handle_hva()
KVM: Introduce hva_to_gfn() for kvm_handle_hva()
KVM: MMU: Make kvm_handle_hva() handle range of addresses
KVM: Introduce kvm_unmap_hva_range() for
kvm_mmu_notifier_invalidate_range_start()
arch/powerpc
introducing kvm_handle_hva_range() which makes the loop look like this:
for each memslot
for each guest page in memslot
unmap using rmap
In this new processing, the actual work is converted to the loop over
rmap array which is much more cache friendly than before.
Signed-off-by: Takuya Yoshikawa
On Tue, 17 Apr 2012 17:56:24 +0300
Avi Kivity wrote:
> > For live migration, range-based control may be enough due to the locality
> > of WWS.
>
> What's WWS?
IIRC it was mentioned in a usenix paper: Writable Working Set.
May not be a commonly known concept.
Kind of working set, but is written
On Tue, 17 Apr 2012 15:41:39 +0300
Avi Kivity wrote:
> > Since there are many known algorithms to predict hot memory pages,
> > the userspace will be able to tune the frequency of GET_DIRTY_LOG for such
> > parts not to get too many faults repeatedly, if we can restrict the range
> > of pages to
On Tue, 17 Apr 2012 10:51:40 +0300
Avi Kivity wrote:
> That's true with the write protect everything approach we use now. But
> it's not true with range-based write protection, where you issue
> GET_DIRTY_LOG on a range of pages and only need to re-write-protect them.
>
> (the motivation for th
On Sun, 15 Apr 2012 12:32:59 +0300
Avi Kivity wrote:
> Just to throw another idea into the mix - we can have write-protect-less
> dirty logging, too. Instead of write protection, drop the dirty bit,
> and check it again when reading the dirty log. It might look like we're
> accessing the spte t
On Thu, 05 Apr 2012 20:02:44 +0300
Avi Kivity wrote:
> In a recent conversation, Linus persuaded me that it's time for change
> in our git workflow; the following will bring it in line with the
> current practices of most trees.
>
> The current 'master' branch will be abandoned (still available
On Thu, 29 Mar 2012 17:26:59 +0200
Avi Kivity wrote:
> > > Hm, the patch uses ->slot_bitmap which we might want to kill if we
> > > increase the number of slots dramatically, as some people want to do.
> > >
> > > btw, what happened to that patch, did it just get ignored on the list?
> >
> > I d
On Thu, 29 Mar 2012 11:44:12 +0200
Avi Kivity wrote:
> > Even without using reverse mapping we can restrict that flush easily:
> >
> > http://www.spinics.net/lists/kvm/msg68695.html
> > [PATCH] KVM: Avoid zapping unrelated shadows in
> > __kvm_set_memory_region()
> >
> > This would be be
On Wed, 28 Mar 2012 11:37:38 +0200
Avi Kivity wrote:
> > Now I see that x86 just seems to flush everything, which is quite heavy
> > handed considering how often cirrus does it, but maybe it doesn't have a
> > choice (lack of reverse mapping from GPA ?).
>
> We do have a reverse mapping, so we c
Avi Kivity wrote:
> > > Slot searching is quite fast since there's a small number of slots, and
> > > we sort the larger ones to be in the front, so positive lookups are fast.
> > > We cache negative lookups in the shadow page tables (an spte can be
> > > either "not mapped", "mapped to RAM"
(2012/01/24 23:35), Takuya Yoshikawa wrote:
On Tue, 24 Jan 2012 13:24:56 +0200
Avi Kivity wrote:
On 01/23/2012 12:42 PM, Takuya Yoshikawa wrote:
The last one is an RFC patch:
I think it is better to refactor the rmap things, if needed, before
other architectures than x86 starts large pages
On Tue, 24 Jan 2012 13:24:56 +0200
Avi Kivity wrote:
> On 01/23/2012 12:42 PM, Takuya Yoshikawa wrote:
> > The last one is an RFC patch:
> >
> > I think it is better to refactor the rmap things, if needed, before
> > other architectures than x86 starts large
problem by decoupling rmap_pde from lpage_info
write_count and making the rmap array two dimensional which holds the
old rmap_pde elements in it.
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c|8
arch/powerpc/kvm/book3s_64_mmu_hv.c |6 +++---
arch
We can also use this for PT_PAGE_TABLE_LEVEL to treat every level
uniformly.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c |3 +--
include/linux/kvm_host.h |7 +++
virt/kvm/kvm_main.c |4 +---
3 files changed, 9 insertions(+), 5 deletions(-)
diff --git a
We can hide the implementation details and treat every level uniformly.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 844fcce..0e82d9d 100644
--- a/arch/x86
We want to eliminate direct access to the rmap array.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu_audit.c |4 +---
1 files changed, 1 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu_audit.c b/arch/x86/kvm/mmu_audit.c
index 6eabae3..e62fa4f 100644
--- a/arch/x86/kvm
The last one is an RFC patch:
I think it is better to refactor the rmap things, if needed, before
other architectures than x86 starts large pages support.
Takuya
arch/ia64/kvm/kvm-ia64.c|8
arch/powerpc/kvm/book3s_64_mmu_hv.c |6 +++---
arch/powerpc/kvm/book
(2011/12/26 8:35), Paul Mackerras wrote:
On Fri, Dec 23, 2011 at 02:23:30PM +0100, Alexander Graf wrote:
So if I read things correctly, this is the only case you're setting
pages as dirty. What if you have the following:
guest adds HTAB entry x
guest writes to page mapped by x
guest r
Aren't you using upstream qemu?
IIRC, ppc kvm needs to use upstream qemu.
I use qemu-kvm git version. Do you mean qemu instead of qemu-kvm?
Hi, qemu 0.13.0 build passed
Yes, that's what I meant!
Takuya
(2010/11/19 15:01), Yang Rui Rui wrote:
Hi,
I searched the archive found some discutions about this, not fixed yet?
could someone tell, is g4 kvm available now?
Hi, (added kvm-ppc to Cc)
I'm using g4 (Mac mini box) to run KVM.
- though not tried 2.6.37-rc2 yet.
Aren't you using upstream qe
(2010/09/04 18:24), Alexander Graf wrote:
On 03.09.2010, at 10:34, Takuya Yoshikawa wrote:
This is the 2nd version of get_dirty_log cleanup.
Changelog:
In version 1, I changed the timing of copy_to_user() in the
powerpc's get_dirty_log by mistake. This time, I've kept the
We move sanity check and lock related parts to the arch independent code.
This will help future cleanups.
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c | 14 ++
arch/powerpc/kvm/book3s.c | 14 ++
arch/powerpc/kvm/booke.c |2 +-
arch/s390/kvm/kvm
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c | 15 ++-
arch/powerpc/kvm/book3s.c | 25 ++---
include/linux/kvm_host.h |2 --
virt/kvm/kvm_main.c | 34 --
4 files changed, 28 insertions(+), 48 deletions(-)
This is the 2nd version of get_dirty_log cleanup.
Changelog:
In version 1, I changed the timing of copy_to_user() in the
powerpc's get_dirty_log by mistake. This time, I've kept the
timing and tests on ppc box now look OK to me!
Takuya
(2010/07/12 17:00), Alexander Graf wrote:
On 12.07.2010, at 09:59, Takuya Yoshikawa wrote:
Are there any restrictions in KVM on ps3 linux?
Not really. The biggest thing to keep in mind is that ram is really limited. So
make sure you have a lot of swap. In fact, I used to use a PS3 for
Are there any restrictions in KVM on ps3 linux?
Not really. The biggest thing to keep in mind is that ram is really limited. So
make sure you have a lot of swap. In fact, I used to use a PS3 for development
and testing quite a lot myself, so it definitely should work.
Thanks about the info.
Hi Alex,
I've been testing dirty logging on ps3 linux for a few weeks.
- I luckily got one by chance.
Although I could find what was the main cause of breaking dirty logging,
I'm struggling with stabilizing KVM on ps3 linux apart from dirty logging.
Problem: In almost every execution of qem
(2010/06/27 16:32), Avi Kivity wrote:
On 06/25/2010 10:25 PM, Alexander Graf wrote:
On 23.06.2010, at 08:01, Takuya Yoshikawa wrote:
kvm_get_dirty_log() is a helper function for
kvm_vm_ioctl_get_dirty_log() which
is currently used by ia64 and ppc and the following is what it is doing
> This patch plus 4/4 broke dirty bitmap updating on PPC. I didn't get around
> to track down why, but I figured you should now. Is there any way to get you
> a PPC development box? A simple G4 or G5 should be 200$ on ebay by now :).
>
A simple G4 or G5, thanks for the info, I'll buy one.
I h
On Fri, 25 Jun 2010 21:25:57 +0200
Alexander Graf wrote:
>
> This patch plus 4/4 broke dirty bitmap updating on PPC. I didn't get around
> to track down why, but I figured you should now. Is there any way to get you
> a PPC development box? A simple G4 or G5 should be 200$ on ebay by now :).
>
(2010/06/23 17:48), Avi Kivity wrote:
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 801d9f3..bea6f7c 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -1185,28 +1185,43 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
struct kvm_memory_slot *m
kvm_vm_ioctl_get_dirty_log() is now implemented as arch dependent function.
But now that we know what is actually arch dependent, we can split this into
arch dependent part and arch independent part easily.
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c | 14
sanity checks must
be done before kvm_ia64_sync_dirty_log(), we can say that this is not working
for code sharing effectively. So we just remove this.
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c | 20 ++--
arch/powerpc/kvm/book3s.c | 29
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c | 30 +++---
1 files changed, 11 insertions(+), 19 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index d85b5d2..5cb5865 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-i
kvm_get_dirty_log() calls copy_to_user(). So we need to narrow the
dirty_log_lock spin_lock section not to include this.
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64
This patch series is for making dirty logging development, and of course
maintenance, easier. Please see individual patches for details.
Changelog
v1 -> v2:
- rebased
- booke and s390, kvm_vm_ioctl_get_dirty_log() to
kvm_arch_vm_ioctl_get_dirty_log()
Takuya
---
arch/ia64/kvm/kvm-ia
Marcelo Tosatti wrote:
> On Tue, Jun 22, 2010 at 06:03:58PM +0900, Takuya Yoshikawa wrote:
> > This patch set is for making dirty logging development, and of course
> > maintenance, easier. Please see individual patches for details.
> >
> > Takuya
> >
>
kvm_vm_ioctl_get_dirty_log() is now implemented as an arch dependent function.
But now that we know what is actually arch dependent, we can split this into
arch dependent part and arch independent part easily.
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c | 14
and sanity checks must
be done before kvm_ia64_sync_dirty_log(), we can say that this is not working
for code sharing effectively. So we just remove it.
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c | 20 ++--
arch/powerpc/kvm/book3s.c | 29
kvm_ia64_sync_dirty_log() is a helper function for kvm_vm_ioctl_get_dirty_log()
which copies ia64's arch specific dirty bitmap to general one in memslot.
So doing sanity checks in this is unnatural. We move these checks outside of
this and change the prototype appropriately.
Signed-off-by: T
kvm_get_dirty_log() calls copy_to_user(). So we need to narrow the
dirty_log_lock spin_lock section not to include this.
Signed-off-by: Takuya Yoshikawa
---
arch/ia64/kvm/kvm-ia64.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64
This patch set is for making dirty logging development, and of course
maintenance, easier. Please see individual patches for details.
Takuya
---
arch/ia64/kvm/kvm-ia64.c | 54 ++--
arch/powerpc/kvm/book3s.c | 29 ++--
arch/x86/kv
(2010/06/01 19:55), Marcelo Tosatti wrote:
Sorry but I have to say that mmu_lock spin_lock problem was completely
out of
my mind. Although I looked through the code, it seems not easy to move the
set_bit_user to outside of spinlock section without breaking the
semantics of
its protection.
So th
(2010/05/17 18:06), Takuya Yoshikawa wrote:
User allocated bitmaps have the advantage of reducing pinned memory.
However we have plenty more pinned memory allocated in memory slots, so
by itself, user allocated bitmaps don't justify this change.
Sorry for pinging several times.
In
User allocated bitmaps have the advantage of reducing pinned memory.
However we have plenty more pinned memory allocated in memory slots, so
by itself, user allocated bitmaps don't justify this change.
In that sense, what do you think about the question I sent last week?
=== REPOST 1 ===
>>
>
mark_page_dirty is called with the mmu_lock spinlock held in set_spte.
Must find a way to move it outside of the spinlock section.
Oh, it's a serious problem. I have to consider it.
Avi, Marcelo,
Sorry but I have to say that mmu_lock spin_lock problem was completely out of
my mind. Althou
+static inline int set_bit_user_non_atomic(int nr, void __user *addr)
+{
+ u8 __user *p;
+ u8 val;
+
+ p = (u8 __user *)((unsigned long)addr + nr / BITS_PER_BYTE);
Does C do the + or the / first? Either way, I'd like to see brackets here :)
OK, I'll change like that! I li
[To ppc people]
Hi, Benjamin, Paul, Alex,
Please see the patches 6,7/12. I first say sorry for that I've not tested these
yet. In that sense, these may not be in the quality for precise reviews. But I
will be happy if you would give me any comments.
Alex, could you help me? Though I have a pl
r = 0;
@@ -1195,11 +1232,16 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
gfn = unalias_gfn(kvm, gfn);
memslot = gfn_to_memslot_unaliased(kvm, gfn);
if (memslot && memslot->dirty_bitmap) {
- unsigned long rel_gfn = gfn - memslot->base_gfn;
+
One alternative would be:
KVM_SWITCH_DIRTY_LOG passing the address of a bitmap. If the active
bitmap was clean, it returns 0, no switch performed. If the active
bitmap was dirty, the kernel switches to the new bitmap and returns 1.
And the responsability of cleaning the new bitmap could also b
In usual workload, the number of dirty pages varies a lot for each
iteration
and we should gain really a lot for relatively clean cases.
Can you post such a test, for an idle large guest?
OK, I'll do!
Result of "low workload test" (running top during migration) first,
4GB guest
picked u
(2010/05/11 12:43), Marcelo Tosatti wrote:
On Tue, May 04, 2010 at 10:08:21PM +0900, Takuya Yoshikawa wrote:
+How to Get
+
+Before calling this, you have to set the slot member of kvm_user_dirty_log
+to indicate the target memory slot.
+
+struct kvm_user_dirty_log {
+ __u32 slot
                     get.org  get.opt  switch.opt
slots[7].len=32768    278379    66398       64024
slots[8].len=32768    181246      270         160
slots[7].len=32768    263961    64673       64494
slots[8].len=32768    181655      265         160
slots[7].len=32768    263736    64701       64610
slots[8].len=32768    182785      267         160
slots[7].len=32768    260925    65360       65042
slots[8].len=