Re: Any comments? Re: [RFC][PATCH 0/12] KVM, x86, ppc, asm-generic: moving dirty bitmaps to user space

2010-06-01 Thread Marcelo Tosatti
On Mon, May 24, 2010 at 04:05:29PM +0900, Takuya Yoshikawa wrote:
 (2010/05/17 18:06), Takuya Yoshikawa wrote:
 
 User allocated bitmaps have the advantage of reducing pinned memory.
 However we have plenty more pinned memory allocated in memory slots, so
 by itself, user allocated bitmaps don't justify this change.
 
 Sorry for pinging several times.
 
 
 In that sense, what do you think about the question I sent last week?
 
 === REPOST 1 ===
  
   mark_page_dirty is called with the mmu_lock spinlock held in set_spte.
   Must find a way to move it outside of the spinlock section.
 
 I am now trying to do something to solve this spinlock problem, but the
 spinlock section looks too wide to solve with a simple workaround.
 
 Sorry, but I have to admit that the mmu_lock spinlock problem had
 completely slipped my mind. Although I looked through the code, it does
 not seem easy to move set_bit_user() outside of the spinlock section
 without breaking the semantics of its protection.
 
 So this may take some time to solve.
 
 But personally, I want to do something about x86's vmalloc()-every-time
 problem even if moving dirty bitmaps to user space cannot be achieved
 soon.
 
 In that sense, do you mind if we do double buffering without moving the
 dirty bitmaps to user space?
 
 So I would be happy if you could give me any comments on these
 alternative options.

What if you pin the bitmaps? 

The alternative to that is to move mark_page_dirty(gfn) before acquisition
of mmu_lock, in the page fault paths. The downside of that is a
potentially (large?) number of false positives in the dirty bitmap.
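
Roughly something like this (just a sketch of the idea, not the actual
patch; the helper is made up and the real call sites, e.g. set_spte(),
differ per fault path):

/* Sketch only: mark the gfn dirty before taking mmu_lock. If the
 * fault is later resolved without an actual write, the bit set here
 * is a harmless false positive in the dirty bitmap. */
static int fault_path_sketch(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	/* outside the spinlock: sleeping / user-memory access is OK */
	mark_page_dirty(vcpu->kvm, gfn);

	spin_lock(&vcpu->kvm->mmu_lock);
	/* ... install the spte as usual, without touching the bitmap ... */
	spin_unlock(&vcpu->kvm->mmu_lock);
	return 0;
}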



Re: Any comments? Re: [RFC][PATCH 0/12] KVM, x86, ppc, asm-generic: moving dirty bitmaps to user space

2010-06-01 Thread Takuya Yoshikawa

(2010/06/01 19:55), Marcelo Tosatti wrote:


Sorry, but I have to admit that the mmu_lock spinlock problem had
completely slipped my mind. Although I looked through the code, it does
not seem easy to move set_bit_user() outside of the spinlock section
without breaking the semantics of its protection.

So this may take some time to solve.

But personally, I want to do something about x86's vmalloc()-every-time
problem even if moving dirty bitmaps to user space cannot be achieved
soon.

In that sense, do you mind if we do double buffering without moving the
dirty bitmaps to user space?


So I would be happy if you could give me any comments on these
alternative options.


What if you pin the bitmaps?


Yes, pinning the bitmaps works. The small problem is that we would need
to hold a dirty_bitmap_pages[] array for every slot, whose size depends
on the slot length, and of course manage the pinning itself.

From a performance point of view, having a double-sized vmalloc'ed
area may be better.
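
To make the bookkeeping concrete, something like this (just a sketch;
the dirty_bitmap_pages field and the helper are made-up names, and
error unwinding is omitted):

/* Sketch only: pin the pages backing a user-allocated dirty bitmap so
 * the bits can be set while mmu_lock is held. The pages stay resident
 * until released with put_page(). */
static int pin_dirty_bitmap(struct kvm_memory_slot *slot,
			    unsigned long user_addr)
{
	/* one bit per guest page, rounded up to whole bitmap pages */
	int nr = DIV_ROUND_UP(slot->npages, PAGE_SIZE * BITS_PER_BYTE);

	slot->dirty_bitmap_pages = kzalloc(nr * sizeof(struct page *),
					   GFP_KERNEL);
	if (!slot->dirty_bitmap_pages)
		return -ENOMEM;

	if (get_user_pages_fast(user_addr, nr, 1,
				slot->dirty_bitmap_pages) != nr)
		return -EFAULT;

	return 0;
}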



The alternative to that is to move mark_page_dirty(gfn) before acquisition
of mmu_lock, in the page fault paths. The downside of that is a
potentially (large?) number of false positives in the dirty bitmap.



Interesting, but probably dangerous.


From my experience, though this reflects my personal view, removing the
vmalloc() that x86 currently does at every get_dirty_log call is the
simplest and most effective change.

So if you don't mind, I want to double the size of the vmalloc'ed area
for x86 without changing the other parts.

 == if allocating this one extra bitmap were problematic, dirty logging
 itself would already be in danger of failure: we need an allocation of
 the same size at the time of the switch anyway.

Make sense?
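
Roughly like this (just a sketch; dirty_bitmap_head is a made-up field
name for the start of the double-sized area, and write protection of
the slot is omitted):

/* Sketch only: keep one double-sized vmalloc'ed area per slot and flip
 * between its halves at get_dirty_log time, instead of vmalloc'ing a
 * fresh bitmap on every call. */
static int switch_dirty_bitmap(struct kvm *kvm,
			       struct kvm_memory_slot *slot,
			       unsigned long __user *user_buf)
{
	unsigned long bytes = ALIGN(slot->npages, BITS_PER_LONG) / 8;
	unsigned long *cur = slot->dirty_bitmap;
	/* pick whichever half of the double-sized area is not in use */
	unsigned long *next = (cur == slot->dirty_bitmap_head) ?
		cur + bytes / sizeof(unsigned long) : slot->dirty_bitmap_head;

	memset(next, 0, bytes);

	spin_lock(&kvm->mmu_lock);
	slot->dirty_bitmap = next;	/* writers now mark the clean half */
	spin_unlock(&kvm->mmu_lock);

	/* the retired half can be copied out without holding mmu_lock */
	if (copy_to_user(user_buf, cur, bytes))
		return -EFAULT;
	return 0;
}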


We can consider moving dirty bitmaps to user space later.


Re: Any comments? Re: [RFC][PATCH 0/12] KVM, x86, ppc, asm-generic: moving dirty bitmaps to user space

2010-06-01 Thread Marcelo Tosatti
On Tue, Jun 01, 2010 at 09:05:38PM +0900, Takuya Yoshikawa wrote:
 (2010/06/01 19:55), Marcelo Tosatti wrote:
 
 Sorry, but I have to admit that the mmu_lock spinlock problem had
 completely slipped my mind. Although I looked through the code, it does
 not seem easy to move set_bit_user() outside of the spinlock section
 without breaking the semantics of its protection.
 
 So this may take some time to solve.
 
 But personally, I want to do something about x86's vmalloc()-every-time
 problem even if moving dirty bitmaps to user space cannot be achieved
 soon.
 
 In that sense, do you mind if we do double buffering without moving the
 dirty bitmaps to user space?
 
 So I would be happy if you could give me any comments on these
 alternative options.
 
 What if you pin the bitmaps?
 
 Yes, pinning the bitmaps works. The small problem is that we would need
 to hold a dirty_bitmap_pages[] array for every slot, whose size depends
 on the slot length, and of course manage the pinning itself.
 
 From a performance point of view, having a double-sized vmalloc'ed
 area may be better.
 
 
 The alternative to that is to move mark_page_dirty(gfn) before acquisition
 of mmu_lock, in the page fault paths. The downside of that is a
 potentially (large?) number of false positives in the dirty bitmap.
 
 
 Interesting, but probably dangerous.
 
 
 From my experience, though this reflects my personal view, removing the
 vmalloc() that x86 currently does at every get_dirty_log call is the
 simplest and most effective change.
 
 So if you don't mind, I want to double the size of the vmalloc'ed area
 for x86 without changing the other parts.
 
  == if allocating this one extra bitmap were problematic, dirty logging
  itself would already be in danger of failure: we need an allocation of
  the same size at the time of the switch anyway.
 
 Make sense?

That seems the most sensible approach.

 
 We can consider moving dirty bitmaps to user space later.


Any comments? Re: [RFC][PATCH 0/12] KVM, x86, ppc, asm-generic: moving dirty bitmaps to user space

2010-05-24 Thread Takuya Yoshikawa

(2010/05/17 18:06), Takuya Yoshikawa wrote:



User allocated bitmaps have the advantage of reducing pinned memory.
However we have plenty more pinned memory allocated in memory slots, so
by itself, user allocated bitmaps don't justify this change.


Sorry for pinging several times.



In that sense, what do you think about the question I sent last week?

=== REPOST 1 ===
 
  mark_page_dirty is called with the mmu_lock spinlock held in set_spte.
  Must find a way to move it outside of the spinlock section.


I am now trying to do something to solve this spinlock problem, but the
spinlock section looks too wide to solve with a simple workaround.


Sorry, but I have to admit that the mmu_lock spinlock problem had
completely slipped my mind. Although I looked through the code, it does
not seem easy to move set_bit_user() outside of the spinlock section
without breaking the semantics of its protection.

So this may take some time to solve.

But personally, I want to do something about x86's vmalloc()-every-time
problem even if moving dirty bitmaps to user space cannot be achieved
soon.

In that sense, do you mind if we do double buffering without moving the
dirty bitmaps to user space?


So I would be happy if you could give me any comments on these
alternative options.

Thanks,
  Takuya




I know that vmalloc() address space is precious on x86, but even now,
at the time of get_dirty_log, we already use the same amount of memory
as double buffering would (see the sketch below).
=== 1 END ===
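
To spell out what I mean, here is a paraphrased sketch of the current
vmalloc()-every-time pattern (not the exact kernel code): a second
full-size bitmap is allocated on every call, so two bitmaps briefly
coexist anyway.

/* Sketch only: while the old bitmap is being copied out, a freshly
 * vmalloc'ed one of the same size is live, so peak memory use already
 * equals double buffering. Write protection of the slot is omitted. */
static int get_dirty_log_sketch(struct kvm *kvm,
				struct kvm_memory_slot *slot,
				void __user *user_buf)
{
	unsigned long n = ALIGN(slot->npages, BITS_PER_LONG) / 8;
	unsigned long *fresh = vmalloc(n);	/* second bitmap, every call */

	if (!fresh)
		return -ENOMEM;
	memset(fresh, 0, n);

	spin_lock(&kvm->mmu_lock);
	swap(slot->dirty_bitmap, fresh);	/* fresh now holds the old bits */
	spin_unlock(&kvm->mmu_lock);

	if (copy_to_user(user_buf, fresh, n)) {
		vfree(fresh);
		return -EFAULT;
	}
	vfree(fresh);
	return 0;
}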




Perhaps if we optimize memory slot write protection (I have some ideas
about this) we can make the performance improvement more pronounced.



That would be really nice!

Even now we can measure the performance improvement from introducing the
switch ioctl when the guest is relatively idle, so the combination will
be really effective!

=== REPOST 2 ===
 
  Can you post such a test, for an idle large guest?
 
  OK, I will!


Results of the low-workload test (running top during migration) first:

4GB guest
picked up slots[1] (len=3757047808) only
*
  get.org    get.opt  switch.opt

  1060875     310292      190335
  1076754     301295      188600
   655504     318284      196029
   529769     301471         325
   694796      70216      221172
   651868     353073      196184
   543339     312865      213236
  1061938      72785      203090
   689527     323901      249519
   621364     323881         473
  1063671      70703      192958
   915903     336318      174008
  1046462     332384         782
  1037942      72783      190655
   680122     318305      243544
   688156     314935      193526
   558658     265934      190550
   652454     372135      196270
   660140      68613         352
  1101947     378642      186575
      ...        ...         ...
*

As expected, the difference shows up more clearly here.

In this case, switch.opt cut the time per iteration by about one third
(roughly 0.1 msec) compared to get.opt.

And when the slot is cleaner, the ratio is even bigger.
=== 2 END ===

