Re: [kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-18 Thread Avi Kivity
Marcelo Tosatti wrote:
 On Mon, Mar 17, 2008 at 04:41:18PM +0200, Avi Kivity wrote:
   
 Marcelo Tosatti wrote:
 
 While aging is not too hard to do, I don't think it would add much in 
 practice; we rarely observe mmu shadow pages being recycled due to 
 memory pressure.  So this is mostly helpful for preventing a VM from 
 pinning memory when under severe memory pressure, where we don't expect 
 good performance anyway.

 
 Issue is that the shrinker callback will not be called only under
 severe memory pressure, but for normal system pressure too.

  
   
 How much shrinkage goes on under normal pressure?
 

 It depends on the number of LRU pages scanned and the size of the cache.

 Roughly the number of LRU pages scanned divided by shrinker->seeks,
 relative to cache size (mm/vmscan.c, shrink_slab()).

   

Since the maximum cache size is a small fraction of memory size, I think 
we should be okay here.

 Rebuilding a single shadow page costs a maximum of 512 faults (so about 
 1 msec).  If the shrinker evicts one entry per second, this is a 
 performance hit of 0.1%.

 Perhaps if we set the cost high enough, the normal eviction rate will be 
 low enough.
 

 I think it's pretty easy to check the referenced bit on pages to
 keep recently used ones from being zapped.
   

Not so easy:

- the pages don't have an accessed bit, the parent ptes do, so we need 
to scan the parent ptes list
- pages start out referenced, so we need to age them in two stages: 
first clear the accessed bits (and move them back to the tail of the 
queue); if we find a page at the head with all accessed bits clear, we 
can throw it away.
- root pages don't have parent ptes, so we need to track access to them 
manually
- if the accessed bit clearing rate is too high, it loses its meaning

Nothing horribly hard, but not trivial either.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel


Re: [kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-17 Thread Marcelo Tosatti
On Sun, Mar 16, 2008 at 01:28:43PM +0200, Avi Kivity wrote:
 Marcelo Tosatti wrote:
  On Wed, Mar 12, 2008 at 08:13:41PM +0200, Izik Eidus wrote:

  this patch simply registers the mmu cache with the shrinker.
  
 
  Hi Izik,
 
  Nice.
 
  I think you want some sort of aging mechanism here. Walk through all
  translations of a shadow page clearing the referenced bit of all
  mappings it holds (and moving pages with any accessed translation to the
  head of the list).

 
 While aging is not too hard to do, I don't think it would add much in 
 practice; we rarely observe mmu shadow pages being recycled due to 
 memory pressure.  So this is mostly helpful for preventing a VM from 
 pinning memory when under severe memory pressure, where we don't expect 
 good performance anyway.

Issue is that the shrinker callback will not be called only under
severe memory pressure, but for normal system pressure too.




Re: [kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-17 Thread Avi Kivity
Marcelo Tosatti wrote:

 While aging is not too hard to do, I don't think it would add much in 
 practice; we rarely observe mmu shadow pages being recycled due to 
 memory pressure.  So this is mostly helpful for preventing a VM from 
 pinning memory when under severe memory pressure, where we don't expect 
 good performance anyway.
 

 Issue is that the shrinker callback will not be called only under
 severe memory pressure, but for normal system pressure too.

   

How much shrinkage goes on under normal pressure?

Rebuilding a single shadow page costs a maximum of 512 faults (so about 
1 msec).  If the shrinker evicts one entry per second, this is a 
performance hit of 0.1%.

Perhaps if we set the cost high enough, the normal eviction rate will be 
low enough.

-- 
error compiling committee.c: too many arguments to function




Re: [kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-17 Thread Marcelo Tosatti
On Mon, Mar 17, 2008 at 04:41:18PM +0200, Avi Kivity wrote:
 Marcelo Tosatti wrote:
 
 While aging is not too hard to do, I don't think it would add much in 
 practice; we rarely observe mmu shadow pages being recycled due to 
 memory pressure.  So this is mostly helpful for preventing a VM from 
 pinning memory when under severe memory pressure, where we don't expect 
 good performance anyway.
 
 
 Issue is that the shrinker callback will not be called only under
 severe memory pressure, but for normal system pressure too.
 
   
 
 How much shrinkage goes on under normal pressure?

It depends on the number of LRU pages scanned and the size of the cache.

Roughly the number of LRU pages scanned divided by shrinker->seeks,
relative to cache size (mm/vmscan.c, shrink_slab()).

 Rebuilding a single shadow page costs a maximum of 512 faults (so about 
 1 msec).  If the shrinker evicts one entry per second, this is a 
 performance hit of 0.1%.
 
 Perhaps if we set the cost high enough, the normal eviction rate will be 
 low enough.

I think it's pretty easy to check the referenced bit on pages to
keep recently used ones from being zapped.



Re: [kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-16 Thread Avi Kivity
Marcelo Tosatti wrote:
 On Wed, Mar 12, 2008 at 08:13:41PM +0200, Izik Eidus wrote:
   
 this patch simply registers the mmu cache with the shrinker.
 

 Hi Izik,

 Nice.

 I think you want some sort of aging mechanism here. Walk through all
 translations of a shadow page clearing the referenced bit of all
 mappings it holds (and moving pages with any accessed translation to the
 head of the list).
   

While aging is not too hard to do, I don't think it would add much in 
practice; we rarely observe mmu shadow pages being recycled due to 
memory pressure.  So this is mostly helpful for preventing a VM from 
pinning memory when under severe memory pressure, where we don't expect 
good performance anyway.


-- 
error compiling committee.c: too many arguments to function




Re: [kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-13 Thread Marcelo Tosatti
On Thu, Mar 13, 2008 at 01:23:23AM +0200, Izik Eidus wrote:
 Marcelo Tosatti wrote:
 On Wed, Mar 12, 2008 at 08:13:41PM +0200, Izik Eidus wrote:
   
 this patch simply registers the mmu cache with the shrinker.
 
 
 Hi Izik,
   
 Hello Marcelo,
 
 Nice.
 
 I think you want some sort of aging mechanism here. 
 
 well, it has been on the todo list for a long time to do some kind of LRU 
 for the shadow mmu pages;
 right now it recycles pages in a random way...
 
 Walk through all
 translations of a shadow page clearing the referenced bit of all
 mappings it holds (and moving pages with any accessed translation to the
 head of the list).
   
 ok, i think i will just add a function named sort_accessed_mmu_pages,
 that will just put at the top of the list the pages pointed to by the ptes 
 that weren't accessed,
 and use it when i shrink, and when pages get recycled
 
 this is what you meant, right?

By top I suppose you mean end. So yes, right.



[kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-12 Thread Izik Eidus
this patch simply registers the mmu cache with the shrinker.


0004-KVM-register-the-kvm-mmu-cache-with-the-shrinker.patch
Description: application/mbox


Re: [kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-12 Thread Anthony Liguori
Izik Eidus wrote:
 this patch simply registers the mmu cache with the shrinker.

Please inline patches in the future as it makes it easier to review.  
The implementation looks good and I think it's a good idea.

One issue is that there is one shrinker for all VMs, but you run through the 
list of VMs in order.  This means the first VM in the list is most 
frequently going to be shrunk down to KVM_MIN_ALLOC_MMU_PAGES.  This 
seems unfair and potentially dangerous: the shrinker can potentially be 
triggered by the growth of the MMU cache on other VMs.

I think, at the least, you should attempt to go through the VMs in a 
round-robin fashion to ensure that if you shrink one VM, the next time 
you'll shrink a different VM.

The other thing I wonder about is whether DEFAULT_SEEKS is the best 
value to use.  On the one hand, a guest page fault is probably not as 
expensive as reclaiming something from disk.  On the other hand, NPT 
guests are likely to be very sensitive to evicting things from the 
shadow page cache.  I would think it's pretty clear that in the NPT 
case, the MMU cache should have a higher seek cost than the default.

Regards,

Anthony Liguori

 





Re: [kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-12 Thread Marcelo Tosatti
On Wed, Mar 12, 2008 at 08:13:41PM +0200, Izik Eidus wrote:
 this patch simply registers the mmu cache with the shrinker.

Hi Izik,

Nice.

I think you want some sort of aging mechanism here. Walk through all
translations of a shadow page clearing the referenced bit of all
mappings it holds (and moving pages with any accessed translation to the
head of the list).

Because the active_mmu list position only indicates the order in which
those pages have been shadowed, not how frequently or recently they have
been accessed.

And then have a maximum number of pages that you walk (nr_to_scan) on
each shrinker callback run. Oh, I don't think you want to free more than
one page on each run (right now you can free a large chunk per run).




Re: [kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-12 Thread Izik Eidus
Marcelo Tosatti wrote:
 On Wed, Mar 12, 2008 at 08:13:41PM +0200, Izik Eidus wrote:
   
 this patch simply registers the mmu cache with the shrinker.
 

 Hi Izik,
   
Hello Marcelo,

 Nice.

 I think you want some sort of aging mechanism here. 

well, it has been on the todo list for a long time to do some kind of LRU 
for the shadow mmu pages;
right now it recycles pages in a random way...

 Walk through all
 translations of a shadow page clearing the referenced bit of all
 mappings it holds (and moving pages with any accessed translation to the
 head of the list).
   
ok, i think i will just add a function named sort_accessed_mmu_pages,
that will just put at the top of the list the pages pointed to by the ptes 
that weren't accessed,
and use it when i shrink, and when pages get recycled

this is what you meant, right?
 Because the active_mmu list position only indicates the order in which
 those pages have been shadowed, not how frequently or recently they have
 been accessed.
   

yep

 And then have a maximum number of pages that you walk (nr_to_scan) on
 each shrinker callback run. Oh, I don't think you want to free more than
 one page on each run (right now you can free a large chunk per run).

   

thanks.



Re: [kvm-devel] [PATCH] shrinker support for the mmu cache

2008-03-12 Thread Izik Eidus
Anthony Liguori wrote:
 Izik Eidus wrote:
  this patch simply registers the mmu cache with the shrinker.

 Please inline patches in the future as it makes it easier to review.  
I knew the time would come when people would force me to send patches 
inline (will happen next time)... :)

 The implementation looks good and I think it's a good idea.

 One issue is that there is one shrinker for all VMs, but you run through the 
 list of VMs in order.  This means the first VM in the list is most 
 frequently going to be shrunk down to KVM_MIN_ALLOC_MMU_PAGES.  This 
 seems unfair and potentially dangerous: the shrinker can potentially be 
 triggered by the growth of the MMU cache on other VMs.

 I think, at the least, you should attempt to go through the VMs in a 
 round-robin fashion to ensure that if you shrink one VM, the next time 
 you'll shrink a different VM.

you are 100% right, i will do that.


 The other thing I wonder about is whether DEFAULT_SEEKS is the best 
 value to use.  On the one hand, a guest page fault is probably not as 
 expensive as reclaiming something from disk.  On the other hand, NPT 
 guests are likely to be very sensitive to evicting things from the 
 shadow page cache.  I would think it's pretty clear that in the NPT 
 case, the MMU cache should have a higher seek cost than the default.

let me look at this, i think you have a case


 Regards,

 Anthony Liguori

 



