On Sun, Nov 30, 2008 at 06:38:14PM +0200, Avi Kivity wrote:
> The guest allocates when it touches the page for the first time.  This 
> means very little since all of memory may be touched during guest bootup 
> or shortly afterwards.  Even if not, it is still a one-time operation, 
> and any choices we make based on it will last the lifetime of the guest.

I was thinking more of a heuristic that checks when a page is first
mapped into user space. The only problem is that the page is zeroed
through the direct mapping before that, but perhaps there is a way around it.
That's one of the rare cases where 32bit highmem actually makes things easier.
It might also be easier on an OS other than Linux that doesn't use the
direct mapping as aggressively.
> 
> >This is roughly equivalent of getting a fresh new demand fault page,
> >but doesn't require to unmap/free/remap.
> >  
> 
> Lost again, sorry.

free/unmap/remap normally gives you local memory. I tend to call
it the poor man's NUMA policy API.

The alternative is to keep your own per-node pools and allocate from the
correct one, but then you need either pinning or getcpu().
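Roughly like this (sched_getcpu() and numa_node_of_cpu() are real
interfaces; the pool pieces are made up for illustration):

#define _GNU_SOURCE
#include <sched.h>      /* sched_getcpu() */
#include <numa.h>       /* numa_node_of_cpu() */

struct pool;                            /* per-node free page pool (made up) */
extern struct pool *node_pools[];       /* one pool per host node (made up) */
extern void *pool_alloc(struct pool *pool);

void *alloc_guest_page(void)
{
        /* Which node is this thread on right now?  Without pinning the
         * answer can already be stale by the time we use it, which is
         * exactly the weakness of the getcpu() variant. */
        int node = numa_node_of_cpu(sched_getcpu());

        return pool_alloc(node_pools[node]);
}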

> 
> >The tricky bit is probably figuring out what is a fresh new page for
> >the guest. That might need some paravirtual help.
> >  
> 
> The guest typically recycles its own pages (exception is ballooning).  
> Also it doesn't make sense to manage this on a per page basis as the 
> guest won't do that. 

> We need to mimic real hardware.

The underlying allocation is done in pages, so the NUMA affinity can
just as well be handled at that level.

Basic algorithm (rough sketch below):
- If the guest touches a virtual node that is the same as the local node
of the current vcpu, assume it's a local allocation.
- On allocation, get the underlying page from the correct host node
based on a dynamic getcpu relationship.
- Find some way to get rid of unused pages, e.g. keep track of
the number of mappings to a page and age them, or use pv help.
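
Rough shape of the per-page decision, with made-up names (this is not
actual KVM code, just the idea):

struct gpage {
        unsigned long gfn;      /* guest frame number */
        int guest_node;         /* virtual node this gfn belongs to */
        int mapcount;           /* for aging/reclaim of unused pages */
};

/* static guest_node -> host_node layout, set up at guest creation (made up) */
extern int guest_to_host_node[];

int pick_host_node(struct gpage *gp, int vcpu_host_node, int vcpu_guest_node)
{
        if (gp->guest_node == vcpu_guest_node)
                /* Guest thinks this is local: back it with memory from
                 * the node the vcpu happens to run on right now. */
                return vcpu_host_node;

        /* Otherwise fall back to the static virtual->host node layout. */
        return guest_to_host_node[gp->guest_node];
}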

> The static case is simple.  We allocate memory from a few nodes (for 
> small guests, only one) and establish a guest_node -> host_node 
> mapping.  vcpus on guest node X are constrained to host node according 
> to this mapping.
> 
> The dynamic case is really complicated.  We can allow vcpus to wander to 
> other cpus on cpu overcommit, but need to pull them back soonish, or 
> alternatively migrate the entire node, taking into account the cost of 
> the migration, cpu availability on the target node, and memory 
> availability on the target node.  Since the cost is so huge, this needs 
> to be done on a very coarse scale.
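
The static case above can be done from userspace with plain libnuma; a
rough sketch (guest_node_mem()/guest_node_size() are made-up accessors
for the RAM slice that backs one virtual node):

#include <stddef.h>
#include <numa.h>

extern void *guest_node_mem(int guest_node);    /* made up */
extern size_t guest_node_size(int guest_node);  /* made up */

void bind_guest_node(int guest_node, int host_node)
{
        /* Back this virtual node's memory with pages from host_node. */
        numa_tonode_memory(guest_node_mem(guest_node),
                           guest_node_size(guest_node), host_node);
}

void bind_vcpu_thread(int host_node)
{
        /* Called from the vcpu thread itself: restrict it to the cpus
         * of its home node. */
        numa_run_on_node(host_node);
}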

I wrote a scheduler that did that kind of dynamic rebalancing on 2.4
(it was called homenode scheduling), but it never worked well on small
systems. It was moderately successful on some big NUMA boxes though.
The fundamental problem is that on small systems leaving a CPU idle is
always worse than using remote memory.

Always migrating memory on CPU migration is also too costly in the general
case, but it might be possible to make it work in the special case 
of vCPU guests with some tweaks.
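
The mechanism for that already exists (migrate_pages(), reachable
through libnuma); the hard part is the policy of when it pays off.
Rough sketch that moves everything the guest process has on the old
node (a real version would be more selective and wait until the vcpu
has settled on its new node):

#include <sys/types.h>
#include <numa.h>

int migrate_guest_memory(pid_t guest_pid, int old_host_node, int new_host_node)
{
        struct bitmask *from = numa_allocate_nodemask();
        struct bitmask *to   = numa_allocate_nodemask();
        int ret;

        numa_bitmask_setbit(from, old_host_node);
        numa_bitmask_setbit(to, new_host_node);

        /* Moves all pages of the process that currently live on the
         * 'from' nodes over to the 'to' nodes. */
        ret = numa_migrate_pages(guest_pid, from, to);

        numa_free_nodemask(from);
        numa_free_nodemask(to);
        return ret;
}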

-Andi
