Andre Przywara <[EMAIL PROTECTED]> writes:
> It also improves the one node case by pinning a guest to this node and
> avoiding access of remote memory from one VCPU.
It depends -- it's not necessarily an improvement. e.g. if it leads to
some CPUs being idle while others are oversubscribed because of the
pinning you typically lose more than you win. In general default
pinning is a bad idea in my experience.
On Thu, Nov 27, 2008 at 11:23:21PM +0100, Andre Przywara wrote:
> Hi,
>
> this patch series introduces multiple NUMA nodes support within KVM guests.
> This will improve the performance of guests which are bigger than one
> node (number of VCPUs and/or amount of memory) and also allows better
> balancing by making better use of each node's memory.
Andre Przywara wrote:
The user (or better: management application) specifies the host nodes
the guest should use: -nodes 2,3 would create a two-node guest mapped to
nodes 2 and 3 on the host. These numbers are handed over to libnuma:
VCPUs are pinned to the nodes and the allocated guest memory is bound
to the respective nodes.
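A minimal sketch of what such libnuma pinning could look like (illustrative
only, using the libnuma v2 API; alloc_guest_node is a made-up helper name,
not the actual patch):

#include <numa.h>      /* link with -lnuma */
#include <stdio.h>
#include <stdlib.h>

/* Pin the calling VCPU thread to host node 'host_node' and allocate
 * this guest node's memory there (hypothetical helper, not qemu code). */
static void *alloc_guest_node(size_t size, int host_node)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this host\n");
        exit(1);
    }
    /* Restrict this thread's execution to the given host node. */
    if (numa_run_on_node(host_node) < 0)
        perror("numa_run_on_node");
    /* Allocate the guest memory on the same node. */
    void *mem = numa_alloc_onnode(size, host_node);
    if (!mem) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        exit(1);
    }
    return mem;
}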
Andi Kleen wrote:
It depends -- it's not necessarily an improvement. e.g. if it leads to
some CPUs being idle while others are oversubscribed because of the
pinning you typically lose more than you win. In general default
pinning is a bad idea in my experience.
Alternative, more flexible strategies ...
On Sat, Nov 29, 2008 at 08:43:35PM +0200, Avi Kivity wrote:
> Andi Kleen wrote:
> >It depends -- it's not necessarily an improvement. e.g. if it leads to
> >some CPUs being idle while others are oversubscribed because of the
> >pinning you typically lose more than you win. In general default
> >pinning you typically lose more than you win. In general default
> >pinning is a bad idea in my experience.
Andi Kleen wrote:
On Sat, Nov 29, 2008 at 08:43:35PM +0200, Avi Kivity wrote:
Andi Kleen wrote:
It depends -- it's not necessarily an improvement. e.g. if it leads to
some CPUs being idle while others are oversubscribed because of the
pinning you typically lose more than you win. In general default
pinning is a bad idea in my experience.
> I don't think the first one works without the second. Calling getcpu()
> on startup is meaningless since the initial placement doesn't take the ...
Who said anything about startup? The idea behind getcpu() is to call
it every time you allocate something.
> >
> >Anyway, it's not ideal either, but ...
Andi Kleen wrote:
I don't think the first one works without the second. Calling getcpu()
on startup is meaningless since the initial placement doesn't take the ...
Who said anything about startup? The idea behind getcpu() is to call
it every time you allocate something.
Qemu only allocates ...
On Sun, Nov 30, 2008 at 05:38:34PM +0200, Avi Kivity wrote:
> Andi Kleen wrote:
> >>I don't think the first one works without the second. Calling getcpu()
> >>on startup is meaningless since the initial placement doesn't take the
> >>
> >
> >Who said anything about startup? The idea behind getcpu() is to call
> >it every time you allocate something.
Andi Kleen wrote:
Please explain. When would you call getcpu() and what would you do at
that time?
When the guest allocates on the node of its current CPU: get memory
from the pool of the node getcpu() tells you it is running on. More
tricky is handling a guest explicitly accessing another node ...
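A sketch of this getcpu()-driven policy (assumed helper name; it uses
glibc's sched_getcpu() and libnuma's numa_node_of_cpu(), which may not
match what qemu would actually do):

#define _GNU_SOURCE
#include <sched.h>     /* sched_getcpu() */
#include <stddef.h>
#include <numa.h>      /* numa_node_of_cpu(), numa_alloc_onnode(); -lnuma */

/* Allocate from the pool of whatever node the caller runs on right now. */
static void *alloc_on_current_node(size_t size)
{
    int cpu = sched_getcpu();           /* cheap vdso call on recent kernels */
    int node = numa_node_of_cpu(cpu);   /* host node backing that CPU */
    if (cpu < 0 || node < 0)
        return numa_alloc_local(size);  /* fall back to local allocation */
    return numa_alloc_onnode(size, node);
}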
On Sun, Nov 30, 2008 at 06:38:14PM +0200, Avi Kivity wrote:
> The guest allocates when it touches the page for the first time. This
> means very little since all of memory may be touched during guest bootup
> or shortly afterwards. Even if not, it is still a one-time operation,
> and any choice ...
Andi Kleen wrote:
On Sun, Nov 30, 2008 at 06:38:14PM +0200, Avi Kivity wrote:
The guest allocates when it touches the page for the first time. This
means very little since all of memory may be touched during guest bootup
or shortly afterwards. Even if not, it is still a one-time operation,
and any choice ...
On Sun, Nov 30, 2008 at 07:11:40PM +0200, Avi Kivity wrote:
> Andi Kleen wrote:
> >On Sun, Nov 30, 2008 at 06:38:14PM +0200, Avi Kivity wrote:
> >
> >>The guest allocates when it touches the page for the first time. This
> >>means very little since all of memory may be touched during guest bootup
> >>or shortly afterwards. ...
Andi Kleen wrote:
I was more thinking about some heuristics that checks when a page
is first mapped into user space. The only problem is that it is zeroed
through the direct mapping before, but perhaps there is a way around it.
That's one of the rare cases when 32bit highmem actually makes things
easier.
> The page is allocated at an uninteresting point in time. For example,
> the boot loader allocates a bunch of pages.
The vast majority of pages are allocated when a process wants them
or the kernel uses them for the file cache.
>
> >>executes. First access happens somewhat later, but still we can ...
Andi Kleen wrote:
The page is allocated at an uninteresting point in time. For example,
the boot loader allocates a bunch of pages.
The vast majority of pages are allocated when a process wants them
or the kernel uses them for the file cache.
Right. Allocated from the guest kernel's perspective. This may be
different from the host kernel's perspective.
Skywing wrote:
The vast majority of pages are allocated when a process wants them
or the kernel uses them for the file cache.
Is that not going to be fairly guest-specific? For example, Windows has a thread that
does background zeroing of unallocated pages that aren't marked as zeroed already.
On Sun, Nov 30, 2008 at 10:07:01PM +0200, Avi Kivity wrote:
> Right. Allocated from the guest kernel's perspective. This may be
> different from the host kernel's perspective.
>
> Linux will delay touching memory until the last moment, Windows will not
> (likely it zeros pages on their own node).
Andi Kleen wrote:
On Sun, Nov 30, 2008 at 10:07:01PM +0200, Avi Kivity wrote:
Right. Allocated from the guest kernel's perspective. This may be
different from the host kernel's perspective.
Linux will delay touching memory until the last moment, Windows will not
(likely it zeros pages on their own node).
Avi Kivity wrote:
Well, testing is the only ...
Avi Kivity wrote:
Andre Przywara wrote:
The user (or better: management application) specifies the host nodes
the guest should use: -nodes 2,3 would create a two-node guest mapped to
nodes 2 and 3 on the host. These numbers are handed over to libnuma:
VCPUs are pinned to the nodes and the allocated guest memory is bound
to the respective nodes.
Andre Przywara wrote:
Avi Kivity wrote:
Andre Przywara wrote:
The user (or better: management application) specifies the host nodes
the guest should use: -nodes 2,3 would create a two-node guest mapped to
nodes 2 and 3 on the host. These numbers are handed over to libnuma:
VCPUs are pinned to the nodes and the allocated guest memory is bound
to the respective nodes.
On Mon, Dec 01, 2008 at 03:15:19PM +0100, Andre Przywara wrote:
> Avi Kivity wrote:
> >>Node over-committing is allowed (-nodes 0,0,0,0), omitting the -nodes
> >>parameter reverts to the old behavior.
> >
> >'-nodes' is too generic a name ('node' could also mean a host). Suggest
> >-numanode.
Daniel P. Berrange wrote:
The only problem is the default option for the host side, as libnuma
requires the nodes to be named explicitly. Maybe make the pin: part _not_
optional? I would at least want to pin the memory; one could discuss
the VCPUs...
I think keeping it optional makes ...
Andre Przywara wrote:
Hi,
this patch series introduces multiple NUMA nodes support within KVM
guests.
This will improve the performance of guests which are bigger than one
node (number of VCPUs and/or amount of memory) and also allows better
balancing by making better use of each node's memory.
Avi Kivity wrote:
Andre Przywara wrote:
Any other useful commands for the monitor? Maybe (temporary) VCPU
migration without page migration?
Right now vcpu migration is done externally (we export the thread IDs
so management can pin them as it wishes). If we add numa support, I
think it makes ...
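For illustration, external pinning with those exported thread IDs might
look like this (a hypothetical standalone tool using the libnuma v2 API,
not existing management code):

#include <numa.h>        /* -lnuma */
#include <stdio.h>
#include <sys/types.h>

/* Pin one exported VCPU thread ID onto the CPUs of a single host node. */
static int pin_vcpu_to_node(pid_t vcpu_tid, int node)
{
    struct bitmask *cpus = numa_allocate_cpumask();
    if (numa_node_to_cpus(node, cpus) < 0) {
        perror("numa_node_to_cpus");
        numa_free_cpumask(cpus);
        return -1;
    }
    /* Restrict the thread to that node's CPUs. */
    int ret = numa_sched_setaffinity(vcpu_tid, cpus);
    numa_free_cpumask(cpus);
    return ret;
}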
Anthony Liguori wrote:
numactl --offset=0G --length=1G --membind=0 --file /dev/shm/A --touch
numactl --offset=1G --length=1G --membind=1 --file /dev/shm/A --touch
And then create the VM with:
qemu-system-x86_64 -mem-path /dev/shm/A -m 2G ...
What's best about this approach is that you can do the placement
entirely from outside, without any NUMA code in qemu itself.
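To see why no qemu-side NUMA code is needed: the pages of the tmpfs file
were already allocated (touched) on the chosen nodes by numactl, so a
process that simply maps the file inherits that placement. A minimal
sketch (not QEMU's actual -mem-path implementation):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t ram_size = 2UL << 30;   /* 2G, matching -m 2G above */
    int fd = open("/dev/shm/A", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    void *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (ram == MAP_FAILED) { perror("mmap"); return 1; }

    /* Guest RAM accesses through 'ram' now land on the pages numactl
     * bound to nodes 0 and 1; no libnuma calls are needed here. */
    return 0;
}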
Anthony Liguori wrote:
Andre Przywara wrote:
Hi,
this patch series introduces multiple NUMA nodes support within KVM
guests.
This will improve the performance of guests which are bigger than one
node (number of VCPUs and/or amount of memory) and also allows better
balancing by making better use of each node's memory.
Anthony Liguori wrote:
Avi Kivity wrote:
Andre Przywara wrote:
Any other useful commands for the monitor? Maybe (temporary) VCPU
migration without page migration?
Right now vcpu migration is done externally (we export the thread IDs
so management can pin them as it wishes). If we add numa support, I
think it makes ...
Avi Kivity wrote:
Anthony Liguori wrote:
I see no compelling reason to do cpu placement internally. It can be
done quite effectively externally.
Memory allocation is tough, but I don't think it's out of reach.
Looking at the numactl man page, you can do:
numactl --offset=1G --length=1G --membind=1 --file /dev/shm/A --touch