On Thu, Dec 22, 2011 at 11:13:15AM -0600, Anthony Liguori wrote:
> On 12/22/2011 05:01 AM, Marcelo Tosatti wrote:
> >On Thu, Dec 01, 2011 at 06:40:31PM +0100, Peter Zijlstra wrote:
> >>No virt is crap, it needs to die, its horrid, and any solution aimed
> >>squarely at virt only is shit and not worth considering, that simple.
On Thu, 2011-12-22 at 09:01 -0200, Marcelo Tosatti wrote:
>
> > No virt is crap, it needs to die, its horrid, and any solution aimed
> > squarely at virt only is shit and not worth considering, that simple.
>
> Removing this phrase from context (feel free to object on that basis
> to the following
On 12/22/2011 05:01 AM, Marcelo Tosatti wrote:
On Thu, Dec 01, 2011 at 06:40:31PM +0100, Peter Zijlstra wrote:
No virt is crap, it needs to die, its horrid, and any solution aimed
squarely at virt only is shit and not worth considering, that simple.
Removing this phrase from context (feel free to object on that basis to the following
On Thu, Dec 01, 2011 at 06:40:31PM +0100, Peter Zijlstra wrote:
> On Wed, 2011-11-23 at 16:03 +0100, Andrea Arcangeli wrote:
> > Hi!
> >
> > On Mon, Nov 21, 2011 at 07:51:21PM -0600, Anthony Liguori wrote:
> > > Fundamentally, the entity that should be deciding what memory should be
> > > present
On Thu, Dec 01, 2011 at 06:40:31PM +0100, Peter Zijlstra wrote:
> On Wed, 2011-11-23 at 16:03 +0100, Andrea Arcangeli wrote:
> From what I gather what you propose is to periodically unmap all user
> memory (or map it !r !w !x, which is effectively the same) and take the
> fault. This fault will
On Thu, Dec 01, 2011 at 06:36:23PM +0100, Andrea Arcangeli wrote:
> On Thu, Dec 01, 2011 at 10:55:20PM +0530, Dipankar Sarma wrote:
> > On Wed, Nov 30, 2011 at 06:41:13PM +0100, Andrea Arcangeli wrote:
> > > On Wed, Nov 30, 2011 at 09:52:37PM +0530, Dipankar Sarma wrote:
> > > > create the guest topology correctly and optimize for NUMA. This would work for us.
On Wed, 2011-11-23 at 16:03 +0100, Andrea Arcangeli wrote:
> Hi!
>
> On Mon, Nov 21, 2011 at 07:51:21PM -0600, Anthony Liguori wrote:
> > Fundamentally, the entity that should be deciding what memory should be
> > present
> > and where it should be located is the kernel. I'm fundamentally opposed
On Thu, Dec 01, 2011 at 10:55:20PM +0530, Dipankar Sarma wrote:
> On Wed, Nov 30, 2011 at 06:41:13PM +0100, Andrea Arcangeli wrote:
> > On Wed, Nov 30, 2011 at 09:52:37PM +0530, Dipankar Sarma wrote:
> > > create the guest topology correctly and optimize for NUMA. This
> > > would work for us.
> >
On Wed, Nov 30, 2011 at 06:41:13PM +0100, Andrea Arcangeli wrote:
> On Wed, Nov 30, 2011 at 09:52:37PM +0530, Dipankar Sarma wrote:
> > create the guest topology correctly and optimize for NUMA. This
> > would work for us.
>
> Even in the case of 1 guest that fits in one node, you're not going to
On Wed, Nov 30, 2011 at 09:52:37PM +0530, Dipankar Sarma wrote:
> create the guest topology correctly and optimize for NUMA. This
> would work for us.
Even in the case of 1 guest that fits in one node, you're not going to
max out the full bandwidth of all memory channels with this.
qemu all can d
* Peter Zijlstra (a.p.zijls...@chello.nl) wrote:
> On Wed, 2011-11-30 at 21:52 +0530, Dipankar Sarma wrote:
> >
> > Also, if at all topology changes due to migration or host kernel decisions,
> > we can make use of something like VPHN (virtual processor home node)
> > capability on Power systems to have guest kernel update its topology knowledge.
On Wed, 2011-11-30 at 21:52 +0530, Dipankar Sarma wrote:
>
> Also, if at all topology changes due to migration or host kernel decisions,
> we can make use of something like VPHN (virtual processor home node)
> capability on Power systems to have guest kernel update its topology
> knowledge. You ca
On Wed, Nov 23, 2011 at 07:34:37PM +0100, Alexander Graf wrote:
> On 11/23/2011 04:03 PM, Andrea Arcangeli wrote:
> >Hi!
> >
> >
> >In my view the trouble of the numa hard bindings is not the fact
> >they're hard and qemu has to also decide the location (in fact it
> >doesn't need to decide the location
On Wed, Nov 23, 2011 at 07:34:37PM +0100, Alexander Graf wrote:
> So if you define "-numa node,mem=1G,cpus=0" then QEMU should be able to
> tell the kernel that this GB of RAM actually is close to that vCPU thread.
> Of course the admin still needs to decide how to split up memory. That's
> the d
On 11/23/2011 04:03 PM, Andrea Arcangeli wrote:
Hi!
On Mon, Nov 21, 2011 at 07:51:21PM -0600, Anthony Liguori wrote:
Fundamentally, the entity that should be deciding what memory should be present
and where it should be located is the kernel. I'm fundamentally opposed to trying
to make QEMU override the scheduler/mm by using cpu or
Hi!
On Mon, Nov 21, 2011 at 07:51:21PM -0600, Anthony Liguori wrote:
> Fundamentally, the entity that should be deciding what memory should be
> present
> and where it should be located is the kernel. I'm fundamentally opposed to
> trying
> to make QEMU override the scheduler/mm by using cpu or
On 11/21/2011 04:50 PM, Chris Wright wrote:
* Peter Zijlstra (a.p.zijls...@chello.nl) wrote:
On Mon, 2011-11-21 at 21:30 +0530, Bharata B Rao wrote:
In the original post of this mail thread, I proposed a way to export
guest RAM ranges (Guest Physical Address-GPA) and their corresponding
host virtual mappings (Host Virtual Address-HVA)
On 11/21/2011 11:03 AM, Peter Zijlstra wrote:
On Mon, 2011-11-21 at 21:30 +0530, Bharata B Rao wrote:
In the original post of this mail thread, I proposed a way to export
guest RAM ranges (Guest Physical Address-GPA) and their corresponding
host virtual mappings (Host Virtual Address-HVA)
* Peter Zijlstra (a.p.zijls...@chello.nl) wrote:
> On Mon, 2011-11-21 at 21:30 +0530, Bharata B Rao wrote:
> >
> > In the original post of this mail thread, I proposed a way to export
> > guest RAM ranges (Guest Physical Address-GPA) and their corresponding
> > host virtual mappings (Host Virtual Address-HVA)
On Mon, 2011-11-21 at 20:03 +0200, Avi Kivity wrote:
>
> Does ms_mbind() require that its vmas in its area be completely
> contained in the region, or does it split vmas on demand? I suggest the
> latter to avoid exposing implementation details.
as implemented (which is still rather incomplete)
On 11/21/2011 05:25 PM, Peter Zijlstra wrote:
> On Mon, 2011-11-21 at 20:48 +0530, Bharata B Rao wrote:
>
> > I looked at Peter's recent work in this area.
> > (https://lkml.org/lkml/2011/11/17/204)
> >
> > It introduces two interfaces:
> >
> > 1. ms_tbind() to bind a thread to a memsched(*) group
On Mon, 2011-11-21 at 21:30 +0530, Bharata B Rao wrote:
>
> In the original post of this mail thread, I proposed a way to export
> guest RAM ranges (Guest Physical Address-GPA) and their corresponding
> host virtual mappings (Host Virtual Address-HVA) from QEMU (via the QEMU monitor).
> The idea
On Mon, Nov 21, 2011 at 04:25:26PM +0100, Peter Zijlstra wrote:
> On Mon, 2011-11-21 at 20:48 +0530, Bharata B Rao wrote:
>
> > I looked at Peter's recent work in this area.
> > (https://lkml.org/lkml/2011/11/17/204)
> >
> > It introduces two interfaces:
> >
> > 1. ms_tbind() to bind a thread to a memsched(*) group
On Mon, 2011-11-21 at 20:48 +0530, Bharata B Rao wrote:
> I looked at Peter's recent work in this area.
> (https://lkml.org/lkml/2011/11/17/204)
>
> It introduces two interfaces:
>
> 1. ms_tbind() to bind a thread to a memsched(*) group
> 2. ms_mbind() to bind a memory region to memsched group
>
On Tue, Nov 08, 2011 at 09:33:04AM -0800, Chris Wright wrote:
> * Alexander Graf (ag...@suse.de) wrote:
> > On 29.10.2011, at 20:45, Bharata B Rao wrote:
> > > As guests become NUMA aware, it becomes important for the guests to
> > > have correct NUMA policies when they run on NUMA aware hosts.
> >
* Alexander Graf (ag...@suse.de) wrote:
> On 29.10.2011, at 20:45, Bharata B Rao wrote:
> > As guests become NUMA aware, it becomes important for the guests to
> > have correct NUMA policies when they run on NUMA aware hosts.
> > Currently limited support for NUMA binding is available via libvirt
>
* Alexander Graf [2011-10-29 21:57:38]:
>
> On 29.10.2011, at 20:45, Bharata B Rao wrote:
>
> > Hi,
> >
> > As guests become NUMA aware, it becomes important for the guests to
> > have correct NUMA policies when they run on NUMA aware hosts.
> > Currently limited support for NUMA binding is available via libvirt
On 29.10.2011, at 20:45, Bharata B Rao wrote:
> Hi,
>
> As guests become NUMA aware, it becomes important for the guests to
> have correct NUMA policies when they run on NUMA aware hosts.
> Currently limited support for NUMA binding is available via libvirt
> where it is possible to apply a NUMA policy to the guest as a whole.
Hi,
As guests become NUMA aware, it becomes important for the guests to
have correct NUMA policies when they run on NUMA aware hosts.
Currently limited support for NUMA binding is available via libvirt
where it is possible to apply a NUMA policy to the guest as a whole.
However multinode guests wo