Marcelo Tosatti wrote:
On Thu, Sep 10, 2009 at 07:38:58PM +0300, Izik Eidus wrote:
this is needed for kvm if it wants ksm to directly map pages into its
shadow page tables.
Signed-off-by: Izik Eidus
---
arch/x86/include/asm/kvm_host.h |  1 +
arch/x86/kvm/mmu.c              | 70 +++
Marcelo Tosatti wrote:
On Thu, Sep 10, 2009 at 07:38:57PM +0300, Izik Eidus wrote:
this flag notifies that the host physical page we are pointing to from
the spte is write protected, and therefore we can't change its access
to writable unless we run get_user_pages(write = 1).
(this is needed fo
When I try to use a (Linux) VM via vnc there appear to be two mouse
locations at once. One is the pointer displayed on the screen; the
other is shown as a little box by krdc when I select "always show
local cursor" in the krdc menu. It also appears when I use
xtightvncviewer.
The two locatio
Marcelo Tosatti wrote:
On Fri, Sep 11, 2009 at 09:36:10AM -0600, Bruce Rogers wrote:
I am wondering if anyone has investigated how well kvm scales when supporting
many guests, or many vcpus or both.
I'll do some investigations into the per vm memory overhead and
play with bumping the max vcpu
On Fri, Sep 11, 2009 at 09:36:10AM -0600, Bruce Rogers wrote:
> I am wondering if anyone has investigated how well kvm scales when supporting
> many guests, or many vcpus or both.
>
> I'll do some investigations into the per vm memory overhead and
> play with bumping the max vcpu limit way beyond
On Thu, Sep 10, 2009 at 07:38:58PM +0300, Izik Eidus wrote:
> this is needed for kvm if it wants ksm to directly map pages into its
> shadow page tables.
>
> Signed-off-by: Izik Eidus
> ---
> arch/x86/include/asm/kvm_host.h |  1 +
> arch/x86/kvm/mmu.c              | 70 ++
On Thu, Sep 10, 2009 at 07:38:57PM +0300, Izik Eidus wrote:
> this flag notifies that the host physical page we are pointing to from
> the spte is write protected, and therefore we can't change its access
> to writable unless we run get_user_pages(write = 1).
>
> (this is needed for change_pte suppor
Gregory Haskins wrote:
[snip]
>
> FWIW: VBUS handles this situation via the "memctx" abstraction. IOW,
> the memory is not assumed to be a userspace address. Rather, it is a
> memctx-specific address, which can be userspace, or any other type
> (including hardware, dma-engine, etc). As long a
Ira W. Snyder wrote:
> On Mon, Sep 07, 2009 at 01:15:37PM +0300, Michael S. Tsirkin wrote:
>> On Thu, Sep 03, 2009 at 11:39:45AM -0700, Ira W. Snyder wrote:
>>> On Thu, Aug 27, 2009 at 07:07:50PM +0300, Michael S. Tsirkin wrote:
What it is: vhost net is a character device that can be used to r
On Fri, Sep 11, 2009 at 10:36 AM, Bruce Rogers wrote:
> Also, when I did a simple experiment with vcpu overcommitment, I was
> surprised how quickly performance suffered (just bringing a Linux vm up),
> since I would have assumed the additional vcpus would have been halted the
> vast majority o
I am wondering if anyone has investigated how well kvm scales when supporting
many guests, or many vcpus or both.
I'll do some investigations into the per vm memory overhead and play with
bumping the max vcpu limit way beyond 16, but hopefully someone can comment on
issues such as locking probl
Michael,
We are very interested in your patch and would like to try it.
I have collected your 3 patches on the kernel side and 4 patches on the qemu side.
The patches are listed here:
PATCHv5-1-3-mm-export-use_mm-unuse_mm-to-modules.patch
PATCHv5-2-3-mm-reduce-atomic-use-on-use_mm-fast-path.patch
P
(Difference from previous version: make sure tests that share dependencies, but
do not necessarily depend on each other, run in the same pipeline.)
This patch adds a control.parallel file that runs several test execution
pipelines in parallel.
The number of pipelines is set to the number of CPUs
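The pipeline scheme described above can be sketched roughly as follows. This is a minimal illustration, not the actual control.parallel code: the function names (run_pipeline, split_into_pipelines) and the round-robin split are assumptions, and the real file also has to keep dependency-sharing tests in the same pipeline, which this sketch does not attempt.

```python
import multiprocessing

def run_pipeline(tests):
    # Placeholder for a pipeline body: in the real control file each
    # pipeline would execute its tests sequentially via the job object.
    return [t.upper() for t in tests]

def split_into_pipelines(tests, num_pipelines):
    # Round-robin the tests across the available pipelines.
    pipelines = [[] for _ in range(num_pipelines)]
    for i, t in enumerate(tests):
        pipelines[i % num_pipelines].append(t)
    return pipelines

if __name__ == "__main__":
    tests = ["boot", "reboot", "migrate", "autotest"]
    num_pipelines = multiprocessing.cpu_count()
    pipelines = split_into_pipelines(tests, num_pipelines)
    # Run one worker process per pipeline, in parallel.
    with multiprocessing.Pool(len(pipelines)) as pool:
        results = pool.map(run_pipeline, pipelines)
```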
(Difference from previous version: make sure timedrift is executed alone.
This should probably be a temporary solution until we find a better one, like
making sure timedrift is not executed in parallel to itself, while allowing it
to run in parallel to other tests.)
used_cpus denotes the number of
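The "timedrift runs alone" constraint above can be expressed as a small scheduling sketch. The EXCLUSIVE set and the schedule() helper are illustrative assumptions, not the patch's actual mechanism: exclusive tests are pulled out of the parallel batches and returned as a serial tail to run after everything else finishes.

```python
# Tests that must have the machine to themselves (assumed set).
EXCLUSIVE = {"timedrift"}

def schedule(tests, num_pipelines):
    # Separate exclusive tests from those that may run concurrently.
    parallel = [t for t in tests if t not in EXCLUSIVE]
    exclusive = [t for t in tests if t in EXCLUSIVE]
    # Stripe the parallel tests across pipelines; drop empty pipelines.
    batches = [parallel[i::num_pipelines] for i in range(num_pipelines)]
    return [b for b in batches if b], exclusive
```

A caller would run each batch in its own pipeline, then run the exclusive list serially once the pipelines drain.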
Use random.SystemRandom() (which uses /dev/urandom) in
kvm_utils.generate_random_string().
Currently, when running multiple jobs in parallel, the generated strings
occasionally collide, and this is very bad.
Also, don't seed the random number generator in kvm.py. This is not necessary
and is prob
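The fix above can be illustrated with a short sketch. The function name matches kvm_utils.generate_random_string, but this body is an assumption for illustration, not the actual implementation:

```python
import random
import string

# SystemRandom draws from /dev/urandom, so jobs started in parallel
# do not share a deterministically seeded PRNG state and their
# generated strings no longer collide.
_sysrand = random.SystemRandom()

def generate_random_string(length):
    """Return a random alphanumeric string of the given length."""
    chars = string.ascii_letters + string.digits
    return "".join(_sysrand.choice(chars) for _ in range(length))
```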
Bugs item #2826486, was opened at 2009-07-24 11:16
Message generated for change (Comment added) made by rmdir
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2826486&group_id=180599
Please note that this message will contain a full copy of the comment thr