Re: [PATCH 3/3] add support for change_pte mmu notifiers

2009-09-11 Thread Izik Eidus
Marcelo Tosatti wrote: On Thu, Sep 10, 2009 at 07:38:58PM +0300, Izik Eidus wrote: this is needed for kvm if it wants ksm to directly map pages into its shadow page tables. Signed-off-by: Izik Eidus --- arch/x86/include/asm/kvm_host.h |  1 + arch/x86/kvm/mmu.c | 70 +++
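For context, a minimal sketch of how such a change_pte callback could hook into KVM's mmu notifier (the callback name and kvm_set_spte_hva() follow the patch description; the body is a simplified illustration, not the actual diff):

    /*
     * Simplified illustration: KSM fires the change_pte notifier when it
     * repoints a host pte at a shared page; KVM can then update the
     * matching shadow pte in place instead of just invalidating it.
     */
    static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
                                            struct mm_struct *mm,
                                            unsigned long address,
                                            pte_t pte)
    {
            struct kvm *kvm = container_of(mn, struct kvm, mmu_notifier);

            spin_lock(&kvm->mmu_lock);
            kvm_set_spte_hva(kvm, address, pte); /* helper added by the patch */
            spin_unlock(&kvm->mmu_lock);
    }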

Re: [PATCH 2/3] add SPTE_HOST_WRITEABLE flag to the shadow ptes

2009-09-11 Thread Izik Eidus
Marcelo Tosatti wrote: On Thu, Sep 10, 2009 at 07:38:57PM +0300, Izik Eidus wrote: this flag notifies that the host physical page we are pointing to from the spte is write protected, and therefore we can't change its access to writable unless we run get_user_pages(write = 1). (this is needed fo
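A rough illustration of the mechanism (the flag name is from the patch; the bit position matches KVM's convention for software-available spte bits in arch/x86/kvm/mmu.c, and the helper below is hypothetical):

    /*
     * Illustrative sketch: record in a software-available spte bit whether
     * the backing host page was writable when the spte was created.  Only
     * sptes carrying this bit may later be made writable without redoing
     * get_user_pages(write = 1).
     */
    #define SPTE_HOST_WRITEABLE (1ULL << PT_FIRST_AVAIL_BITS_SHIFT)

    static bool spte_host_writeable(u64 spte) /* hypothetical helper */
    {
            return spte & SPTE_HOST_WRITEABLE;
    }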

losing mouse location with vnc

2009-09-11 Thread Ross Boylan
When I try to use a (Linux) VM via vnc there appear to be two mouse locations at once. One is the pointer displayed on the screen; the other is shown as a little box by krdc when I select "always show local cursor" in the krdc menu. It also appears when I use xtightvncviewer. The two locatio

Re: kvm scaling question

2009-09-11 Thread Andre Przywara
Marcelo Tosatti wrote: On Fri, Sep 11, 2009 at 09:36:10AM -0600, Bruce Rogers wrote: I am wondering if anyone has investigated how well kvm scales when supporting many guests, or many vcpus or both. I'll do some investigations into the per vm memory overhead and play with bumping the max vcpu

Re: kvm scaling question

2009-09-11 Thread Marcelo Tosatti
On Fri, Sep 11, 2009 at 09:36:10AM -0600, Bruce Rogers wrote: > I am wondering if anyone has investigated how well kvm scales when supporting > many guests, or many vcpus or both. > > I'll do some investigations into the per vm memory overhead and > play with bumping the max vcpu limit way beyond

Re: [PATCH 3/3] add support for change_pte mmu notifiers

2009-09-11 Thread Marcelo Tosatti
On Thu, Sep 10, 2009 at 07:38:58PM +0300, Izik Eidus wrote: > this is needed for kvm if it wants ksm to directly map pages into its > shadow page tables. > > Signed-off-by: Izik Eidus > --- > arch/x86/include/asm/kvm_host.h |  1 + > arch/x86/kvm/mmu.c | 70 ++

Re: [PATCH 2/3] add SPTE_HOST_WRITEABLE flag to the shadow ptes

2009-09-11 Thread Marcelo Tosatti
On Thu, Sep 10, 2009 at 07:38:57PM +0300, Izik Eidus wrote: > this flag notifies that the host physical page we are pointing to from > the spte is write protected, and therefore we can't change its access > to writable unless we run get_user_pages(write = 1). > > (this is needed for change_pte suppor

Re: [PATCHv5 3/3] vhost_net: a kernel-level virtio server

2009-09-11 Thread Gregory Haskins
Gregory Haskins wrote: [snip] > > FWIW: VBUS handles this situation via the "memctx" abstraction. IOW, > the memory is not assumed to be a userspace address. Rather, it is a > memctx-specific address, which can be userspace, or any other type > (including hardware, dma-engine, etc). As long a
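To make the abstraction concrete, one way a memctx-style interface could be shaped (illustrative only, not the actual vbus definitions):

    /*
     * Illustrative only: a memctx wraps "where the ring memory lives"
     * behind copy callbacks, so the same device code can target
     * userspace mappings, hardware windows, or a DMA engine.
     */
    struct memctx;

    struct memctx_ops {
            unsigned long (*copy_to)(struct memctx *ctx, void *to,
                                     const void *from, unsigned long len);
            unsigned long (*copy_from)(struct memctx *ctx, void *to,
                                       const void *from, unsigned long len);
            void (*release)(struct memctx *ctx);
    };

    struct memctx {
            struct memctx_ops *ops;
    };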

Re: [PATCHv5 3/3] vhost_net: a kernel-level virtio server

2009-09-11 Thread Gregory Haskins
Ira W. Snyder wrote: > On Mon, Sep 07, 2009 at 01:15:37PM +0300, Michael S. Tsirkin wrote: >> On Thu, Sep 03, 2009 at 11:39:45AM -0700, Ira W. Snyder wrote: >>> On Thu, Aug 27, 2009 at 07:07:50PM +0300, Michael S. Tsirkin wrote: What it is: vhost net is a character device that can be used to r
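For orientation, a rough sketch of the userspace side of such a device (ioctl names as they appear in the vhost patches; details are assumptions, error handling omitted):

    /*
     * Hedged sketch: minimal setup of the vhost-net character device,
     * attaching a tap fd as the backend for one virtqueue.
     */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    int vhost_net_attach(int tap_fd)
    {
            int vhost_fd = open("/dev/vhost-net", O_RDWR);
            struct vhost_vring_file backend = { .index = 0, .fd = tap_fd };

            ioctl(vhost_fd, VHOST_SET_OWNER);                 /* claim the device */
            ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend); /* queue 0 -> tap */
            return vhost_fd;
    }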

Re: kvm scaling question

2009-09-11 Thread Javier Guerra
On Fri, Sep 11, 2009 at 10:36 AM, Bruce Rogers wrote: > Also, when I did a simple experiment with vcpu overcommitment, I was > surprised how quickly performance suffered (just bringing a Linux vm up), > since I would have assumed the additional vcpus would have been halted the > vast majority o

kvm scaling question

2009-09-11 Thread Bruce Rogers
I am wondering if anyone has investigated how well kvm scales when supporting many guests, or many vcpus or both. I'll do some investigations into the per vm memory overhead and play with bumping the max vcpu limit way beyond 16, but hopefully someone can comment on issues such as locking probl

RE: [PATCHv5 3/3] vhost_net: a kernel-level virtio server

2009-09-11 Thread Xin, Xiaohui
Michael, We are very interested in your patch and would like to try it. I have collected your 3 kernel-side patches and 4 qemu-side patches. The patches are listed here: PATCHv5-1-3-mm-export-use_mm-unuse_mm-to-modules.patch PATCHv5-2-3-mm-reduce-atomic-use-on-use_mm-fast-path.patch P

[KVM-AUTOTEST PATCH v2 3/3] [RFC] KVM test: client parallel test execution

2009-09-11 Thread Michael Goldish
(Difference from previous version: make sure tests that share dependencies, but do not necessarily depend on each other, run in the same pipeline.) This patch adds a control.parallel file that runs several test execution pipelines in parallel. The number of pipelines is set to the number of CPUs

[KVM-AUTOTEST PATCH v2 2/3] [RFC] KVM test: kvm_tests.cfg.sample: add some scheduler params

2009-09-11 Thread Michael Goldish
(Difference from previous version: make sure timedrift is executed alone. This should probably be a temporary solution until we find a better one, like making sure timedrift is not executed in parallel to itself, while allowing it to run in parallel to other tests.) used_cpus denotes the number of

[KVM-AUTOTEST PATCH v2 1/3] KVM test: use a better source for random numbers

2009-09-11 Thread Michael Goldish
Use random.SystemRandom() (which uses /dev/urandom) in kvm_utils.generate_random_string(). Currently, when running multiple jobs in parallel, the generated strings occasionally collide, and this is very bad. Also, don't seed the random number generator in kvm.py. This is not necessary and is prob

[ kvm-Bugs-2826486 ] Clock speed in FreeBSD

2009-09-11 Thread SourceForge.net
Bugs item #2826486 was opened at 2009-07-24 11:16. Message generated for change (Comment added) made by rmdir. You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2826486&group_id=180599 Please note that this message will contain a full copy of the comment thr