> @@ -1219,53 +1229,75 @@ static void nonpaging_prefetch_page(struct kvm_vcpu *vcpu,
>
> static void mmu_free_roots(struct kvm_vcpu *vcpu)
> {
> - int i;
> + int i, j;
> struct kvm_mmu_page *sp;
>
> - if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
> - return;
> * We only need to hook operations that are MMU writes. We hook these so that
> * we can use lazy MMU mode to batch these operations. We could probably
> * improve the performance of the host code if we used some of the information
> @@ -219,6 +359,9 @@ static void paravirt_ops_setup(void)
> p
Hi Avi,
After reading the patch, I think the hypercall batching mechanism works as follows:
1 MMU-related operations are deferred and buffered in
kvm_para_state->mmu_queue[]
2 during the flush period, kvm_mmu_op() is called to flush the operations
buffered in kvm_para_state->mmu_queue[]
3 kvm_mmu_op() generates a
> >
> > Normally the swapping mechanism chooses the Least Recently Used (LRU)
> > pages of a process to be swapped out. When KVM uses the MMU notifier in
> > the Linux kernel to implement swapping for a VM, could KVM choose the LRU
> > pages of a VM to swap out? If so, could you give a brief description of
> > how this
> >
> > could you (or anybody) elaborate on that? the mmu-related threads show
> > lots of progress, but it's way (way) out of my league.
> >
> > AFAICT, it's about the infrastructure to later write drivers (virtio?)
> > to DMA-heavy hardware (IB, RDMA, etc). am i wrong? or is it
> > something more
Thanks for your detailed explanation :). That's quite helpful for me
to understand KVM internals.
>
> > If this is the case, see the below example:
> > 1 physical NIC interrupt is received on physical CPU 0 and host kernel
> > determines that this is a network packet targeted to the emulated NIC
>
>
> http://ols.108.redhat.com/2007/Reprints/kivity-Reprint.pdf
>
Hi Avi,
I have a question about KVM architecture after reading your paper.
It reads:
..
At the kernel level, the kernel causes the hardware
to enter guest mode. If the processor exits guest
mode due to an event such as an externa
On 2/29/08, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
>
> On Fri, 2008-02-29 at 16:55 +0800, Zhao Forrest wrote:
> > Sorry for reposting it.
> >
> > For example,
> > 1 rdtsc() is invoked on CPU0
> > 2 process is migrated to CPU1, and rdtsc() is invoked o
Sorry for reposting it.
For example,
1 rdtsc() is invoked on CPU0
2 process is migrated to CPU1, and rdtsc() is invoked on CPU1
3 if TSC on CPU1 is slower than TSC on CPU0, can kernel guarantee
that the second rdtsc() doesn't return a value smaller than the one
returned by the first rdtsc()?
Thanks,
Forrest
>
> I believe the patch is still necessary, since we still need to guarantee
> that a vcpu's tsc is monotonic. I think there are three issues to be
> addressed:
>
> 1. The majority of intel machines don't need the offset adjustment since
> they already have a constant rate tsc that is synchr
Avi, Eddie,
I have a kernel-newbie question related to this thread. I think the case Yang
mentioned, where the TSCs of different vcpus are not synchronized, could
also happen with physical cpus. Namely, I think an OS running on bare metal
hardware needs to handle unsynced TSCs between physical cpus. But
On 2/27/08, david ahern <[EMAIL PROTECTED]> wrote:
> If you want to go with the public bridge option usermode linux tools has
> tunctl.
> e.g., http://www.user-mode-linux.org/cvs/tools/tunctl/
>
> david
>
>
> Dor Laor wrote:
> > On Wed, 2008-02-27 at 18:0
Hi experts,
I tried to set up a VM network by following the instructions at
http://kvm.qumranet.com/kvmwiki/Networking. In particular I tried to
set up a "public bridge", so I need /usr/sbin/tunctl. However I could not
find tunctl on my RHEL5.1 system. I also searched for tunctl on the CD image
and by google, b
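In case it helps others hitting the same wiki page, the public-bridge setup it describes boils down to something like the commands below. These must be run as root on the host; tap0, br0, eth0, the user name, and the IP address are placeholders for your own setup, brctl comes from the bridge-utils package, and tunctl from the usermode-linux tools mentioned above:

```shell
# Create a persistent tap device owned by the user who will run kvm/qemu
tunctl -u youruser -t tap0

# Bridge the tap device together with the physical NIC
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 tap0

# The bridge carries the host's IP; the tap itself needs no address
ifconfig tap0 0.0.0.0 up
ifconfig br0 192.168.1.10 netmask 255.255.255.0 up
```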
On 10/12/07, James Dykman <[EMAIL PROTECTED]> wrote:
> Dor,
>
> I ran some netperf tests with your PV
> virtio drivers, along with some Xen PV cases
> and a few others for comparison. I thought you
> (and the list) might be interested in the numbers.
>
> I am going to start looking for bottlenecks,