Hi Marcelo:

This patchset causes my RHEL3 guest to hang during boot at one of the
early sym53c8xx messages:

sym53c8xx: at PCI bus 0, device 5, function 0

Using ide instead of scsi, the guest proceeds farther but inevitably
hangs as well. I've tried dropping the amount of RAM to 1024 MB and
varying the number of vcpus (including 1 vcpu).

When it hangs, kvm on the host is spinning on one of the cpus, and
kvm/qemu appears to be one thread short. For the kvm process I expect
to see 2 + Nvcpus threads (ps -C kvm -L); with this patchset I see
2 + Nvcpus - 1. (E.g., I usually run with 4 vcpus, so there should be
6 threads, but I see only 5.)

I'm using kvm-git tip from a couple of days ago plus this patch set;
kvm userspace comes from kvm-75. Resetting to plain kvm-git (without
the patch set), the guest starts up just fine.

david


Marcelo Tosatti wrote:
> Keep shadow pages temporarily out of sync, allowing more efficient guest
> PTE updates in comparison to trap-emulate + unprotect heuristics. Stolen
> from Xen :)
> 
> This version only allows leaf pagetables to go out of sync, for
> simplicity, but can be enhanced.
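> 
> To illustrate the mechanism, here is a stand-alone toy model in C
> (names and structure are invented for illustration; the actual mmu
> code in the patches differs in detail):
> 
>   #include <stdbool.h>
>   #include <stdio.h>
> 
>   #define PTES_PER_TABLE 4   /* tiny, for illustration */
> 
>   struct guest_pt  { unsigned long pte[PTES_PER_TABLE]; };
>   struct shadow_pt {
>       unsigned long spte[PTES_PER_TABLE];
>       bool unsync;   /* allowed to be stale w.r.t. the guest table? */
>   };
> 
>   /* Write fault on a shadowed leaf pagetable: instead of emulating
>    * every guest PTE write, mark the shadow out of sync and let the
>    * guest write its own PTEs directly, with no further exits. */
>   static void write_fault_on_pagetable(struct shadow_pt *sp)
>   {
>       sp->unsync = true;
>   }
> 
>   /* Guest TLB flush (CR3 write, invlpg): the guest expects its PTE
>    * updates to take effect here, so resync the shadow in one pass. */
>   static void guest_tlb_flush(struct shadow_pt *sp,
>                               const struct guest_pt *gpt)
>   {
>       if (!sp->unsync)
>           return;
>       for (int i = 0; i < PTES_PER_TABLE; i++)
>           sp->spte[i] = gpt->pte[i];  /* real code: gfn->pfn etc. */
>       sp->unsync = false;
>   }
> 
>   int main(void)
>   {
>       struct guest_pt gpt = { { 1, 2, 3, 4 } };
>       struct shadow_pt sp  = { { { 1, 2, 3, 4 } }, false };
> 
>       write_fault_on_pagetable(&sp);  /* go out of sync once */
>       gpt.pte[0] = 99;                /* guest PTE writes: no exits */
>       gpt.pte[1] = 98;
>       guest_tlb_flush(&sp, &gpt);     /* one batched resync */
> 
>       printf("spte[0]=%lu spte[1]=%lu unsync=%d\n",
>              sp.spte[0], sp.spte[1], (int)sp.unsync);
>       return 0;
>   }
> 
> In other words, a burst of guest PTE writes costs one write fault
> plus one batched resync at the next TLB flush, instead of one
> trap-emulate exit per PTE.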
> 
> VMX "bypass_guest_pf" feature on prefetch_page breaks it (since new
> PTE writes need no TLB flush, I assume). Not sure if its worthwhile to
> convert notrap_nonpresent -> trap_nonpresent on unshadow or just go 
> for unconditional nonpaging_prefetch_page.
> 
> * Kernel builds on a 4-way 64-bit guest improve 10% (+3.7% from
> get_user_pages_fast).
> 
> * lmbench's "lat_proc fork" microbenchmark latency is 40% lower (a
> worst-case scenario for shadow paging).
> 
> * The RHEL3 highpte kscand hangs go from 5+ seconds to < 1 second.
> 
> * Windows 2003 Server, 32-bit PAE, DDK build (build -cPzM 3):
> 
> Windows 2003 Checked 64 Bit Build Environment, 256M RAM
> 1-vcpu:
> vanilla + gup_fast          oos
> 0:04:37.375                 0:03:28.047     (-25%)
> 
> 2-vcpus:
> vanilla + gup_fast          oos
> 0:02:32.000                 0:01:56.031     (-23%)
> 
> Windows 2003 Checked Build Environment, 1GB RAM
> 2-vcpus:
> vanilla + gup_fast          oos
> 0:02:26.078                 0:01:50.110     (-24%)
> 
> 4-vcpus:
> vanilla + gup_fast          oos
> 0:01:59.266                 0:01:29.625     (-25%)
> 
> And I think other optimizations are possible now; for example, the
> guest can be responsible for remote TLB flushing on
> kvm_mmu_pte_write().
> 
> Please review.
> 
> 
