David Ahern wrote:
> I am trying, unsuccessfully so far, to get a VM running with 4 CPUs. It is
> failing with a soft lockup:
>
> BUG: soft lockup detected on CPU#3!
> [<c044a05f>] softlockup_tick+0x98/0xa6
> [<c042ccd4>] update_process_times+0x39/0x5c
> [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
> [<c04049bf>] apic_timer_interrupt+0x1f/0x24
> [<f8a3c800>] kvm_flush_remote_tlbs+0xce/0xdb [kvm]
> [<f8a41a72>] kvm_mmu_pte_write+0x1f2/0x368 [kvm]
> [<f8a3d335>] emulator_write_emulated_onepage+0x73/0xe6 [kvm]
> [<f8a4542c>] x86_emulate_insn+0x20d8/0x3348 [kvm]
> [<f8a43106>] x86_decode_insn+0x624/0x872 [kvm]
> [<f8a3d764>] emulate_instruction+0x12b/0x258 [kvm]
> [<f88af2e4>] handle_exception+0x163/0x23f [kvm_intel]
> [<f88af09b>] kvm_handle_exit+0x70/0x8a [kvm_intel]
> [<f8a3deae>] kvm_vcpu_ioctl_run+0x234/0x339 [kvm]
> [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
> [<f8a3e33c>] kvm_vcpu_ioctl+0xbd/0xa8f [kvm]
> [<c0408f60>] save_i387+0x23f/0x273
> [<c04db730>] __next_cpu+0x12/0x21
> [<c041c97f>] find_busiest_group+0x177/0x462
> [<c04031cd>] setup_sigcontext+0x10d/0x190
> [<c0453bed>] get_page_from_freelist+0x96/0x310
> [<c0453dfd>] get_page_from_freelist+0x2a6/0x310
> [<c0415a5c>] flush_tlb_others+0x83/0xb3
> [<c0415d63>] flush_tlb_page+0x74/0x77
> [<c0454cf1>] set_page_dirty_balance+0x8/0x35
> [<c0459c1b>] do_wp_page+0x3a5/0x3bd
> [<c042e97e>] dequeue_signal+0x2d/0x9c
> [<c045af6b>] __handle_mm_fault+0x81b/0x87b
> [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
> [<c0479cac>] do_ioctl+0x1c/0x5d
> [<c0479f37>] vfs_ioctl+0x24a/0x25c
> [<c0479f91>] sys_ioctl+0x48/0x5f
> [<c0403eff>] syscall_call+0x7/0xb
>
>
> I am working with kvm-48, but have also tried the 20071020 snapshot. The
> code it is stuck in is kvm_flush_remote_tlbs():
>
>         while (atomic_read(&completed) != needed) {
>                 cpu_relax();
>                 barrier();
>         }
>
> which I take to mean one of the CPUs is not ack'ing the TLB flush request.
>
>
I don't think it's a CPU not responding. I've stared at that code for a
while (we've had this before), and the actual IPI/ack handshake is fine.
What's probably happening is that corruption of the MMU data structures
is causing kvm_flush_remote_tlbs() to be called repeatedly. Since it's a
very slow function, the soft lockup detector blames it for any lockup it
sees, even though it is innocent.
[we had exactly this issue before, and it was indeed fixed once an rmap
corruption was corrected]
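
For reference, the whole path looks roughly like this (a simplified
sketch from memory of the kvm-48 source; names and details may differ
slightly): each vcpu gets a flush-request bit set, an IPI goes out to
the CPUs currently running a vcpu, and the IPI handler just bumps the
completion counter that the loop above spins on:

        /* IPI handler: the "ack" the spin loop is waiting for */
        static void ack_flush(void *_completed)
        {
                atomic_t *completed = _completed;

                atomic_inc(completed);
        }

        void kvm_flush_remote_tlbs(struct kvm *kvm)
        {
                int i, cpu, needed = 0;
                cpumask_t cpus = CPU_MASK_NONE;
                struct kvm_vcpu *vcpu;
                atomic_t completed = ATOMIC_INIT(0);

                /* mark each vcpu as needing a TLB flush before it
                 * reenters the guest, and collect the CPUs that are
                 * running a vcpu right now */
                for (i = 0; i < KVM_MAX_VCPUS; ++i) {
                        vcpu = kvm->vcpus[i];
                        if (!vcpu)
                                continue;
                        if (test_and_set_bit(KVM_TLB_FLUSH,
                                             &vcpu->requests))
                                continue;
                        cpu = vcpu->cpu;
                        if (cpu != -1 && cpu != raw_smp_processor_id()) {
                                cpu_set(cpu, cpus);
                                ++needed;
                        }
                }

                /* IPI those CPUs; each ack_flush() bumps 'completed' */
                for_each_cpu_mask(cpu, cpus)
                        smp_call_function_single(cpu, ack_flush,
                                                 &completed, 1, 0);

                /* the loop the soft lockup detector fingered */
                while (atomic_read(&completed) != needed) {
                        cpu_relax();
                        barrier();
                }
        }

So a single call is already expensive (cross-CPU IPIs plus a busy wait);
if MMU corruption makes us call it over and over, the lockup detector
will naturally sample the PC inside that spin loop.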
> Is this a known bug, and are there any options to correct it? It works fine
> with 2 vcpus, but for a comparison with Xen I'd like to get the VM working
> with 4.
>
>
>
- please send (privately, it's big) an 'objdump -Sr' of mmu.o
- what guest are you running? if it's publicly available, I can try to
replicate it
- at what stage does the failure occur? if it's early on, we can try
running with AUDIT or DEBUG enabled (see the note below)
- otherwise, I'll send debugging patches to try to see what's going on
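
To enable those, if I remember the kvm-48 tree right, it's just a matter
of flipping the #undefs near the top of drivers/kvm/mmu.c into #defines
and rebuilding the module (the names below are from memory and may
differ):

        /* drivers/kvm/mmu.c -- change the existing #undefs to #defines */
        #define MMU_DEBUG       /* enables the pgprintk()/rmap_printk() traces */
        #define AUDIT           /* cross-checks rmaps and shadow page tables */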
--
Any sufficiently difficult bug is indistinguishable from a feature.