I found that I had to move to a newer kernel (2.6.23.1 is what I used) to
get SMP guests to boot on RHEL5 hosts. It appears to be an issue with the
host kernel.

david
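A quick way to sanity-check a host before blaming the guest is to compare the running kernel against the 2.6.23 threshold mentioned above. This is only a sketch: `kver_ok` is a hypothetical helper (not part of kvm), the 2.6.23 cutoff is taken from my report rather than any official changelog, and the launch command at the end is illustrative (the binary name varies by distro and kvm release).

```shell
#!/bin/sh
# Hypothetical helper: succeeds if the given kernel version string is at
# least 2.6.23, the version at which SMP guests reportedly started booting.
kver_ok() {
    # sort -V orders version strings; if "2.6.23" sorts first (or ties),
    # the argument is >= 2.6.23.
    printf '%s\n2.6.23\n' "$1" | sort -V | head -n 1 | grep -qx '2.6.23'
}

if kver_ok "$(uname -r)"; then
    echo "host kernel $(uname -r): should be new enough for SMP guests"
else
    echo "host kernel $(uname -r): older than 2.6.23, SMP guests may hang"
fi

# Illustrative SMP guest launch (exact binary name depends on your setup):
#   qemu-system-x86_64 -smp 4 -m 1024 -hda centos5.img
```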
Farkas Levente wrote:
> Avi Kivity wrote:
>> If you're having trouble on AMD systems, please try this out.
>
> this version is worse than kvm-50 :-(
> setup:
> - host:
>   - Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
>   - Intel S3000AHV
>   - 8GB RAM
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
> - guest-1:
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 i386 32bit
> - guest-2:
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
> - guest-3:
>   - Mandrake-9
>   - kernel-2.4.19.16mdk-1-1mdk 32bit
> - guest-4:
>   - Windows XP Professional 32bit
>
> SMP is not working on any CentOS guest (the guests hang during boot),
> and even the host crashes. the worst part is that the host crashes
> during boot with another stack trace which i was not able to log.
> i really would like to see some kind of stable version other than
> kvm-36. i see there is a huge amount of ongoing work on ia64, virtio,
> libkvm and the arch rearrangement, but wouldn't it be better to fix
> these basic issues first? e.g. running two SMP guests (32- and 64-bit)
> on a 64-bit SMP host, just far enough to boot to the login screen.
> this is where the guest stops and what the host dumps:
> ------------------------------------------------------------
> Ignoring de-assert INIT to vcpu 1
> SIPI to vcpu 1 vector 0x06
> SIPI to vcpu 1 vector 0x06
> eth0: topology change detected, propagating
> eth0: port 3(vnet1) entering forwarding state
> Ignoring de-assert INIT to vcpu 2
> SIPI to vcpu 2 vector 0x06
> SIPI to vcpu 2 vector 0x06
> Ignoring de-assert INIT to vcpu 3
> SIPI to vcpu 3 vector 0x06
> SIPI to vcpu 3 vector 0x06
> BUG: soft lockup detected on CPU#1!
>
> Call Trace:
>  <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
>  [<ffffffff80093493>] update_process_times+0x42/0x68
>  [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
>  [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
>  [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
>  <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
>  [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
>  [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
>  [<ffffffff882079ac>] :kvm:kvm_mmu_pte_write+0x1fc/0x330
>  [<ffffffff88203a36>] :kvm:emulator_write_emulated_onepage+0x85/0xe5
>  [<ffffffff8820c320>] :kvm:x86_emulate_insn+0x2e03/0x407f
>  [<ffffffff80015e7e>] __pte_alloc+0x122/0x142
>  [<ffffffff88225477>] :kvm_intel:vmcs_readl+0x17/0x1c
>  [<ffffffff88203e13>] :kvm:emulate_instruction+0x152/0x290
>  [<ffffffff8820716b>] :kvm:kvm_mmu_page_fault+0x5e/0xb4
>  [<ffffffff882056dc>] :kvm:kvm_arch_vcpu_ioctl_run+0x28a/0x3a6
>  [<ffffffff88202539>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
>  [<ffffffff8008515c>] __wake_up_common+0x3e/0x68
>  [<ffffffff800626d0>] _spin_unlock_irqrestore+0x8/0x9
>  [<ffffffff80117410>] avc_has_perm+0x43/0x55
>  [<ffffffff80117f47>] inode_has_perm+0x56/0x63
>  [<ffffffff8820245d>] :kvm:kvm_vm_ioctl+0x277/0x290
>  [<ffffffff88226dcf>] :kvm_intel:vmx_vcpu_put+0x0/0xa3
>  [<ffffffff80117fe8>] file_has_perm+0x94/0xa3
>  [<ffffffff8003fca8>] do_ioctl+0x21/0x6b
>  [<ffffffff8002faae>] vfs_ioctl+0x248/0x261
>  [<ffffffff8004a2b4>] sys_ioctl+0x59/0x78
>  [<ffffffff8005b349>] tracesys+0xd1/0xdc
> ------------------------------------------------------------
_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel