For the benefit of the last commenter and anyone else who comes across
this ticket:

As determined on the mailing list in June, the bug appears to be in
KVM's APICv (APIC virtualization) support on processors that provide
the feature. I haven't heard anything about a fix, but the best
workaround is to disable APICv when loading the KVM kernel module,
e.g.:

# modprobe kvm_intel enable_apicv=N

You can verify that the parameter took effect by checking the contents
of /sys/module/kvm_intel/parameters/enable_apicv; it should read N.
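
If kvm_intel is already loaded, you will need to remove and reload the
module for the option to take effect, which is only possible while no
guests are running. To keep the setting across host reboots, most
distributions also read module options from /etc/modprobe.d. A minimal
sketch (the file name kvm-apicv.conf is arbitrary; use whatever fits
your setup):

# modprobe -r kvm_intel
# modprobe kvm_intel enable_apicv=N
# echo "options kvm_intel enable_apicv=N" > /etc/modprobe.d/kvm-apicv.conf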

https://bugs.launchpad.net/bugs/1329956

Title:
  multi-core FreeBSD guest hangs after warm reboot

Status in QEMU:
  Incomplete

Bug description:
  On some Linux KVM hosts in our environment, FreeBSD guests fail to
  reboot properly if they have more than one CPU (socket, core, and/or
  thread). They boot fine the first time, but after a "reboot" command
  is issued from within the guest OS, the guest starts to boot again and
  then hangs during SMP initialization. Fully shutting down and
  restarting the guest works in all cases.

  The only meaningful difference between hosts with the problem and
  those without is the CPU. Hosts with Xeon E5-26xx v2 processors have
  the problem, including at least the "Intel(R) Xeon(R) CPU E5-2667 v2"
  and the "Intel(R) Xeon(R) CPU E5-2650 v2". Hosts with any other CPU,
  including the "Intel(R) Xeon(R) CPU E5-2650 0", "Intel(R) Xeon(R) CPU
  E5-2620 0", or "AMD Opteron(TM) Processor 6274", do not have the
  problem. Note the "v2" in the names of the problematic CPUs.
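
  For anyone checking whether a particular host is affected, the exact
  CPU model string can be read from /proc/cpuinfo and compared against
  the models above, e.g.:

  $ grep -m1 "model name" /proc/cpuinfo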

  On hosts with a "v2" Xeon, I can reproduce the problem under Linux
  kernel 3.10 or 3.12 and QEMU 1.7.0 or 2.0.0.

  The problem occurs with all currently-supported versions of FreeBSD,
  including 8.4, 9.2, 10.0 and 11-CURRENT.

  On a Linux KVM host with a "v2" Xeon, this command line is adequate to
  reproduce the problem:

  /usr/bin/qemu-system-x86_64 -machine accel=kvm -name bsdtest -m 512 \
    -smp 2,sockets=1,cores=1,threads=2 \
    -drive file=./20140613_FreeBSD_9.2-RELEASE_ufs.qcow2,if=none,id=drive0,format=qcow2 \
    -device virtio-blk-pci,scsi=off,drive=drive0 -vnc 0.0.0.0:0 -net none

  I have tried many variations, including different -machine and -cpu
  models for the guest, with no visible difference.

  A native FreeBSD installation on a host with a "v2" Xeon does not have
  the problem, nor do paravirtualized FreeBSD guests under bhyve (the
  BSD legacy-free hypervisor) using the same FreeBSD disk images as on
  the Linux hosts. So it seems unlikely that the cause is on the FreeBSD
  side of things.

  I would greatly appreciate any feedback or developer attention to
  this. I am happy to provide additional details, test patches, etc.
