[Bug 106621] New: Failure to install Hyper-V role on nested KVM guest

2015-10-26 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=106621

Bug ID: 106621
   Summary: Failure to install Hyper-V role on nested KVM guest
   Product: Virtualization
   Version: unspecified
Kernel Version: 4.3.0-rc7
  Hardware: x86-64
OS: Linux
  Tree: Mainline
Status: NEW
  Severity: normal
  Priority: P1
 Component: kvm
  Assignee: virtualization_...@kernel-bugs.osdl.org
  Reporter: rainmake...@gmail.com
Regression: No

When installing the Hyper-V role on a Windows guest (tried 2008 R2, 2012 R2 and
2016 TP3), the Windows installer refuses to install because of the error 

"Hyper-V cannot be installed because virtualization support is not enabled in
the BIOS."

This is because MSR 0x3a (IA32_FEATURE_CONTROL) is initialized to "0".

When VMX is enabled on the guest CPU, reads of MSR 0x3a should return "5" (the
lock bit 0 plus the enable-VMX-outside-SMX bit 2).

The following patch (admittedly a bit of a blunt instrument) returns "5" whenever
VMX is enabled on the guest CPU, thereby reporting to the guest that virtualization
is enabled in the BIOS.

--- a/arch/x86/kvm/vmx.c	2015-10-25 02:39:47.0 +0100
+++ b/arch/x86/kvm/vmx.c	2015-10-26 13:35:51.894700786 +0100
@@ -2661,7 +2661,12 @@
 	case MSR_IA32_FEATURE_CONTROL:
 		if (!nested_vmx_allowed(vcpu))
 			return 1;
-		msr_info->data = to_vmx(vcpu)->nested.msr_ia32_feature_control;
+		if (nested_vmx_allowed(vcpu)) {
+			/* set the lock bit (0) and enable-VMX-outside-SMX bit (2) */
+			msr_info->data = 5;
+		} else {
+			msr_info->data = to_vmx(vcpu)->nested.msr_ia32_feature_control;
+		}
 		break;
 	case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
 		if (!nested_vmx_allowed(vcpu))


This, together with "-cpu host,-hypervisor,+vmx", will allow Hyper-V to be
installed. It will, however, not allow these virtual machines to be started.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[Bug 92871] nested kvm - Warning in L0 kernel when trying to launch L2 guest in L1 guest

2015-03-18 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=92871

Radim Krčmář  changed:

   What|Removed |Added

 CC||rkrc...@redhat.com

--- Comment #1 from Radim Krčmář  ---
Fixed with "KVM: nVMX: mask unrestricted_guest if disabled on L0".
(https://lkml.org/lkml/2015/3/17/478)



[Bug 94771] [Nested kvm on kvm] 32bit win7 guest as L2 guest show blue screen

2015-03-16 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=94771

Zhou, Chao  changed:

   What|Removed |Added

Summary|[Nested kvm on kvm] 32bit   |[Nested kvm on kvm] 32bit
   |win7 guest as L2 guest sho  |win7 guest as L2 guest show
   |blue screen |blue screen



[Bug 94771] [Nested kvm on kvm] 32bit win7 guest as L2 guest sho blue screen

2015-03-16 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=94771

Bandan Das  changed:

   What|Removed |Added

 Blocks||94971



[Bug 92871] nested kvm - Warning in L0 kernel when trying to launch L2 guest in L1 guest

2015-03-16 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=92871

Bandan Das  changed:

   What|Removed |Added

 Blocks||94971



[Bug 94771] New: [Nested kvm on kvm] 32bit win7 guest as L2 guest sho blue screen

2015-03-11 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=94771

Bug ID: 94771
   Summary: [Nested kvm on kvm] 32bit win7 guest as L2 guest sho
blue screen
   Product: Virtualization
   Version: unspecified
Kernel Version: 4.0.0-rc1
  Hardware: All
OS: Linux
  Tree: Mainline
Status: NEW
  Severity: normal
  Priority: P1
 Component: kvm
  Assignee: virtualization_...@kernel-bugs.osdl.org
  Reporter: chao.z...@intel.com
Regression: No

Environment:

Host OS (ia32/ia32e/IA64):ia32e
Guest OS (ia32/ia32e/IA64):ia32
Guest OS Type (Linux/Windows):Windows
kvm.git Commit:4ff6f8e61eb7f96d3ca535c6d240f863ccd6fb7d
qemu.kvm Commit:d598911b6f5e7bf7bafb63b8e1d074729e94aca7
Host Kernel Version: 4.0.0-rc1
Hardware:Ivytown_EP, Haswell_EP


Bug detailed description:
--
When a 32-bit Win7 guest is created as the L2 guest, the guest shows a blue screen.

note:
When a 32-bit Win8 or 32-bit Win8.1 guest is created as the L2 guest, it boots up fine.


Reproduce steps:

1 create L1 guest:
 qemu-system-x86_64 -enable-kvm -m 8G -smp 4 -net nic,macaddr=00:12:31:34:51:31
-net tap,script=/etc/kvm/qemu-ifup nested-kvm.qcow -cpu host

2. create L2 guest
qemu-system-x86_64 -enable-kvm -m 2G -smp 2 -net none win7-32.qcow

Current result:

The 32-bit Win7 L2 guest fails to boot.

Expected result:

The 32-bit Win7 L2 guest boots up fine.

Basic root-causing log:
--



[Bug 92871] New: nested kvm - Warning in L0 kernel when trying to launch L2 guest in L1 guest

2015-02-06 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=92871

Bug ID: 92871
   Summary: nested kvm - Warning in L0 kernel when trying to
launch L2 guest in L1 guest
   Product: Virtualization
   Version: unspecified
Kernel Version: 3.19.0-rc7
  Hardware: All
OS: Linux
  Tree: Mainline
Status: NEW
  Severity: normal
  Priority: P1
 Component: kvm
  Assignee: virtualization_...@kernel-bugs.osdl.org
  Reporter: rik.th...@esat.kuleuven.be
Regression: No

Hi,

I've enabled nested KVM on my L0 (host) kernel and have created a guest with
the CPU model copied from the host. Inside this guest I've installed libvirt
and am trying to create a KVM guest (L2 guest). As soon as virt-manager tries
to create the domain, the L1 guest reboots and the following WARNING is logged
by the L0 kernel:

Feb 06 20:35:18 saturn kernel: [ cut here ]
Feb 06 20:35:18 saturn kernel: WARNING: CPU: 0 PID: 2352 at
arch/x86/kvm/vmx.c:9190 nested_vmx_vmexit+0x7fe/0x890 [kvm_intel]()
Feb 06 20:35:18 saturn kernel: Modules linked in: vhost_net vhost macvtap
macvlan tun xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4
iptable_nat nf_nat_ipv4 nf_nat ipt_REJECT nf_reject_ipv4 bridge stp llc bnep
bluetooth binfmt_misc cpufreq_userspace cpufreq_powersave cpufreq_conservative
cpufreq_stats nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables
xt_tcpudp nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack
iptable_filter ip_tables x_tables snd_hda_codec_hdmi cx88_blackbird cx2341x
cx22702 cx88_dvb cx88_vp3054_i2c videobuf2_dvb dvb_core wm8775 tuner_simple
tuner_types iTCO_wdt iTCO_vendor_support tda9887 tda8290 tuner
snd_hda_codec_via snd_hda_codec_generic evdev snd_hda_intel cx8800
snd_hda_controller nouveau cx8802 snd_hda_codec videobuf2_dma_sg joydev
cx88_alsa coretemp snd_hwdep
Feb 06 20:35:18 saturn kernel:  videobuf2_memops video mxm_wmi cx88xx wmi
videobuf2_core tveeprom rc_core v4l2_common videodev media kvm_intel
drm_kms_helper ttm drm snd_pcm arc4 kvm psmouse rt61pci eeprom_93cx6 rt2x00pci
rt2x00mmio snd_timer rt2x00lib snd mac80211 cfg80211 i2c_algo_bit i2c_i801
soundcore serio_raw rfkill i7core_edac edac_core 8250_fintek xhci_pci xhci_hcd
i2c_core lpc_ich mfd_core asus_atk0110 shpchp button acpi_cpufreq processor
thermal_sys loop firewire_sbp2 fuse parport_pc ppdev lp parport autofs4 ext4
crc16 mbcache jbd2 btrfs xor raid6_pq dm_mod raid1 raid0 md_mod sg sr_mod cdrom
sd_mod hid_generic usbhid hid usb_storage crc32c_intel firewire_ohci ahci
libahci firewire_core crc_itu_t libata scsi_mod ehci_pci ehci_hcd r8169 mii
usbcore usb_common
Feb 06 20:35:18 saturn kernel: CPU: 0 PID: 2352 Comm: qemu-system-x86 Not
tainted 3.19.0-rc7 #1
Feb 06 20:35:18 saturn kernel: Hardware name: System manufacturer System
Product Name/P7P55D-E, BIOS 1504 12/14/2010
Feb 06 20:35:18 saturn kernel:   a06f19fa
815357b8 
Feb 06 20:35:18 saturn kernel:  8106cde1 8801e4f53000
 0014
Feb 06 20:35:18 saturn kernel:   
a06de3ce 
Feb 06 20:35:18 saturn kernel: Call Trace:  
Feb 06 20:35:18 saturn kernel:  [] ? dump_stack+0x40/0x50 
Feb 06 20:35:18 saturn kernel:  [] ?
warn_slowpath_common+0x81/0xb0  
Feb 06 20:35:18 saturn kernel:  [] ?
nested_vmx_vmexit+0x7fe/0x890 [kvm_intel]   
Feb 06 20:35:18 saturn kernel:  [] ?
kvm_arch_vcpu_ioctl_run+0xd7b/0x1220 [kvm]
Feb 06 20:35:18 saturn kernel:  [] ?
kvm_arch_vcpu_load+0x4c/0x1f0 [kvm]
Feb 06 20:35:18 saturn kernel:  [] ?
kvm_vcpu_ioctl+0x322/0x5d0 [kvm]
Feb 06 20:35:18 saturn kernel:  [] ?
set_next_entity+0x56/0x70
Feb 06 20:35:18 saturn kernel:  [] ? __switch_to+0x440/0x5e0
Feb 06 20:35:18 saturn kernel:  [] ? do_vfs_ioctl+0x2e8/0x4f0
Feb 06 20:35:18 saturn kernel:  [] ?
__audit_syscall_entry+0xbc/0x110
Feb 06 20:35:18 saturn kernel:  [] ?
syscall_trace_enter_phase1+0xfb/0x160
Feb 06 20:35:18 saturn kernel:  [] ?
kvm_on_user_return+0x44/0x80 [kvm]
Feb 06 20:35:18 saturn kernel:  [] ? SyS_ioctl+0x81/0xa0
Feb 06 20:35:18 saturn kernel:  [] ? int_signal+0x12/0x17
Feb 06 20:35:18 saturn kernel:  [] ?
system_call_fastpath+0x16/0x1b
Feb 06 20:35:18 saturn kernel: ---[ end trace e7e11898e469021e ]---

I first hit this bug on the Debian 3.16.7-ckt4-3 kernel, but it's the first
kernel I tried so the bug might be older.

This system has the following processor type:

processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 30
model name  : Intel(R) Core(TM) i5 CPU 750  @ 2.67GHz
stepping: 5
microcode 

Re: nested KVM slower than QEMU with gnumach guest kernel

2014-12-15 Thread Paolo Bonzini


On 15/12/2014 01:09, Samuel Thibault wrote:
> Hello,
> 
> Just FTR, it seems that the overhead is due to gnumach sometimes using
> the PIC quite a lot.  It used not to be too much of a concern with just
> kvm, but kvm on kvm becomes too expensive for that.  I've fixed gnumach
> to be a lot more reasonable, and the performance issues went away.

Thanks!

Paolo


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-12-14 Thread Samuel Thibault
Hello,

Just FTR, it seems that the overhead is due to gnumach sometimes using
the PIC quite a lot.  It used not to be too much of a concern with just
kvm, but kvm on kvm becomes too expensive for that.  I've fixed gnumach
to be a lot more reasonable, and the performance issues went away.

Samuel


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-23 Thread Samuel Thibault
reason VMREAD rip 0xa02fcce5 info 0 0
vmx_cache_reg

When running the same kind of operation with non-nested KVM, I get this
kind of trace:

qemu-system-x86-3667  [000]  1399.213498: kvm_exit: 
reason IO_INSTRUCTION rip 0x801090c8 info 210040 0
qemu-system-x86-3667  [000]  1399.213498: kvm_pio:  
pio_write at 0x21 size 1 count 1 val 0xff
qemu-system-x86-3667  [000]  1399.213499: rcu_utilization:  
Start context switch
qemu-system-x86-3667  [000]  1399.213499: rcu_utilization:  
End context switch
qemu-system-x86-3667  [000]  1399.213499: kvm_entry:
vcpu 0
qemu-system-x86-3667  [000]  1399.213500: kvm_exit: 
reason IO_INSTRUCTION rip 0x801090d1 info a10040 0
qemu-system-x86-3667  [000]  1399.213500: kvm_pio:  
pio_write at 0xa1 size 1 count 1 val 0xff 
qemu-system-x86-3667  [000]  1399.213500: rcu_utilization:  
Start context switch
qemu-system-x86-3667  [000]  1399.213501: rcu_utilization:  
End context switch
qemu-system-x86-3667  [000]  1399.213501: kvm_entry:
vcpu 0
qemu-system-x86-3667  [000]  1399.213501: kvm_exit: 
reason IO_INSTRUCTION rip 0x80108f5d info 210040 0
qemu-system-x86-3667  [000]  1399.213501: kvm_pio:  
pio_write at 0x21 size 1 count 1 val 0x68 
qemu-system-x86-3667  [000]  1399.213502: rcu_utilization:  
Start context switch
qemu-system-x86-3667  [000]  1399.213502: rcu_utilization:  
End context switch
qemu-system-x86-3667  [000]  1399.213502: kvm_entry:
vcpu 0

i.e. just one kvm_exit per guest I/O instruction, not 18 like above. Are those
really expected?

Samuel
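The difference in exits per guest I/O instruction can be quantified mechanically from a trace-cmd text dump. A minimal sketch (the sample lines are hypothetical simplifications; real trace-cmd output carries more fields) counting kvm_exit events between consecutive kvm_pio events:

```c
#include <stdio.h>
#include <string.h>

/* Count kvm_exit events seen before each kvm_pio event in a
 * trace-cmd text dump; sample lines below are illustrative. */
int main(void)
{
    const char *lines[] = {
        "... kvm_exit: reason IO_INSTRUCTION ...",
        "... kvm_pio: pio_write at 0x21 ...",
        "... kvm_exit: reason EXTERNAL_INTERRUPT ...",
        "... kvm_exit: reason IO_INSTRUCTION ...",
        "... kvm_pio: pio_write at 0xa1 ...",
    };
    int exits = 0;
    for (size_t i = 0; i < sizeof lines / sizeof lines[0]; i++) {
        if (strstr(lines[i], "kvm_exit:")) {
            exits++;
        } else if (strstr(lines[i], "kvm_pio:")) {
            /* prints 1 for the first pio, 2 for the second */
            printf("%d exit(s) before this pio\n", exits);
            exits = 0;
        }
    }
    return 0;
}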


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-23 Thread Samuel Thibault
Jan Kiszka, le Mon 17 Nov 2014 10:04:37 +0100, a écrit :
> On 2014-11-17 10:03, Samuel Thibault wrote:
> > Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écrit :
> >> Do you know how gnumach timekeeping works? Does it have a timer that fires 
> >> each 1ms?
> >> Which clock device is it using?
> > 
> > It uses the PIT every 10ms, in square mode
> > (PIT_C0|PIT_SQUAREMODE|PIT_READMODE = 0x36).
> 
> Wow... how retro. That feature might be unsupported

(BTW, I tried the more common ndiv mode, 0x34, with the same result)

Samuel


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Samuel Thibault
Also, I have made gnumach show a timer counter, it does get PIT
interrupts every 10ms as expected, not more often.

Samuel


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Samuel Thibault
Gleb Natapov, le Mon 17 Nov 2014 11:21:22 +0200, a écrit :
> On Mon, Nov 17, 2014 at 10:10:25AM +0100, Samuel Thibault wrote:
> > Jan Kiszka, le Mon 17 Nov 2014 10:04:37 +0100, a écrit :
> > > On 2014-11-17 10:03, Samuel Thibault wrote:
> > > > Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écrit :
> > > >> Do you know how gnumach timekeeping works? Does it have a timer that 
> > > >> fires each 1ms?
> > > >> Which clock device is it using?
> > > > 
> > > > It uses the PIT every 10ms, in square mode
> > > > (PIT_C0|PIT_SQUAREMODE|PIT_READMODE = 0x36).
> > > 
> > > Wow... how retro. That feature might be unsupported - does user space
> > > irqchip work better?
> > 
> > I had indeed tried giving -machine kernel_irqchip=off to the L2 kvm,
> > with the same bad performance and external_interrupt in the trace.
> > 
> They will always be in the trace, but do you see them each ms or each 10ms
> with user space irqchip?

The external interrupts come every 1 *microsecond*, not millisecond, with
irqchip=off or not.

Samuel


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Gleb Natapov
On Mon, Nov 17, 2014 at 10:10:25AM +0100, Samuel Thibault wrote:
> Jan Kiszka, le Mon 17 Nov 2014 10:04:37 +0100, a écrit :
> > On 2014-11-17 10:03, Samuel Thibault wrote:
> > > Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écrit :
> > >> Do you know how gnumach timekeeping works? Does it have a timer that 
> > >> fires each 1ms?
> > >> Which clock device is it using?
> > > 
> > > It uses the PIT every 10ms, in square mode
> > > (PIT_C0|PIT_SQUAREMODE|PIT_READMODE = 0x36).
> > 
> > Wow... how retro. That feature might be unsupported - does user space
> > irqchip work better?
> 
> I had indeed tried giving -machine kernel_irqchip=off to the L2 kvm,
> with the same bad performance and external_interrupt in the trace.
> 
They will always be in the trace, but do you see them each ms or each 10ms
with user space irqchip?

--
Gleb.


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Samuel Thibault
Jan Kiszka, le Mon 17 Nov 2014 10:04:37 +0100, a écrit :
> On 2014-11-17 10:03, Samuel Thibault wrote:
> > Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écrit :
> >> Do you know how gnumach timekeeping works? Does it have a timer that fires 
> >> each 1ms?
> >> Which clock device is it using?
> > 
> > It uses the PIT every 10ms, in square mode
> > (PIT_C0|PIT_SQUAREMODE|PIT_READMODE = 0x36).
> 
> Wow... how retro. That feature might be unsupported - does user space
> irqchip work better?

I had indeed tried giving -machine kernel_irqchip=off to the L2 kvm,
with the same bad performance and external_interrupt in the trace.

Samuel


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Jan Kiszka
On 2014-11-17 10:03, Samuel Thibault wrote:
> Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écrit :
>> Do you know how gnumach timekeeping works? Does it have a timer that fires 
>> each 1ms?
>> Which clock device is it using?
> 
> It uses the PIT every 10ms, in square mode
> (PIT_C0|PIT_SQUAREMODE|PIT_READMODE = 0x36).

Wow... how retro. That feature might be unsupported - does user space
irqchip work better?

Jan






Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Samuel Thibault
Gleb Natapov, le Mon 17 Nov 2014 10:58:45 +0200, a écrit :
> Do you know how gnumach timekeeping works? Does it have a timer that fires 
> each 1ms?
> Which clock device is it using?

It uses the PIT every 10ms, in square mode
(PIT_C0|PIT_SQUAREMODE|PIT_READMODE = 0x36).

Samuel


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Gleb Natapov
On Sun, Nov 16, 2014 at 11:18:28PM +0100, Samuel Thibault wrote:
> Hello,
> 
> Jan Kiszka, le Wed 12 Nov 2014 00:42:52 +0100, a écrit :
> > On 2014-11-11 19:55, Samuel Thibault wrote:
> > > jenkins.debian.net is running inside a KVM VM, and it runs nested
> > > KVM guests for its installation attempts.  This goes fine with Linux
> > > kernels, but it is extremely slow with gnumach kernels.
> 
> > You can try to catch a trace (ftrace) on the physical host.
> > 
> > I suspect the setup forces a lot of instruction emulation, either on L0
> > or L1. And that is slower than QEMU if KVM does not optimize like QEMU does.
> 
> Here is a sample of trace-cmd output dump: the same kind of pattern
> repeats over and over, with EXTERNAL_INTERRUPT happening mostly
> every other microsecond:
> 
>  qemu-system-x86-9752  [003]  4106.187755: kvm_exit: reason 
> EXTERNAL_INTERRUPT rip 0xa02848b1 info 0 80f6
>  qemu-system-x86-9752  [003]  4106.187756: kvm_entry:vcpu 0
>  qemu-system-x86-9752  [003]  4106.187757: kvm_exit: reason 
> EXTERNAL_INTERRUPT rip 0xa02848b1 info 0 80f6
>  qemu-system-x86-9752  [003]  4106.187758: kvm_entry:vcpu 0
>  qemu-system-x86-9752  [003]  4106.187759: kvm_exit: reason 
> EXTERNAL_INTERRUPT rip 0xa02848b1 info 0 80f6
>  qemu-system-x86-9752  [003]  4106.187760: kvm_entry:vcpu 0
> 
> The various functions being interrupted are vmx_vcpu_run
> (0xa02848b1 and 0xa0284972), handle_io
> (0xa027ee62), vmx_get_cpl (0xa027a7de),
> load_vmc12_host_state (0xa027ea31), native_read_tscp
> (0x81050a84), native_write_msr_safe (0x81050aa6),
> vmx_decache_cr0_guest_bits (0xa027a384),
> vmx_handle_external_intr (0xa027a54d).
> 
> AIUI, the external interrupt is 0xf6, i.e. Linux' IRQ_WORK_VECTOR.  I
> however don't see any of them, neither in L0's /proc/interrupts, nor in
> L1's /proc/interrupts...
> 
Do you know how gnumach timekeeping works? Does it have a timer that fires each 
1ms?
Which clock device is it using?

--
Gleb.


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-17 Thread Samuel Thibault
Jan Kiszka, le Mon 17 Nov 2014 07:28:23 +0100, a écrit :
> > AIUI, the external interrupt is 0xf6, i.e. Linux' IRQ_WORK_VECTOR.  I
> > however don't see any of them, neither in L0's /proc/interrupts, nor in
> > L1's /proc/interrupts...
> 
> I suppose this is a SMP host and guest?

L0 is a hyperthreaded quad-core, but L1 is only 1 VCPU.  In the trace,
L1 happens to have been apparently always scheduled on the same L0 CPU:
trace-cmd tells me that CPUs [0-2,4-7] are empty.

Samuel


Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-16 Thread Jan Kiszka
On 2014-11-16 23:18, Samuel Thibault wrote:
> Hello,
> 
> Jan Kiszka, le Wed 12 Nov 2014 00:42:52 +0100, a écrit :
>> On 2014-11-11 19:55, Samuel Thibault wrote:
>>> jenkins.debian.net is running inside a KVM VM, and it runs nested
>>> KVM guests for its installation attempts.  This goes fine with Linux
>>> kernels, but it is extremely slow with gnumach kernels.
> 
>> You can try to catch a trace (ftrace) on the physical host.
>>
>> I suspect the setup forces a lot of instruction emulation, either on L0
>> or L1. And that is slower than QEMU if KVM does not optimize like QEMU does.
> 
> Here is a sample of trace-cmd output dump: the same kind of pattern
> repeats over and over, with EXTERNAL_INTERRUPT happening mostly
> every other microsecond:
> 
>  qemu-system-x86-9752  [003]  4106.187755: kvm_exit: reason 
> EXTERNAL_INTERRUPT rip 0xa02848b1 info 0 80f6
>  qemu-system-x86-9752  [003]  4106.187756: kvm_entry:vcpu 0
>  qemu-system-x86-9752  [003]  4106.187757: kvm_exit: reason 
> EXTERNAL_INTERRUPT rip 0xa02848b1 info 0 80f6
>  qemu-system-x86-9752  [003]  4106.187758: kvm_entry:vcpu 0
>  qemu-system-x86-9752  [003]  4106.187759: kvm_exit: reason 
> EXTERNAL_INTERRUPT rip 0xa02848b1 info 0 80f6
>  qemu-system-x86-9752  [003]  4106.187760: kvm_entry:vcpu 0

You may want to turn on more trace events, if not all, to possibly see
what Linux does then. The next level after that is function tracing (may
require a kernel rebuild or a tracing kernel of the distro).

> 
> The various functions being interrupted are vmx_vcpu_run
> (0xa02848b1 and 0xa0284972), handle_io
> (0xa027ee62), vmx_get_cpl (0xa027a7de),
> load_vmc12_host_state (0xa027ea31), native_read_tscp
> (0x81050a84), native_write_msr_safe (0x81050aa6),
> vmx_decache_cr0_guest_bits (0xa027a384),
> vmx_handle_external_intr (0xa027a54d).
> 
> AIUI, the external interrupt is 0xf6, i.e. Linux' IRQ_WORK_VECTOR.  I
> however don't see any of them, neither in L0's /proc/interrupts, nor in
> L1's /proc/interrupts...

I suppose this is an SMP host and guest? Does reducing CPUs to 1 change
the picture? If not, it may still make cause and effect easier to understand.

Jan






Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-16 Thread Samuel Thibault
Hello,

Jan Kiszka, le Wed 12 Nov 2014 00:42:52 +0100, a écrit :
> On 2014-11-11 19:55, Samuel Thibault wrote:
> > jenkins.debian.net is running inside a KVM VM, and it runs nested
> > KVM guests for its installation attempts.  This goes fine with Linux
> > kernels, but it is extremely slow with gnumach kernels.

> You can try to catch a trace (ftrace) on the physical host.
> 
> I suspect the setup forces a lot of instruction emulation, either on L0
> or L1. And that is slower than QEMU if KVM does not optimize like QEMU does.

Here is a sample of trace-cmd output dump: the same kind of pattern
repeats over and over, with EXTERNAL_INTERRUPT happening mostly
every other microsecond:

 qemu-system-x86-9752  [003]  4106.187755: kvm_exit: reason 
EXTERNAL_INTERRUPT rip 0xa02848b1 info 0 80f6
 qemu-system-x86-9752  [003]  4106.187756: kvm_entry:vcpu 0
 qemu-system-x86-9752  [003]  4106.187757: kvm_exit: reason 
EXTERNAL_INTERRUPT rip 0xa02848b1 info 0 80f6
 qemu-system-x86-9752  [003]  4106.187758: kvm_entry:vcpu 0
 qemu-system-x86-9752  [003]  4106.187759: kvm_exit: reason 
EXTERNAL_INTERRUPT rip 0xa02848b1 info 0 80f6
 qemu-system-x86-9752  [003]  4106.187760: kvm_entry:vcpu 0

The various functions being interrupted are vmx_vcpu_run
(0xa02848b1 and 0xa0284972), handle_io
(0xa027ee62), vmx_get_cpl (0xa027a7de),
load_vmc12_host_state (0xa027ea31), native_read_tscp
(0x81050a84), native_write_msr_safe (0x81050aa6),
vmx_decache_cr0_guest_bits (0xa027a384),
vmx_handle_external_intr (0xa027a54d).

AIUI, the external interrupt is 0xf6, i.e. Linux' IRQ_WORK_VECTOR.  I
however don't see any of them, neither in L0's /proc/interrupts, nor in
L1's /proc/interrupts...

Samuel




Re: nested KVM slower than QEMU with gnumach guest kernel

2014-11-11 Thread Jan Kiszka
On 2014-11-11 19:55, Samuel Thibault wrote:
> Hello,
> 
> jenkins.debian.net is running inside a KVM VM, and it runs nested
> KVM guests for its installation attempts.  This goes fine with Linux
> kernels, but it is extremely slow with gnumach kernels.  I have
> reproduced the issue with my laptop with a linux 3.17 host kernel, a
> 3.16 L1-guest kernel, and an i7-2720QM CPU, with similar results; it's
> actually even slower than letting qemu emulate the CPU... For these
> tests I'm using the following image:
> 
> http://people.debian.org/~sthibault/tmp/netinst.iso
> 
> The reference test here boils down to running qemu -cdrom netinst.iso -m
> 512, choosing the "Automated install" choice, and waiting for "Loading
> additional components" step to complete. (yes, the boot menu gets
> mangled ATM, there's apparently currently a bug between qemu and grub)
> 
> My host is A, my level1-KVM-guest is B.
> 
> KVM:
> A$ qemu -enable-kvm -cdrom netinst.iso -m 512M
> takes ~1 minute.
> 
> QEMU:
> A$ qemu -cdrom netinst.iso -m 512M
> takes ~7 minutes.
> 
> KVM-in-KVM:
> B$ qemu -enable-kvm -cdrom netinst.iso -m 512M
> takes ~10 minutes, when it doesn't get completely stuck, which is quite
> often, actually...
> 
> QEMU-in-KVM:
> B$ qemu -cdrom netinst.iso -m 512M
> takes ~7 minutes.
> 
> I don't see such horrible slowdown with a linux image.  Is there
> something particular that could explain such a difference?  What tools
> or counters could I use to investigate which area of KVM is getting
> slow?

You can try to catch a trace (ftrace) on the physical host.

I suspect the setup forces a lot of instruction emulation, either on L0
or L1. And that is slower than QEMU if KVM does not optimize like QEMU does.

Jan





nested KVM slower than QEMU with gnumach guest kernel

2014-11-11 Thread Samuel Thibault
Hello,

jenkins.debian.net is running inside a KVM VM, and it runs nested
KVM guests for its installation attempts.  This goes fine with Linux
kernels, but it is extremely slow with gnumach kernels.  I have
reproduced the issue with my laptop with a linux 3.17 host kernel, a
3.16 L1-guest kernel, and an i7-2720QM CPU, with similar results; it's
actually even slower than letting qemu emulate the CPU... For these
tests I'm using the following image:

http://people.debian.org/~sthibault/tmp/netinst.iso

The reference test here boils down to running qemu -cdrom netinst.iso -m
512, choosing the "Automated install" choice, and waiting for "Loading
additional components" step to complete. (yes, the boot menu gets
mangled ATM, there's apparently currently a bug between qemu and grub)

My host is A, my level1-KVM-guest is B.

KVM:
A$ qemu -enable-kvm -cdrom netinst.iso -m 512M
takes ~1 minute.

QEMU:
A$ qemu -cdrom netinst.iso -m 512M
takes ~7 minutes.

KVM-in-KVM:
B$ qemu -enable-kvm -cdrom netinst.iso -m 512M
takes ~10 minutes, when it doesn't get completely stuck, which is quite
often, actually...

QEMU-in-KVM:
B$ qemu -cdrom netinst.iso -m 512M
takes ~7 minutes.

I don't see such horrible slowdown with a linux image.  Is there
something particular that could explain such a difference?  What tools
or counters could I use to investigate which area of KVM is getting
slow?

Samuel


[Bug 75981] [Nested kvm on kvm]L2 guest reboot continuously when create a rhel6u5(64bit) as L2 guest.

2014-05-20 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=75981

Zhou, Chao  changed:

   What|Removed |Added

 Status|RESOLVED|VERIFIED



[Bug 75981] [Nested kvm on kvm]L2 guest reboot continuously when create a rhel6u5(64bit) as L2 guest.

2014-05-20 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=75981

Zhou, Chao  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |CODE_FIX



[Bug 75981] [Nested kvm on kvm]L2 guest reboot continuously when create a rhel6u5(64bit) as L2 guest.

2014-05-20 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=75981

--- Comment #5 from Zhou, Chao  ---
this commit fixed the bug:
commit d9f89b88f5102ce235b75a5907838e3c7ed84b97
Author: Jan Kiszka 
Date:   Sat May 10 09:24:34 2014 +0200

KVM: x86: Fix CR3 reserved bits check in long mode

Regression of 346874c9: PAE is set in long mode, but that does not mean
we have valid PDPTRs.

Signed-off-by: Jan Kiszka 
Signed-off-by: Paolo Bonzini 



[Bug 75981] [Nested kvm on kvm]L2 guest reboot continuously when create a rhel6u5(64bit) as L2 guest.

2014-05-20 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=75981

--- Comment #4 from Zhou, Chao  ---
kvm.git + qemu.git: d9f89b88_e5bfd640
host kernel: 3.15.0_rc1
Tested on Romley_EP: created a 64-bit rhel6u5 guest as the L2 guest; the L2
guest boots up fine.



[Bug 75981] [Nested kvm on kvm]L2 guest reboot continuously when create a rhel6u5(64bit) as L2 guest.

2014-05-12 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=75981

Paolo Bonzini  changed:

   What|Removed |Added

 CC||bonz...@gnu.org

--- Comment #3 from Paolo Bonzini  ---
I applied that patch to kvm/next and kvm/queue, however I haven't yet tested
nested virt so I'm not yet closing.



[Bug 75981] [Nested kvm on kvm]L2 guest reboot continuously when create a rhel6u5(64bit) as L2 guest.

2014-05-12 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=75981

Jan Kiszka  changed:

   What|Removed |Added

 CC||jan.kis...@web.de

--- Comment #2 from Jan Kiszka  ---
Might be fixed by "KVM: x86: Fix CR3 reserved bits check in long mode"
(http://thread.gmane.org/gmane.linux.kernel/1700662).



[Bug 75981] [Nested kvm on kvm]L2 guest reboot continuously when create a rhel6u5(64bit) as L2 guest.

2014-05-11 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=75981

--- Comment #1 from Zhou, Chao  ---
The first bad commit is:
commit 346874c9507a2582d0c00021f848de6e115f276c
Author: Nadav Amit 
Date:   Fri Apr 18 03:35:09 2014 +0300

KVM: x86: Fix CR3 reserved bits

According to Intel specifications, PAE and non-PAE does not have any
reserved
bits.  In long-mode, regardless to PCIDE, only the high bits (above the
physical address) are reserved.

Signed-off-by: Nadav Amit 
Signed-off-by: Marcelo Tosatti 
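
The reserved-bits rule that commit message describes can be sketched roughly as follows. This is an illustrative re-statement, not the actual KVM code; `maxphyaddr` stands in for the physical-address width the CPU reports via CPUID:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the rule from commit 346874c9 (NOT the actual
 * KVM code): outside long mode, PAE and non-PAE CR3 values have no
 * reserved bits to check here; in long mode, regardless of PCIDE, only
 * the bits at and above the physical-address width (MAXPHYADDR) are
 * reserved. */
static bool cr3_has_reserved_bits(uint64_t cr3, unsigned int maxphyaddr,
                                  bool long_mode)
{
    if (!long_mode)
        return false;   /* nothing reserved to reject */

    uint64_t reserved_mask = ~((1ULL << maxphyaddr) - 1);
    return (cr3 & reserved_mask) != 0;
}
```

The regression fixed by d9f89b88 came from conflating "PAE is set" with "long mode is off" in a check like this, which is why the long-mode flag has to be tested explicitly.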



[Bug 75981] New: [Nested kvm on kvm]L2 guest reboot continuously when create a rhel6u5(64bit) as L2 guest.

2014-05-11 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=75981

Bug ID: 75981
   Summary: [Nested kvm on kvm]L2 guest reboot continuously when
create a rhel6u5(64bit) as L2 guest.
   Product: Virtualization
   Version: unspecified
Kernel Version: 3.15.0-rc1
  Hardware: x86-64
OS: Linux
  Tree: Mainline
Status: NEW
  Severity: normal
  Priority: P1
 Component: kvm
  Assignee: virtualization_...@kernel-bugs.osdl.org
  Reporter: chao.z...@intel.com
Regression: No

Environment:

Host OS (ia32/ia32e/IA64):ia32e
Guest OS (ia32/ia32e/IA64):ia32e
Guest OS Type (Linux/Windows):Linux
kvm.git Commit:198c74f43f0f5473f99967aead30ddc622804bc1
qemu.git Commit:411f491e0af173cf8f39347574941bd26fbae381
Host Kernel Version:3.15.0-rc1
Hardware:Romley_EP


Bug detailed description:
--
When creating the L1 guest with "-cpu host" and then creating a 64-bit
rhel6u5 guest as the L2 guest, the L2 guest reboots continuously.

note:
1. when creating a 64-bit RHEL6u4 as the L2 guest, the guest also reboots
continuously.
2. when creating a 32-bit rhel6u5 guest as the L2 guest, the L2 guest works fine.
3. this should be a kernel bug:
kvm  +  qemu = result
198c74f4 + 411f491e  = bad
0f689a33 + 411f491e  = good


Reproduce steps:

1. create L1 guest:
qemu-system-x86_64 -enable-kvm -m 8G -smp 4 -net nic,macaddr=00:12:41:51:14:16
-net tap,script=/etc/kvm/qemu-ifup ia32e_nested-kvm.img -cpu host,level=9
2. create L2 guest
qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -net none ia32e_rhel6u4.img


Current result:

64bit rhel6u5 as L2 guest reboot continuously

Expected result:

64bit rhel6u5 as L2 guest works fine

Basic root-causing log:
--



Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-14 Thread Richard W.M. Jones
On Fri, Mar 14, 2014 at 04:39:48PM +0400, Vasiliy Tolstov wrote:
> 2014-03-14 16:16 GMT+04:00 Richard W.M. Jones :
> > You can set the VM .  Of course it'll run quite
> > slowly.
> >
> >> Is it possible to debug this issue? How can I help?
> >
> > Complete logs from the guest.
> > Any messages from qemu or the host.
> > & put all of that into a full bug report.
> 
> 
> Where can I find the submission form for a bug report? (I'm using Exherbo
> Linux, but unlike Debian or SLES it carries no distro patches; it uses
> only upstream.)

I suspect this is going to be a kernel bug, in which case:

https://bugzilla.kernel.org/

For libvirt bugs it would be:

https://bugzilla.redhat.com/enter_bug.cgi?component=libvirt&product=Virtualization+Tools

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top


Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-14 Thread Vasiliy Tolstov
2014-03-14 16:16 GMT+04:00 Richard W.M. Jones :
> You can set the VM .  Of course it'll run quite
> slowly.
>
>> Is it possible to debug this issue? How can I help?
>
> Complete logs from the guest.
> Any messages from qemu or the host.
> & put all of that into a full bug report.


Where can I find the submission form for a bug report? (I'm using Exherbo
Linux, but unlike Debian or SLES it carries no distro patches; it uses
only upstream.)

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru


Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-14 Thread Richard W.M. Jones
On Fri, Mar 14, 2014 at 04:11:13PM +0400, Vasiliy Tolstov wrote:
> 2014-03-14 15:58 GMT+04:00 Richard W.M. Jones :
> > It could be there is another, less frequent, bug in nested KVM.
> > I'm assuming this is on Intel hardware?
> >
> > From the libguestfs point of view what you can do is to force TCG:
> >
> > export LIBGUESTFS_BACKEND_SETTINGS=force_tcg
> >
> > Unfortunately this only has an effect in libguestfs >= 1.25.24.  We're
> > going to have the new version in Fedora 20 real soon -- probably
> > before the end of this month.  Or you can compile the Rawhide version
> > on F20.
> 
> 
> Thanks for the answer. I'm not using libguestfs; I'm trying to run a VM
> inside a VM via libvirt.

You can set the VM .  Of course it'll run quite
slowly.

> Is it possible to debug this issue? How can I help?

Complete logs from the guest.
Any messages from qemu or the host.
& put all of that into a full bug report.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org


Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-14 Thread Vasiliy Tolstov
2014-03-14 15:58 GMT+04:00 Richard W.M. Jones :
> It could be there is another, less frequent, bug in nested KVM.
> I'm assuming this is on Intel hardware?
>
> From the libguestfs point of view what you can do is to force TCG:
>
> export LIBGUESTFS_BACKEND_SETTINGS=force_tcg
>
> Unfortunately this only has an effect in libguestfs >= 1.25.24.  We're
> going to have the new version in Fedora 20 real soon -- probably
> before the end of this month.  Or you can compile the Rawhide version
> on F20.


Thanks for the answer. I'm not using libguestfs; I'm trying to run a VM
inside a VM via libvirt.
Is it possible to debug this issue? How can I help?
P.S. Yes, I'm using Intel hardware.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru


Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-14 Thread Richard W.M. Jones
On Fri, Mar 14, 2014 at 03:52:03PM +0400, Vasiliy Tolstov wrote:
> I use the 3.13.6 kernel, which already has this patch, but sometimes I
> get a kernel panic. What can I do?
> P.S. I'm using nested virt; the fault is from L2

It could be there is another, less frequent, bug in nested KVM.
I'm assuming this is on Intel hardware?

From the libguestfs point of view what you can do is to force TCG:

export LIBGUESTFS_BACKEND_SETTINGS=force_tcg

Unfortunately this only has an effect in libguestfs >= 1.25.24.  We're
going to have the new version in Fedora 20 real soon -- probably
before the end of this month.  Or you can compile the Rawhide version
on F20.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/


Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-14 Thread Vasiliy Tolstov
2014-03-07 1:59 GMT+04:00 Richard W.M. Jones :
> On Tue, Mar 04, 2014 at 09:13:40AM +0100, Paolo Bonzini wrote:
>> Il 04/03/2014 03:40, Ian Pilcher ha scritto:
>> >Is this a known problem?  I just tried using nested vmx for the first
>> >time since upgrading my system from F19 (3.12.?? at the time) to F20,
>> >and I cannot start any L2 guests.  The L2 guest appears to hang almost
>> >immediately after starting, consuming 100% of one of the L1 guest's
>> >VCPUs.
>> >
>> >If I reboot with kernel-3.12.10-300.fc20.x86_64, the problem does not
>> >occur.
>> >
>> >Any known workaround?  (Other than using 3.12.10?)
>>
>> There is a fix on the way to the 3.13 kernel.
>>
>> You can open a Fedora bug and ask them to include
>> http://article.gmane.org/gmane.linux.kernel.stable/82043/raw in the
>> kernel.
>
> Thanks for fixing this.  It affects a lot of libguestfs users too.
>
> I opened this bug:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1073663
>
> Rich.


I use the 3.13.6 kernel, which already has this patch, but sometimes I
get a kernel panic. What can I do?
P.S. I'm using nested virt; the fault is from L2

[   10.942007] PANIC: double fault, error_code: 0x0
[   10.942007] CPU: 0 PID: 182 Comm: systemd-journal Not tainted 3.13.6 #3
[   10.942007] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[   10.942007] task: 88001cc08000 ti: 88001d70e000 task.ti:
88001d70e000
[   10.942007] RIP: 0033:[<7fe61b2fce8a>]  [<7fe61b2fce8a>]
0x7fe61b2fce8a
[   10.942007] RSP: 002b:7fffee7468d8  EFLAGS: 00010286
[   10.942007] RAX:  RBX: 0043344e RCX: 00430a70
[   10.942007] RDX: 0010 RSI:  RDI: 00430a70
[   10.942007] RBP: 7fffee747130 R08: 0003 R09: 7fe61be81780
[   10.942007] R10:  R11: 0246 R12: 0001
[   10.942007] R13: 01c9c380 R14: 0003 R15: 7fffee747148
[   10.942007] FS:  7fe61be81780() GS:88001f80()
knlGS:
[   10.942007] CS:  0010 DS:  ES:  CR0: 80050033
[   10.942007] CR2:  CR3: 1e1a2000 CR4: 06f0
[   10.942007]
[   10.942007] Kernel panic - not syncing: Machine halted.


-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru


Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-06 Thread Richard W.M. Jones
On Tue, Mar 04, 2014 at 09:13:40AM +0100, Paolo Bonzini wrote:
> Il 04/03/2014 03:40, Ian Pilcher ha scritto:
> >Is this a known problem?  I just tried using nested vmx for the first
> >time since upgrading my system from F19 (3.12.?? at the time) to F20,
> >and I cannot start any L2 guests.  The L2 guest appears to hang almost
> >immediately after starting, consuming 100% of one of the L1 guest's
> >VCPUs.
> >
> >If I reboot with kernel-3.12.10-300.fc20.x86_64, the problem does not
> >occur.
> >
> >Any known workaround?  (Other than using 3.12.10?)
> 
> There is a fix on the way to the 3.13 kernel.
> 
> You can open a Fedora bug and ask them to include
> http://article.gmane.org/gmane.linux.kernel.stable/82043/raw in the
> kernel.

Thanks for fixing this.  It affects a lot of libguestfs users too.

I opened this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1073663

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top


Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-04 Thread Ian Pilcher
On 03/04/2014 03:30 AM, Kashyap Chamarthy wrote:
> If you want to try, I made a Fedora Kernel scratch build (i.e. not
> official) with fix Paolo pointed to below and this works for me:
> 
>   http://koji.fedoraproject.org/koji/taskinfo?taskID=6577700

Works here.  Thanks!

-- 

Ian Pilcher arequip...@gmail.com
   Sent from the cloud -- where it's already tomorrow



Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-04 Thread Kashyap Chamarthy
On Tue, Mar 04, 2014 at 03:00:22PM +0530, Kashyap Chamarthy wrote:
> On Tue, Mar 04, 2014 at 09:13:40AM +0100, Paolo Bonzini wrote:
> > Il 04/03/2014 03:40, Ian Pilcher ha scritto:
> > >Is this a known problem?  I just tried using nested vmx for the first
> > >time since upgrading my system from F19 (3.12.?? at the time) to F20,
> > >and I cannot start any L2 guests.  The L2 guest appears to hang almost
> > >immediately after starting, consuming 100% of one of the L1 guest's
> > >VCPUs.
> > >
> > >If I reboot with kernel-3.12.10-300.fc20.x86_64, the problem does not
> > >occur.

Err, I missed reading this. Sorry about that.

> > >
> > >Any known workaround?  (Other than using 3.12.10?)
> 
> If you want to try, I made a Fedora Kernel scratch build (i.e. not
> official) with fix Paolo pointed to below and this works for me:
> 
>   http://koji.fedoraproject.org/koji/taskinfo?taskID=6577700
> 
> (NOTE: Fedora Scratch build URLs won't last more than 10 days or so)
> 
> > 
> > There is a fix on the way to the 3.13 kernel.
> > 
> > You can open a Fedora bug and ask them to include
> > http://article.gmane.org/gmane.linux.kernel.stable/82043/raw in the
> > kernel.
> > 
> > Paolo
> 
> -- 
> /kashyap

-- 
/kashyap


Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-04 Thread Kashyap Chamarthy
On Tue, Mar 04, 2014 at 09:13:40AM +0100, Paolo Bonzini wrote:
> Il 04/03/2014 03:40, Ian Pilcher ha scritto:
> >Is this a known problem?  I just tried using nested vmx for the first
> >time since upgrading my system from F19 (3.12.?? at the time) to F20,
> >and I cannot start any L2 guests.  The L2 guest appears to hang almost
> >immediately after starting, consuming 100% of one of the L1 guest's
> >VCPUs.
> >
> >If I reboot with kernel-3.12.10-300.fc20.x86_64, the problem does not
> >occur.
> >
> >Any known workaround?  (Other than using 3.12.10?)

If you want to try, I made a Fedora Kernel scratch build (i.e. not
official) with fix Paolo pointed to below and this works for me:

  http://koji.fedoraproject.org/koji/taskinfo?taskID=6577700

(NOTE: Fedora Scratch build URLs won't last more than 10 days or so)

> 
> There is a fix on the way to the 3.13 kernel.
> 
> You can open a Fedora bug and ask them to include
> http://article.gmane.org/gmane.linux.kernel.stable/82043/raw in the
> kernel.
> 
> Paolo

-- 
/kashyap


Re: [fedora-virt] 3.13 - Nested KVM (vmx) totally broken?

2014-03-04 Thread Paolo Bonzini

Il 04/03/2014 03:40, Ian Pilcher ha scritto:

Is this a known problem?  I just tried using nested vmx for the first
time since upgrading my system from F19 (3.12.?? at the time) to F20,
and I cannot start any L2 guests.  The L2 guest appears to hang almost
immediately after starting, consuming 100% of one of the L1 guest's
VCPUs.

If I reboot with kernel-3.12.10-300.fc20.x86_64, the problem does not
occur.

Any known workaround?  (Other than using 3.12.10?)


There is a fix on the way to the 3.13 kernel.

You can open a Fedora bug and ask them to include 
http://article.gmane.org/gmane.linux.kernel.stable/82043/raw in the kernel.


Paolo


Re: PCI passthrough in nested kvm

2012-01-06 Thread Alex Williamson
On Thu, 2012-01-05 at 23:50 -0500, Tian Fang wrote:
> Hi,
> 
> Nested kvm is supported. I'm wondering whether a PCI device can be
> passed through into the nested kvm. Could some experts share some
> insights?

No, there's no iommu exposed to the L1 guest, so there's no way to
program the iommu for the L2 guest.  You would also be bouncing
interrupts from the host to the L1 guest to the L2 guest, so you could
expect a performance hit with each level you add.

Alex



PCI passthrough in nested kvm

2012-01-05 Thread Tian Fang
Hi,

Nested kvm is supported. I'm wondering whether a PCI device can be
passed through into the nested kvm. Could some experts share some
insights?

Thx,
Tian Fang


Nested KVM: Inner VM fails to execute /init

2011-09-01 Thread Steffen Gebert

Hi all,

I started playing with nested KVMs, but the inner VM doesn't boot. It 
fails after grub's "Starting up ..." with

> Failed to execute /init
> Kernel panic - not syncing: No init found. Try passing init= option
> to kernel. See Linux Documentation/init.txt for guidance.
> Pid: 1, comm: init Not tainted 2.6.38-11-virtual #48-Ubuntu

> Call trace is panic <- init_post <- kernel_init <-
> kernel_thread_helper <- kernel_init <- kernel_thread_helper

The outer VM uses the kernel from kvm's git repository and the latest
libvirt release (0.9.4) on Ubuntu 10.04. The inner VM uses Ubuntu's
standard kernel + kvm.


I'm using kvm_intel (arch amd64), loaded kvm_intel with nested=1 and 
created a custom emulator script, which passes "-enable-nesting -cpu 
host" to the kvm command. I also tried "-cpu qemu64,+vmx".


Do you have a clue why /init cannot be executed? I extracted the initrd
image of the inner VM and it looks okay: /init is executable and a readable
shell script.


Inner and outer VMs have been built with ubuntu-vm-builder.

ubuntu-vm-builder kvm natty --domain innervm3 --hostname innervm3 --dest 
innervm3 --mem 128 --addpkg acpid --addpkg openssh-server --addpkg avahi-daemon 
--libvirt qemu:///system
The output is exactly the same for both runs (thus no errors are printed 
out).


How can I further debug that? I must admit that I'm not that familiar
with the whole init process.


Thank you very much for your help!

Kind regards
Steffen



[ kvm-Bugs-2915201 ] Nested kvm (SVM)

2010-11-26 Thread SourceForge.net
Bugs item #2915201, was opened at 2009-12-16 01:35
Message generated for change (Comment added) made by jessorensen
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2915201&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: amd
Group: v1.0 (example)
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: jbl001 (jbl001)
Assigned to: Nobody/Anonymous (nobody)
Summary: Nested kvm (SVM)

Initial Comment:
I have seen a couple messages where people have stated that nested SVM works 
properly, but I cannot replicate it. I first attempted to use the following 
configurations:

Hardware:
desktop system: Gigabyte MA785GM board with Athlon X2 4400
server system: Tyan h2000M board with Opteron 2354

Software:
Host OS: Ubuntu 9.10 with production kernel
Host KVM: tried kmod 2.6.32 with qemu 0.11.1 and qemu 0.12.0rc2, also tried 
git-tip
Guest VMM Host OS: Ubuntu 9.10
Guest VMM KVM: tried kmod 2.6.32 with qemu 0.11.1 and qemu 0.12.0rc2, also 
tried git-tip
True guest: tried Slackware 10.2, 64-bit Ubuntu 8.10, 64-bit Ubuntu 9.10, and 
32-bit XP

All configurations result in the true guest not booting, but the Slackware 10.2 
true guest is the easiest to analyze. It hangs at various places during boot 
with the most common being the "calibrating delay loop", "testing HLT 
instruction", mounting the hard disks, or starting the INIT processes. It seems 
it is losing interrupts. 

I also tried an older host (64-bit Ubuntu 8.10) and guest VMM (64-bit Ubuntu 
8.10) with the KVM-88 release. With this configuration, the Slackware 10.2 true 
guest will usually boot, but will then get a constant flow of "hda: lost 
interrupt" and "hda: dma_timer_expiry: dma status == 0x24". Again, it seems to 
be losing interrupts.

I have ensured that the nested=1 is passed to the module and that 
enable-nesting is passed to the qemu. It obviously works for some time and I've 
tried printing out exit reasons in the handle_exit() function of the guest VMM, 
but it consistently fails in some form or another across all the hardware and 
software I have to try it on.


--

>Comment By: Jes Sorensen (jessorensen)
Date: 2010-11-26 13:05

Message:
Did this issue get resolved? Can we close the bug? There haven't been any
updates for over 9 months.

--

Comment By: Alex Williamson (alex_williamson)
Date: 2010-02-17 20:02

Message:
Try reverting cd3ff653ae0b45bac7a19208e9c75034fcacc85f from kvm-kmod
(kvm-svm).  I ran into trouble with nested kvm about a month ago and
bisected it back to this change.  I alerted Joerg, but he might need
another poke if this fixes nesting for you too.

--

Comment By: jbl001 (jbl001)
Date: 2010-02-17 18:47

Message:
I tried this again with qemu-0.12.2 and kvm-kmod-2.6.32.3 while passing
no-kvmclock to both the host and guest VMM kernels. It did not help the
problem of lost interrupts in the true guest, however.


--

Comment By: Brian Jackson (iggy_cav)
Date: 2010-02-09 20:07

Message:
Can you try disabling kvmclock in both guests?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2915201&group_id=180599


[ kvm-Bugs-2915201 ] Nested kvm (SVM)

2010-02-17 Thread SourceForge.net
Bugs item #2915201, was opened at 2009-12-15 17:35
Message generated for change (Comment added) made by alex_williamson
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2915201&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: amd
Group: v1.0 (example)
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: jbl001 (jbl001)
Assigned to: Nobody/Anonymous (nobody)
Summary: Nested kvm (SVM)

Initial Comment:
I have seen a couple messages where people have stated that nested SVM works 
properly, but I cannot replicate it. I first attempted to use the following 
configurations:

Hardware:
desktop system: Gigabyte MA785GM board with Athlon X2 4400
server system: Tyan h2000M board with Opteron 2354

Software:
Host OS: Ubuntu 9.10 with production kernel
Host KVM: tried kmod 2.6.32 with qemu 0.11.1 and qemu 0.12.0rc2, also tried 
git-tip
Guest VMM Host OS: Ubuntu 9.10
Guest VMM KVM: tried kmod 2.6.32 with qemu 0.11.1 and qemu 0.12.0rc2, also 
tried git-tip
True guest: tried Slackware 10.2, 64-bit Ubuntu 8.10, 64-bit Ubuntu 9.10, and 
32-bit XP

All configurations result in the true guest not booting, but the Slackware 10.2 
true guest is the easiest to analyze. It hangs at various places during boot 
with the most common being the "calibrating delay loop", "testing HLT 
instruction", mounting the hard disks, or starting the INIT processes. It seems 
it is losing interrupts. 

I also tried an older host (64-bit Ubuntu 8.10) and guest VMM (64-bit Ubuntu 
8.10) with the KVM-88 release. With this configuration, the Slackware 10.2 true 
guest will usually boot, but will then get a constant flow of "hda: lost 
interrupt" and "hda: dma_timer_expiry: dma status == 0x24". Again, it seems to 
be losing interrupts.

I have ensured that the nested=1 is passed to the module and that 
enable-nesting is passed to the qemu. It obviously works for some time and I've 
tried printing out exit reasons in the handle_exit() function of the guest VMM, 
but it consistently fails in some form or another across all the hardware and 
software I have to try it on.


--

Comment By: Alex Williamson (alex_williamson)
Date: 2010-02-17 12:02

Message:
Try reverting cd3ff653ae0b45bac7a19208e9c75034fcacc85f from kvm-kmod
(kvm-svm).  I ran into trouble with nested kvm about a month ago and
bisected it back to this change.  I alerted Joerg, but he might need
another poke if this fixes nesting for you too.

--

Comment By: jbl001 (jbl001)
Date: 2010-02-17 10:47

Message:
I tried this again with qemu-0.12.2 and kvm-kmod-2.6.32.3 while passing
no-kvmclock to both the host and guest VMM kernels. It did not help the
problem of lost interrupts in the true guest, however.


--

Comment By: Brian Jackson (iggy_cav)
Date: 2010-02-09 12:07

Message:
Can you try disabling kvmclock in both guests?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2915201&group_id=180599


[ kvm-Bugs-2915201 ] Nested kvm (SVM)

2010-02-17 Thread SourceForge.net
Bugs item #2915201, was opened at 2009-12-15 16:35
Message generated for change (Comment added) made by jbl001
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2915201&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: amd
Group: v1.0 (example)
>Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: jbl001 (jbl001)
Assigned to: Nobody/Anonymous (nobody)
Summary: Nested kvm (SVM)

Initial Comment:
I have seen a couple messages where people have stated that nested SVM works 
properly, but I cannot replicate it. I first attempted to use the following 
configurations:

Hardware:
desktop system: Gigabyte MA785GM board with Athlon X2 4400
server system: Tyan h2000M board with Opteron 2354

Software:
Host OS: Ubuntu 9.10 with production kernel
Host KVM: tried kmod 2.6.32 with qemu 0.11.1 and qemu 0.12.0rc2, also tried 
git-tip
Guest VMM Host OS: Ubuntu 9.10
Guest VMM KVM: tried kmod 2.6.32 with qemu 0.11.1 and qemu 0.12.0rc2, also 
tried git-tip
True guest: tried Slackware 10.2, 64-bit Ubuntu 8.10, 64-bit Ubuntu 9.10, and 
32-bit XP

All configurations result in the true guest not booting, but the Slackware 10.2 
true guest is the easiest to analyze. It hangs at various places during boot 
with the most common being the "calibrating delay loop", "testing HLT 
instruction", mounting the hard disks, or starting the INIT processes. It seems 
it is losing interrupts. 

I also tried an older host (64-bit Ubuntu 8.10) and guest VMM (64-bit Ubuntu 
8.10) with the KVM-88 release. With this configuration, the Slackware 10.2 true 
guest will usually boot, but will then get a constant flow of "hda: lost 
interrupt" and "hda: dma_timer_expiry: dma status == 0x24". Again, it seems to 
be losing interrupts.

I have ensured that the nested=1 is passed to the module and that 
enable-nesting is passed to the qemu. It obviously works for some time and I've 
tried printing out exit reasons in the handle_exit() function of the guest VMM, 
but it consistently fails in some form or another across all the hardware and 
software I have to try it on.


--

>Comment By: jbl001 (jbl001)
Date: 2010-02-17 09:47

Message:
I tried this again with qemu-0.12.2 and kvm-kmod-2.6.32.3 while passing
no-kvmclock to both the host and guest VMM kernels. It did not help the
problem of lost interrupts in the true guest, however.


--

Comment By: Brian Jackson (iggy_cav)
Date: 2010-02-09 11:07

Message:
Can you try disabling kvmclock in both guests?

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2915201&group_id=180599


[ kvm-Bugs-2915201 ] Nested kvm (SVM)

2010-02-09 Thread SourceForge.net
Bugs item #2915201, was opened at 2009-12-15 18:35
Message generated for change (Comment added) made by iggy_cav
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2915201&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: amd
Group: v1.0 (example)
>Status: Pending
Resolution: None
Priority: 5
Private: No
Submitted By: jbl001 (jbl001)
Assigned to: Nobody/Anonymous (nobody)
Summary: Nested kvm (SVM)

Initial Comment:
I have seen a couple messages where people have stated that nested SVM works 
properly, but I cannot replicate it. I first attempted to use the following 
configurations:

Hardware:
desktop system: Gigabyte MA785GM board with Athlon X2 4400
server system: Tyan h2000M board with Opteron 2354

Software:
Host OS: Ubuntu 9.10 with production kernel
Host KVM: tried kmod 2.6.32 with qemu 0.11.1 and qemu 0.12.0rc2, also tried 
git-tip
Guest VMM Host OS: Ubuntu 9.10
Guest VMM KVM: tried kmod 2.6.32 with qemu 0.11.1 and qemu 0.12.0rc2, also 
tried git-tip
True guest: tried Slackware 10.2, 64-bit Ubuntu 8.10, 64-bit Ubuntu 9.10, and 
32-bit XP

All configurations result in the true guest not booting, but the Slackware 10.2 
true guest is the easiest to analyze. It hangs at various places during boot 
with the most common being the "calibrating delay loop", "testing HLT 
instruction", mounting the hard disks, or starting the INIT processes. It seems 
it is losing interrupts. 

I also tried an older host (64-bit Ubuntu 8.10) and guest VMM (64-bit Ubuntu 
8.10) with the KVM-88 release. With this configuration, the Slackware 10.2 true 
guest will usually boot, but will then get a constant flow of "hda: lost 
interrupt" and "hda: dma_timer_expiry: dma status == 0x24". Again, it seems to 
be losing interrupts.

I have ensured that nested=1 is passed to the module and that 
enable-nesting is passed to qemu. It obviously works for some time and I've 
tried printing out exit reasons in the handle_exit() function of the guest VMM, 
but it consistently fails in some form or another across all the hardware and 
software I have to try it on.


--

>Comment By: Brian Jackson (iggy_cav)
Date: 2010-02-09 13:07

Message:
Can you try disabling kvmclock in both guests?

--



[ kvm-Bugs-2915201 ] Nested kvm (SVM)

2009-12-15 Thread SourceForge.net
Bugs item #2915201, was opened at 2009-12-15 16:35
Message generated for change (Tracker Item Submitted) made by jbl001
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2915201&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: amd
Group: v1.0 (example)
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: jbl001 (jbl001)
Assigned to: Nobody/Anonymous (nobody)
Summary: Nested kvm (SVM)

Initial Comment:
I have seen a couple messages where people have stated that nested SVM works 
properly, but I cannot replicate it. I first attempted to use the following 
configurations:

Hardware:
desktop system: Gigabyte MA785GM board with Athlon X2 4400
server system: Tyan h2000M board with Opteron 2354

Software:
Host OS: Ubuntu 9.10 with production kernel
Host KVM: tried kmod 2.6.32 with qemu 0.11.1 and qemu 0.12.0rc2, also tried 
git-tip
Guest VMM Host OS: Ubuntu 9.10
Guest VMM KVM: tried kmod 2.6.32 with qemu 0.11.1 and qemu 0.12.0rc2, also 
tried git-tip
True guest: tried Slackware 10.2, 64-bit Ubuntu 8.10, 64-bit Ubuntu 9.10, and 
32-bit XP

All configurations result in the true guest not booting, but the Slackware 10.2 
true guest is the easiest to analyze. It hangs at various places during boot 
with the most common being the "calibrating delay loop", "testing HLT 
instruction", mounting the hard disks, or starting the INIT processes. It seems 
it is losing interrupts. 

I also tried an older host (64-bit Ubuntu 8.10) and guest VMM (64-bit Ubuntu 
8.10) with the KVM-88 release. With this configuration, the Slackware 10.2 true 
guest will usually boot, but will then get a constant flow of "hda: lost 
interrupt" and "hda: dma_timer_expiry: dma status == 0x24". Again, it seems to 
be losing interrupts.

I have ensured that nested=1 is passed to the module and that 
enable-nesting is passed to qemu. It obviously works for some time and I've 
tried printing out exit reasons in the handle_exit() function of the guest VMM, 
but it consistently fails in some form or another across all the hardware and 
software I have to try it on.


--



Re: nested KVM on AMD (proxmox in proxmox)

2009-11-29 Thread Alexander Graf

On 27.11.2009, at 17:30, Adrian Terranova wrote:

> On Fri, Nov 27, 2009 at 11:13 AM, Alexander Graf  wrote:
>> 
>> On 27.11.2009, at 17:01, Adrian Terranova wrote:
>> 
>>> On Thu, Nov 26, 2009 at 12:55 PM, Alexander Graf  wrote:
 
 On 26.11.2009, at 17:06, Adrian Terranova wrote:
 
> Hello,
> 
> Looking for a pointer to a working setup of kvm nesting kvm with svm
> extensions working throughout.
> 
> I'm working with proxmox - and trying to get a proxmox in a proxmox
> working.  KVM is called as follows from the proxmox host.
> 
> 31515 ?Sl27:15 /usr/bin/kvm -monitor
> unix:/var/run/qemu-server/109.mon,server,nowait -vnc
> unix:/var/run/qemu-server/109.vnc,password -pidfile
> /var/run/qemu-server/109.pid -daemonize -usbdevice tablet -name
> proxmoxkvmtest -smp sockets=1,cores=1 -vga cirrus -tdf -k en-us -drive
> file=/mnt/pve/nfsimages/images/109/vm-109-disk-1.raw,if=ide,index=0,boot=on
> -drive 
> file=/var/lib/vz/template/iso/proxmox-ve_1.4-4390.iso,if=ide,index=2,media=cdrom
> -m 512 -net 
> tap,vlan=0,ifname=vmtab109i0,script=/var/lib/qemu-server/bridge-vlan
> -net nic,vlan=0,model=e1000,macaddr=A2:40:B2:EF:69:B8 -id 109
> -cpuunits 1000 -enable-nesting
> 
> The key thing (it appears - is the enable nesting) - the other piece
> that it looks like it needs is a kernel argument to properly enable
> the kvm extensions because there is no
> 
> /dev/kvm
> 
> but there is an error in dmesg from the dmesg output / boot console of
> the virtualized kvm instance of the following
> 
> [snip from dmesg of first boot]
> ...
> tun: (C) 1999-2004 Max Krasnyansky 
> general protection fault:  [1] PREEMPT SMP
> CPU: 0
> Modules linked in: kvm_amd kvm vzethdev vznetdev simfs vzrst vzcpt tun 
> vzdquota
> vzmon vzdev xt_tcpudp xt_length ipt_ttl xt_tcpmss xt_TCPMSS 
> iptable_mangle iptab
> le_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables x_tables 
> ipv6 ib_is
> er rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi 
> scsi_tran
> sport_iscsi bridge virtio_balloon parport_pc parport floppy psmouse 
> pcspkr serio
> _raw e1000 joydev evdev thermal button processor sg scsi_wait_scan 
> virtio_blk dm
> _mod usbhid hid usb_storage libusual sd_mod sr_mod ide_disk ide_generic 
> ide_cd c
> drom ide_core ata_piix pata_acpi ata_generic libata scsi_mod uhci_hcd 
> usbcore i2
> c_piix4 i2c_core virtio_pci virtio_ring virtio isofs msdos fat
> Pid: 2914, comm: modprobe Not tainted 2.6.24-8-pve #1 ovz005
> RIP: 0010:[] [] 
> :kvm_amd:svm_hardware_enabl
> e+0x80/0xe0
> RSP: 0018:81001dcb5de8 EFLAGS: 00010006
> RAX: 1d01 RBX: 0010 RCX: c080
> RDX:  RSI: 88458b26 RDI: 
> RBP: 81001d49b240 R08: 0001 R09: 
> R10:  R11: 88453230 R12: 88420050
> R13: 8845c100 R14: 8845c100 R15: c21f8618
> FS: 7fe49ff576e0() GS:80628000() 
> knlGS:
> CS: 0010 DS:  ES:  CR0: 8005003b
> ...
> 
> More can be found here if you feel really interested
> 
> http://www.proxmox.com/forum/showthread.php?t=2675
> 
> trying to figure out what I missed.
 
 You need to modprobe kvm-amd with the "nested=1" parameter on the host.
 
 Alex
>>> Did that - and got the following in the guest
>>> 
>>> [snip]
>>> more dmesg output ...
>>> 
>>> kvm: Nested Virtualization enabled
>>> general protection fault:  [1] PREEMPT SMP
>> 
>> You should get "Nested Virtualization enabled" on the host and the GPF 
>> inside the guest.
>> 
>> The fact that you get the GPF tells me that kvm blocked the hardware_enable 
>> which is setting a bit in EFER. That's exactly what the enable_nested=1 
>> parameter is supposed to allow.
>> 
>> I don't really know Proxmox or what version of KVM they use. Could you 
>> please try something reasonably recent?
>> 
>> Alex
> 
> Alex,
> 
> It works - I was being stupid and setting it in the guest - not the
> host - this is what I get now (it just works)

Yep, the guest doesn't need any modifications for this to work. So in fact you 
can even run Xen HVM inside KVM. Hyper-V still breaks, but in theory getting 
that working is the goal :-).

Btw - I'd recommend using nested SVM only with nested paging capable machines. 
Doing shadow paging on the host and the guest is unbearably slow.
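Whether a machine satisfies that recommendation can be checked up front (a sketch; "npt" is the standard /proc/cpuinfo flag and npt is kvm_amd's standard module parameter):

```shell
# The CPU advertises AMD nested paging via the "npt" flag:
grep -qw npt /proc/cpuinfo && echo "CPU supports nested paging (NPT)"

# And kvm_amd reports whether it is actually using it:
cat /sys/module/kvm_amd/parameters/npt    # 1 (or Y) when NPT is in use
```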

Alex
--


Re: nested KVM on AMD (proxmox in proxmox)

2009-11-27 Thread Adrian Terranova
On Fri, Nov 27, 2009 at 11:13 AM, Alexander Graf  wrote:
>
> On 27.11.2009, at 17:01, Adrian Terranova wrote:
>
>> On Thu, Nov 26, 2009 at 12:55 PM, Alexander Graf  wrote:
>>>
>>> On 26.11.2009, at 17:06, Adrian Terranova wrote:
>>>
 Hello,

 Looking for a pointer to a working setup of kvm nesting kvm with svm
 extensions working throughout.

 I'm working with proxmox - and trying to get a proxmox in a proxmox
 working.  KVM is called as follows from the proxmox host.

 31515 ?        Sl    27:15 /usr/bin/kvm -monitor
 unix:/var/run/qemu-server/109.mon,server,nowait -vnc
 unix:/var/run/qemu-server/109.vnc,password -pidfile
 /var/run/qemu-server/109.pid -daemonize -usbdevice tablet -name
 proxmoxkvmtest -smp sockets=1,cores=1 -vga cirrus -tdf -k en-us -drive
 file=/mnt/pve/nfsimages/images/109/vm-109-disk-1.raw,if=ide,index=0,boot=on
 -drive 
 file=/var/lib/vz/template/iso/proxmox-ve_1.4-4390.iso,if=ide,index=2,media=cdrom
 -m 512 -net 
 tap,vlan=0,ifname=vmtab109i0,script=/var/lib/qemu-server/bridge-vlan
 -net nic,vlan=0,model=e1000,macaddr=A2:40:B2:EF:69:B8 -id 109
 -cpuunits 1000 -enable-nesting

 The key thing (it appears - is the enable nesting) - the other piece
 that it looks like it needs is a kernel argument to properly enable
 the kvm extensions because there is no

 /dev/kvm

 but there is an error in dmesg from the dmesg output / boot console of
 the virtualized kvm instance of the following

 [snip from dmesg of first boot]
 ...
 tun: (C) 1999-2004 Max Krasnyansky 
 general protection fault:  [1] PREEMPT SMP
 CPU: 0
 Modules linked in: kvm_amd kvm vzethdev vznetdev simfs vzrst vzcpt tun 
 vzdquota
 vzmon vzdev xt_tcpudp xt_length ipt_ttl xt_tcpmss xt_TCPMSS iptable_mangle 
 iptab
 le_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables x_tables ipv6 
 ib_is
 er rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi 
 scsi_tran
 sport_iscsi bridge virtio_balloon parport_pc parport floppy psmouse pcspkr 
 serio
 _raw e1000 joydev evdev thermal button processor sg scsi_wait_scan 
 virtio_blk dm
 _mod usbhid hid usb_storage libusual sd_mod sr_mod ide_disk ide_generic 
 ide_cd c
 drom ide_core ata_piix pata_acpi ata_generic libata scsi_mod uhci_hcd 
 usbcore i2
 c_piix4 i2c_core virtio_pci virtio_ring virtio isofs msdos fat
 Pid: 2914, comm: modprobe Not tainted 2.6.24-8-pve #1 ovz005
 RIP: 0010:[] [] 
 :kvm_amd:svm_hardware_enabl
 e+0x80/0xe0
 RSP: 0018:81001dcb5de8 EFLAGS: 00010006
 RAX: 1d01 RBX: 0010 RCX: c080
 RDX:  RSI: 88458b26 RDI: 
 RBP: 81001d49b240 R08: 0001 R09: 
 R10:  R11: 88453230 R12: 88420050
 R13: 8845c100 R14: 8845c100 R15: c21f8618
 FS: 7fe49ff576e0() GS:80628000() knlGS:
 CS: 0010 DS:  ES:  CR0: 8005003b
 ...

 More can be found here if you feel really interested

 http://www.proxmox.com/forum/showthread.php?t=2675

 trying to figure out what I missed.
>>>
>>> You need to modprobe kvm-amd with the "nested=1" parameter on the host.
>>>
>>> Alex
>> Did that - and got the following in the guest
>>
>> [snip]
>> more dmesg output ...
>>
>> kvm: Nested Virtualization enabled
>> general protection fault:  [1] PREEMPT SMP
>
> You should get "Nested Virtualization enabled" on the host and the GPF inside 
> the guest.
>
> The fact that you get the GPF tells me that kvm blocked the hardware_enable 
> which is setting a bit in EFER. That's exactly what the enable_nested=1 
> parameter is supposed to allow.
>
> I don't really know Proxmox or what version of KVM they use. Could you please 
> try something reasonably recent?
>
> Alex

Alex,

It works - I was being stupid and setting it in the guest - not the
host - this is what I get now (it just works)

on proxmox host -

/etc/modules
kvm-amd nested=1

on proxmox host
(kvm / qemu args: -enable-nesting)

then on the guest

dmesg | grep kvm

and the GPF is gone.
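Summarized, the working configuration amounts to this (a sketch: the /etc/modules entry and -enable-nesting flag are from this thread; everything else is illustrative):

```shell
# Outer (proxmox) host: load kvm-amd with nesting at every boot...
echo "kvm-amd nested=1" >> /etc/modules
# ...and pass -enable-nesting on the kvm/qemu command line of the VM
# that will itself run KVM.

# Inside that VM, modprobe kvm-amd should now succeed; verify with:
dmesg | grep -i kvm     # no general protection fault expected
```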

Thank you very much - appreciate the feedback. (If only I could READ.)

--Adrian
--


Re: nested KVM on AMD (proxmox in proxmox)

2009-11-27 Thread Alexander Graf

On 27.11.2009, at 17:01, Adrian Terranova wrote:

> On Thu, Nov 26, 2009 at 12:55 PM, Alexander Graf  wrote:
>> 
>> On 26.11.2009, at 17:06, Adrian Terranova wrote:
>> 
>>> Hello,
>>> 
>>> Looking for a pointer to a working setup of kvm nesting kvm with svm
>>> extensions working throughout.
>>> 
>>> I'm working with proxmox - and trying to get a proxmox in a proxmox
>>> working.  KVM is called as follows from the proxmox host.
>>> 
>>> 31515 ?Sl27:15 /usr/bin/kvm -monitor
>>> unix:/var/run/qemu-server/109.mon,server,nowait -vnc
>>> unix:/var/run/qemu-server/109.vnc,password -pidfile
>>> /var/run/qemu-server/109.pid -daemonize -usbdevice tablet -name
>>> proxmoxkvmtest -smp sockets=1,cores=1 -vga cirrus -tdf -k en-us -drive
>>> file=/mnt/pve/nfsimages/images/109/vm-109-disk-1.raw,if=ide,index=0,boot=on
>>> -drive 
>>> file=/var/lib/vz/template/iso/proxmox-ve_1.4-4390.iso,if=ide,index=2,media=cdrom
>>> -m 512 -net 
>>> tap,vlan=0,ifname=vmtab109i0,script=/var/lib/qemu-server/bridge-vlan
>>> -net nic,vlan=0,model=e1000,macaddr=A2:40:B2:EF:69:B8 -id 109
>>> -cpuunits 1000 -enable-nesting
>>> 
>>> The key thing (it appears - is the enable nesting) - the other piece
>>> that it looks like it needs is a kernel argument to properly enable
>>> the kvm extensions because there is no
>>> 
>>> /dev/kvm
>>> 
>>> but there is an error in dmesg from the dmesg output / boot console of
>>> the virtualized kvm instance of the following
>>> 
>>> [snip from dmesg of first boot]
>>> ...
>>> tun: (C) 1999-2004 Max Krasnyansky 
>>> general protection fault:  [1] PREEMPT SMP
>>> CPU: 0
>>> Modules linked in: kvm_amd kvm vzethdev vznetdev simfs vzrst vzcpt tun 
>>> vzdquota
>>> vzmon vzdev xt_tcpudp xt_length ipt_ttl xt_tcpmss xt_TCPMSS iptable_mangle 
>>> iptab
>>> le_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables x_tables ipv6 
>>> ib_is
>>> er rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi 
>>> scsi_tran
>>> sport_iscsi bridge virtio_balloon parport_pc parport floppy psmouse pcspkr 
>>> serio
>>> _raw e1000 joydev evdev thermal button processor sg scsi_wait_scan 
>>> virtio_blk dm
>>> _mod usbhid hid usb_storage libusual sd_mod sr_mod ide_disk ide_generic 
>>> ide_cd c
>>> drom ide_core ata_piix pata_acpi ata_generic libata scsi_mod uhci_hcd 
>>> usbcore i2
>>> c_piix4 i2c_core virtio_pci virtio_ring virtio isofs msdos fat
>>> Pid: 2914, comm: modprobe Not tainted 2.6.24-8-pve #1 ovz005
>>> RIP: 0010:[] [] 
>>> :kvm_amd:svm_hardware_enabl
>>> e+0x80/0xe0
>>> RSP: 0018:81001dcb5de8 EFLAGS: 00010006
>>> RAX: 1d01 RBX: 0010 RCX: c080
>>> RDX:  RSI: 88458b26 RDI: 
>>> RBP: 81001d49b240 R08: 0001 R09: 
>>> R10:  R11: 88453230 R12: 88420050
>>> R13: 8845c100 R14: 8845c100 R15: c21f8618
>>> FS: 7fe49ff576e0() GS:80628000() knlGS:
>>> CS: 0010 DS:  ES:  CR0: 8005003b
>>> ...
>>> 
>>> More can be found here if you feel really interested
>>> 
>>> http://www.proxmox.com/forum/showthread.php?t=2675
>>> 
>>> trying to figure out what I missed.
>> 
>> You need to modprobe kvm-amd with the "nested=1" parameter on the host.
>> 
>> Alex
> Did that - and got the following in the guest
> 
> [snip]
> more dmesg output ...
> 
> kvm: Nested Virtualization enabled
> general protection fault:  [1] PREEMPT SMP

You should get "Nested Virtualization enabled" on the host and the GPF inside 
the guest.

The fact that you get the GPF tells me that kvm blocked the hardware_enable 
which is setting a bit in EFER. That's exactly what the enable_nested=1 
parameter is supposed to allow.
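The bit in question is EFER.SVME, bit 12 of MSR 0xC0000080: svm_hardware_enable() sets it via wrmsr, and the outer KVM refuses that write unless nested=1 is set, hence the GPF. A quick sanity check of the arithmetic (the pre-write EFER value here is hypothetical; only the SVME bit position is architectural):

```shell
EFER_SVME=$(( 1 << 12 ))     # bit 12 of MSR 0xC0000080 (EFER)
efer=0x0d01                  # hypothetical EFER value before the write
printf 'EFER after setting SVME: %#x\n' $(( efer | EFER_SVME ))
# prints 0x1d01, matching RAX in the oops above
```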

I don't really know Proxmox or what version of KVM they use. Could you please 
try something reasonably recent?

Alex
--


Re: nested KVM on AMD (proxmox in proxmox)

2009-11-27 Thread Adrian Terranova
On Thu, Nov 26, 2009 at 12:55 PM, Alexander Graf  wrote:
>
> On 26.11.2009, at 17:06, Adrian Terranova wrote:
>
>> Hello,
>>
>> Looking for a pointer to a working setup of kvm nesting kvm with svm
>> extensions working throughout.
>>
>> I'm working with proxmox - and trying to get a proxmox in a proxmox
>> working.  KVM is called as follows from the proxmox host.
>>
>> 31515 ?        Sl    27:15 /usr/bin/kvm -monitor
>> unix:/var/run/qemu-server/109.mon,server,nowait -vnc
>> unix:/var/run/qemu-server/109.vnc,password -pidfile
>> /var/run/qemu-server/109.pid -daemonize -usbdevice tablet -name
>> proxmoxkvmtest -smp sockets=1,cores=1 -vga cirrus -tdf -k en-us -drive
>> file=/mnt/pve/nfsimages/images/109/vm-109-disk-1.raw,if=ide,index=0,boot=on
>> -drive 
>> file=/var/lib/vz/template/iso/proxmox-ve_1.4-4390.iso,if=ide,index=2,media=cdrom
>> -m 512 -net 
>> tap,vlan=0,ifname=vmtab109i0,script=/var/lib/qemu-server/bridge-vlan
>> -net nic,vlan=0,model=e1000,macaddr=A2:40:B2:EF:69:B8 -id 109
>> -cpuunits 1000 -enable-nesting
>>
>> The key thing (it appears - is the enable nesting) - the other piece
>> that it looks like it needs is a kernel argument to properly enable
>> the kvm extensions because there is no
>>
>> /dev/kvm
>>
>> but there is an error in dmesg from the dmesg output / boot console of
>> the virtualized kvm instance of the following
>>
>> [snip from dmesg of first boot]
>> ...
>> tun: (C) 1999-2004 Max Krasnyansky 
>> general protection fault:  [1] PREEMPT SMP
>> CPU: 0
>> Modules linked in: kvm_amd kvm vzethdev vznetdev simfs vzrst vzcpt tun 
>> vzdquota
>> vzmon vzdev xt_tcpudp xt_length ipt_ttl xt_tcpmss xt_TCPMSS iptable_mangle 
>> iptab
>> le_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables x_tables ipv6 
>> ib_is
>> er rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi 
>> scsi_tran
>> sport_iscsi bridge virtio_balloon parport_pc parport floppy psmouse pcspkr 
>> serio
>> _raw e1000 joydev evdev thermal button processor sg scsi_wait_scan 
>> virtio_blk dm
>> _mod usbhid hid usb_storage libusual sd_mod sr_mod ide_disk ide_generic 
>> ide_cd c
>> drom ide_core ata_piix pata_acpi ata_generic libata scsi_mod uhci_hcd 
>> usbcore i2
>> c_piix4 i2c_core virtio_pci virtio_ring virtio isofs msdos fat
>> Pid: 2914, comm: modprobe Not tainted 2.6.24-8-pve #1 ovz005
>> RIP: 0010:[] [] 
>> :kvm_amd:svm_hardware_enabl
>> e+0x80/0xe0
>> RSP: 0018:81001dcb5de8 EFLAGS: 00010006
>> RAX: 1d01 RBX: 0010 RCX: c080
>> RDX:  RSI: 88458b26 RDI: 
>> RBP: 81001d49b240 R08: 0001 R09: 
>> R10:  R11: 88453230 R12: 88420050
>> R13: 8845c100 R14: 8845c100 R15: c21f8618
>> FS: 7fe49ff576e0() GS:80628000() knlGS:
>> CS: 0010 DS:  ES:  CR0: 8005003b
>> ...
>>
>> More can be found here if you feel really interested
>>
>> http://www.proxmox.com/forum/showthread.php?t=2675
>>
>> trying to figure out what I missed.
>
> You need to modprobe kvm-amd with the "nested=1" parameter on the host.
>
> Alex
I wasn't sure from reading prior posts if maybe I need a specific CPU argument.

This is the host CPU /proc/cpuinfo
vhost01:~# cat /proc/cpuinfo
processor   : 0
vendor_id   : AuthenticAMD
cpu family  : 15
model   : 107
model name  : AMD Athlon(tm) 64 X2 Dual Core Processor 5000+
stepping: 2
cpu MHz : 2599.994
cache size  : 512 KB
physical id : 0
siblings: 2
core id : 0
cpu cores   : 2
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
rdtscp lm 3dnowext 3dnow rep_good pni cx16 lahf_lm cmp_legacy svm
extapic cr8_legacy 3dnowprefetch
bogomips: 5205.44
TLB size: 1024 4K pages
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc 100mhzsteps

processor   : 1
vendor_id   : AuthenticAMD
cpu family  : 15
model   : 107
model name  : AMD Athlon(tm) 64 X2 Dual Core Processor 5000+
stepping: 2
cpu MHz : 2599.994
cache size  : 512 KB
physical id : 0
siblings: 2
core id : 1
cpu cores   : 2
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
rdtscp lm 3dnowext 3dnow rep_good pni cx16 lahf_lm cmp_legacy svm
extapic cr8_legacy 3dnowprefetch
bogomips: 5199.97
TLB size: 1024 4K pages
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc
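The line to look for in the listing above is the "svm" entry in flags. Note that this family-15 Athlon X2 has no "npt" flag, so nested SVM on it falls back to shadow-on-shadow paging (a sketch using the standard /proc/cpuinfo flags):

```shell
# SVM capability, as advertised by the CPU:
grep -qw svm /proc/cpuinfo && echo "SVM present"

# Nested paging is absent on this CPU generation:
grep -qw npt /proc/cpuinfo || echo "no NPT flag: expect slow nested guests"
```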

Re: nested KVM on AMD (proxmox in proxmox)

2009-11-27 Thread Adrian Terranova
On Thu, Nov 26, 2009 at 12:55 PM, Alexander Graf  wrote:
>
> On 26.11.2009, at 17:06, Adrian Terranova wrote:
>
>> Hello,
>>
>> Looking for a pointer to a working setup of kvm nesting kvm with svm
>> extensions working throughout.
>>
>> I'm working with proxmox - and trying to get a proxmox in a proxmox
>> working.  KVM is called as follows from the proxmox host.
>>
>> 31515 ?        Sl    27:15 /usr/bin/kvm -monitor
>> unix:/var/run/qemu-server/109.mon,server,nowait -vnc
>> unix:/var/run/qemu-server/109.vnc,password -pidfile
>> /var/run/qemu-server/109.pid -daemonize -usbdevice tablet -name
>> proxmoxkvmtest -smp sockets=1,cores=1 -vga cirrus -tdf -k en-us -drive
>> file=/mnt/pve/nfsimages/images/109/vm-109-disk-1.raw,if=ide,index=0,boot=on
>> -drive 
>> file=/var/lib/vz/template/iso/proxmox-ve_1.4-4390.iso,if=ide,index=2,media=cdrom
>> -m 512 -net 
>> tap,vlan=0,ifname=vmtab109i0,script=/var/lib/qemu-server/bridge-vlan
>> -net nic,vlan=0,model=e1000,macaddr=A2:40:B2:EF:69:B8 -id 109
>> -cpuunits 1000 -enable-nesting
>>
>> The key thing (it appears - is the enable nesting) - the other piece
>> that it looks like it needs is a kernel argument to properly enable
>> the kvm extensions because there is no
>>
>> /dev/kvm
>>
>> but there is an error in dmesg from the dmesg output / boot console of
>> the virtualized kvm instance of the following
>>
>> [snip from dmesg of first boot]
>> ...
>> tun: (C) 1999-2004 Max Krasnyansky 
>> general protection fault:  [1] PREEMPT SMP
>> CPU: 0
>> Modules linked in: kvm_amd kvm vzethdev vznetdev simfs vzrst vzcpt tun 
>> vzdquota
>> vzmon vzdev xt_tcpudp xt_length ipt_ttl xt_tcpmss xt_TCPMSS iptable_mangle 
>> iptab
>> le_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables x_tables ipv6 
>> ib_is
>> er rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi 
>> scsi_tran
>> sport_iscsi bridge virtio_balloon parport_pc parport floppy psmouse pcspkr 
>> serio
>> _raw e1000 joydev evdev thermal button processor sg scsi_wait_scan 
>> virtio_blk dm
>> _mod usbhid hid usb_storage libusual sd_mod sr_mod ide_disk ide_generic 
>> ide_cd c
>> drom ide_core ata_piix pata_acpi ata_generic libata scsi_mod uhci_hcd 
>> usbcore i2
>> c_piix4 i2c_core virtio_pci virtio_ring virtio isofs msdos fat
>> Pid: 2914, comm: modprobe Not tainted 2.6.24-8-pve #1 ovz005
>> RIP: 0010:[] [] 
>> :kvm_amd:svm_hardware_enabl
>> e+0x80/0xe0
>> RSP: 0018:81001dcb5de8 EFLAGS: 00010006
>> RAX: 1d01 RBX: 0010 RCX: c080
>> RDX:  RSI: 88458b26 RDI: 
>> RBP: 81001d49b240 R08: 0001 R09: 
>> R10:  R11: 88453230 R12: 88420050
>> R13: 8845c100 R14: 8845c100 R15: c21f8618
>> FS: 7fe49ff576e0() GS:80628000() knlGS:
>> CS: 0010 DS:  ES:  CR0: 8005003b
>> ...
>>
>> More can be found here if you feel really interested
>>
>> http://www.proxmox.com/forum/showthread.php?t=2675
>>
>> trying to figure out what I missed.
>
> You need to modprobe kvm-amd with the "nested=1" parameter on the host.
>
> Alex
Did that - and got the following in the guest

[snip]
more dmesg output ...

kvm: Nested Virtualization enabled
general protection fault:  [1] PREEMPT SMP
CPU: 0
Modules linked in: kvm_amd kvm virtio_balloon parport_pc parport
floppy psmouse pcspkr serio_raw e1000 joydev evdev button thermal
processor sg scsi_wait_scan virtio_blk dm_mod usbhid hid usb_storage
libusual sd_mod sr_mod ide_disk ide_generic ide_cd cdrom ide_core
ata_piix pata_acpi ata_generic libata scsi_mod uhci_hcd usbcore
i2c_piix4 i2c_core virtio_pci virtio_ring virtio isofs msdos fat
Pid: 2271, comm: modprobe Not tainted 2.6.24-8-pve #1 ovz005
RIP: 0010:[]  []
:kvm_amd:svm_hardware_enable+0x80/0xe0
RSP: 0018:81001ac31de8  EFLAGS: 00010006
RAX: 1d01 RBX: 0010 RCX: c080
RDX:  RSI: 8829eb26 RDI: 
RBP: 810019126180 R08: 0001 R09: 
R10:  R11: 88299230 R12: 88266050
R13: 882a2100 R14: 882a2100 R15: c21f2618
FS:  7f5f86ed96e0() GS:80628000() knlGS:
CS:  0010 DS:  ES:  CR0: 8005003b
CR2: 7f80e22f8098 CR3: 1f5be000 CR4: 06e0
DR0:  DR1:  DR2: 
DR3:  DR6: 0ff0 DR7: 0400
Process modprobe (pid: 2271, veid=0, threadinfo 81001ac3, task
81001e1431c0)
Stack:  80692080  81001ac31e4c 
  80247cf6 0040 81001ac31e4c
 2880 882663f7 0001 882a2100
Call Trace:
 [] on_each_cpu+0x36/0x80
 [] :kvm:kvm_init+0x187/0x2d0
 [] sys_init_module+0x192/0x1af0
 [] alloc_pages_current+0x0/0x160
 [] system

Re: nested KVM on AMD (proxmox in proxmox)

2009-11-26 Thread Alexander Graf

On 26.11.2009, at 17:06, Adrian Terranova wrote:

> Hello,
> 
> Looking for a pointer to a working setup of kvm nesting kvm with svm
> extensions working throughout.
> 
> I'm working with proxmox - and trying to get a proxmox in a proxmox
> working.  KVM is called as follows from the proxmox host.
> 
> 31515 ?Sl27:15 /usr/bin/kvm -monitor
> unix:/var/run/qemu-server/109.mon,server,nowait -vnc
> unix:/var/run/qemu-server/109.vnc,password -pidfile
> /var/run/qemu-server/109.pid -daemonize -usbdevice tablet -name
> proxmoxkvmtest -smp sockets=1,cores=1 -vga cirrus -tdf -k en-us -drive
> file=/mnt/pve/nfsimages/images/109/vm-109-disk-1.raw,if=ide,index=0,boot=on
> -drive 
> file=/var/lib/vz/template/iso/proxmox-ve_1.4-4390.iso,if=ide,index=2,media=cdrom
> -m 512 -net 
> tap,vlan=0,ifname=vmtab109i0,script=/var/lib/qemu-server/bridge-vlan
> -net nic,vlan=0,model=e1000,macaddr=A2:40:B2:EF:69:B8 -id 109
> -cpuunits 1000 -enable-nesting
> 
> The key thing (it appears - is the enable nesting) - the other piece
> that it looks like it needs is a kernel argument to properly enable
> the kvm extensions because there is no
> 
> /dev/kvm
> 
> but there is an error in dmesg from the dmesg output / boot console of
> the virtualized kvm instance of the following
> 
> [snip from dmesg of first boot]
> ...
> tun: (C) 1999-2004 Max Krasnyansky 
> general protection fault:  [1] PREEMPT SMP
> CPU: 0
> Modules linked in: kvm_amd kvm vzethdev vznetdev simfs vzrst vzcpt tun 
> vzdquota
> vzmon vzdev xt_tcpudp xt_length ipt_ttl xt_tcpmss xt_TCPMSS iptable_mangle 
> iptab
> le_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables x_tables ipv6 
> ib_is
> er rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi 
> scsi_tran
> sport_iscsi bridge virtio_balloon parport_pc parport floppy psmouse pcspkr 
> serio
> _raw e1000 joydev evdev thermal button processor sg scsi_wait_scan virtio_blk 
> dm
> _mod usbhid hid usb_storage libusual sd_mod sr_mod ide_disk ide_generic 
> ide_cd c
> drom ide_core ata_piix pata_acpi ata_generic libata scsi_mod uhci_hcd usbcore 
> i2
> c_piix4 i2c_core virtio_pci virtio_ring virtio isofs msdos fat
> Pid: 2914, comm: modprobe Not tainted 2.6.24-8-pve #1 ovz005
> RIP: 0010:[] [] 
> :kvm_amd:svm_hardware_enabl
> e+0x80/0xe0
> RSP: 0018:81001dcb5de8 EFLAGS: 00010006
> RAX: 1d01 RBX: 0010 RCX: c080
> RDX:  RSI: 88458b26 RDI: 
> RBP: 81001d49b240 R08: 0001 R09: 
> R10:  R11: 88453230 R12: 88420050
> R13: 8845c100 R14: 8845c100 R15: c21f8618
> FS: 7fe49ff576e0() GS:80628000() knlGS:
> CS: 0010 DS:  ES:  CR0: 8005003b
> ...
> 
> More can be found here if you feel really interested
> 
> http://www.proxmox.com/forum/showthread.php?t=2675
> 
> trying to figure out what I missed.

You need to modprobe kvm-amd with the "nested=1" parameter on the host.

Alex
--


nested KVM on AMD (proxmox in proxmox)

2009-11-26 Thread Adrian Terranova
Hello,

Looking for a pointer to a working setup of kvm nesting kvm with svm
extensions working throughout.

I'm working with proxmox - and trying to get a proxmox in a proxmox
working.  KVM is called as follows from the proxmox host.

31515 ?Sl27:15 /usr/bin/kvm -monitor
unix:/var/run/qemu-server/109.mon,server,nowait -vnc
unix:/var/run/qemu-server/109.vnc,password -pidfile
/var/run/qemu-server/109.pid -daemonize -usbdevice tablet -name
proxmoxkvmtest -smp sockets=1,cores=1 -vga cirrus -tdf -k en-us -drive
file=/mnt/pve/nfsimages/images/109/vm-109-disk-1.raw,if=ide,index=0,boot=on
-drive 
file=/var/lib/vz/template/iso/proxmox-ve_1.4-4390.iso,if=ide,index=2,media=cdrom
-m 512 -net tap,vlan=0,ifname=vmtab109i0,script=/var/lib/qemu-server/bridge-vlan
-net nic,vlan=0,model=e1000,macaddr=A2:40:B2:EF:69:B8 -id 109
-cpuunits 1000 -enable-nesting

The key thing (it appears - is the enable nesting) - the other piece
that it looks like it needs is a kernel argument to properly enable
the kvm extensions because there is no

/dev/kvm

but there is an error in dmesg from the dmesg output / boot console of
the virtualized kvm instance of the following

[snip from dmesg of first boot]
...
tun: (C) 1999-2004 Max Krasnyansky 
general protection fault:  [1] PREEMPT SMP
CPU: 0
Modules linked in: kvm_amd kvm vzethdev vznetdev simfs vzrst vzcpt tun vzdquota
vzmon vzdev xt_tcpudp xt_length ipt_ttl xt_tcpmss xt_TCPMSS iptable_mangle iptab
le_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables x_tables ipv6 ib_is
er rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi scsi_tran
sport_iscsi bridge virtio_balloon parport_pc parport floppy psmouse pcspkr serio
_raw e1000 joydev evdev thermal button processor sg scsi_wait_scan virtio_blk dm
_mod usbhid hid usb_storage libusual sd_mod sr_mod ide_disk ide_generic ide_cd c
drom ide_core ata_piix pata_acpi ata_generic libata scsi_mod uhci_hcd usbcore i2
c_piix4 i2c_core virtio_pci virtio_ring virtio isofs msdos fat
Pid: 2914, comm: modprobe Not tainted 2.6.24-8-pve #1 ovz005
RIP: 0010:[] [] :kvm_amd:svm_hardware_enabl
e+0x80/0xe0
RSP: 0018:81001dcb5de8 EFLAGS: 00010006
RAX: 1d01 RBX: 0010 RCX: c080
RDX:  RSI: 88458b26 RDI: 
RBP: 81001d49b240 R08: 0001 R09: 
R10:  R11: 88453230 R12: 88420050
R13: 8845c100 R14: 8845c100 R15: c21f8618
FS: 7fe49ff576e0() GS:80628000() knlGS:
CS: 0010 DS:  ES:  CR0: 8005003b
...

More can be found here if you are really interested:

http://www.proxmox.com/forum/showthread.php?t=2675

I am still trying to figure out what I missed.


--Adrian
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Nested kvm -> vmware esx

2009-02-13 Thread Alexander Graf

Does the first or second level guest panic?

Alex

On 13.02.2009, at 18:38, Jeffry Molanus wrote:



I have applied the patches you suggested and the system boots. The
system boots *very* slowly and panics once it boots. I do not have the
dump with me right now, but it's something in sched.c.

Jeffry

On Thu, Feb 12, 2009 at 6:25 PM, Alexander Graf  wrote:

Jeffry Molanus wrote:

Ps. I did try the -cpu switches, of course

Jeffry



Please try to use the -cpu phenom CPU and Nested Paging on an AMD CPU.
That's the only configuration I got things working at least a bit with.

If that works so far, apply the VMware backdoor patch I posted on the
qemu list, that exposes the TSC speed to ESX.

Using that configuration I was able to run ReactOS in the nested guest
pretty well. Linux got stuck somewhere in udev though. Don't try 64-bit
Linux yet.

Alex




Re: Nested kvm -> vmware esx

2009-02-13 Thread Jeffry Molanus
I have applied the patches you suggested and the system boots. The
system boots *very* slowly and panics once it boots. I do not have the
dump with me right now, but it's something in sched.c.

Jeffry

On Thu, Feb 12, 2009 at 6:25 PM, Alexander Graf  wrote:
> Jeffry Molanus wrote:
>> Ps. I did try the -cpu switches, of course
>>
>> Jeffry
>
>
> Please try to use the -cpu phenom CPU and Nested Paging on an AMD CPU.
> That's the only configuration I got things working at least a bit with.
> If that works so far, apply the VMware backdoor patch I posted on the
> qemu list, that exposes the TSC speed to ESX.
>
> Using that configuration I was able to run ReactOS in the nested guest
> pretty well. Linux got stuck somewhere in udev though. Don't try 64-bit
> Linux yet.
>
> Alex
>


Re: Nested kvm -> vmware esx

2009-02-12 Thread Alexander Graf
Jeffry Molanus wrote:
> Ps. I did try the -cpu switches, of course
>
> Jeffry


Please try to use the -cpu phenom CPU and Nested Paging on an AMD CPU.
That's the only configuration I got things working at least a bit with.
If that works so far, apply the VMware backdoor patch I posted on the
qemu list, that exposes the TSC speed to ESX.

Using that configuration I was able to run ReactOS in the nested guest
pretty well. Linux got stuck somewhere in udev though. Don't try 64-bit
Linux yet.

Alex


Re: Nested kvm -> vmware esx

2009-02-12 Thread Jeffry Molanus

Ps. I did try the -cpu switches, of course

Jeffry

Jeffry Molanus wrote:

Hi all,

I want to run VMware ESXi in a KVM virtual machine; however, the CPU is
not supported by VMware, as it's an AMD QEMU CPU. Is there a workaround
for this, like setting another CPU ID or something?


Kind regards,

Jeffry





Nested kvm -> vmware esx

2009-02-12 Thread Jeffry Molanus

Hi all,

I want to run VMware ESXi in a KVM virtual machine; however, the CPU is
not supported by VMware, as it's an AMD QEMU CPU. Is there a workaround
for this, like setting another CPU ID or something?


Kind regards,

Jeffry


Re: Nested KVM

2008-12-31 Thread Alexander Graf


On 30.12.2008, at 18:19, Todd Deshane wrote:


Some more information on this one:

I just pulled and installed the latest userspace. Now I get
a kernel panic: not syncing: IO-APIC + timer doesn't work!
Boot with apic=debug and send a report. Then try booting
with the 'noapic' option. (This is during the Ubuntu CD boot
with the qemu line as:
qemu-system-x86_64 -hda ubuntu-server.img -cdrom
ubuntu-8.10-server-amd64.iso)


This message comes when fewer than n interrupts were received within m
loop cycles. I actually do get that with native kvm too sometimes.
Just try to boot the VM again.



I will try as the error says, but I think I am going to need
a better console (serial or vnc or something), since the
crash leaves the guest in an unusable state: I can't ssh
or anything, and it is spinning the CPU at over 100%.


Eh - the L1 guest or the L2 (nested) guest? The L1 guest should never
get to an unusable state.



I'll report back any new information. Any debugging tips
for this type of situation are most welcome.


Hum. I usually do a -monitor stdio for the nested kvm instance, but  
apart from that it's more a question of improvising :).


PS: I won't be around any AMD machine until next week, so I can't  
really see if it works reliably for me.


Alex



Re: Nested KVM

2008-12-30 Thread Todd Deshane
Some more information on this one:

I just pulled and installed the latest userspace. Now I get
a kernel panic: not syncing: IO-APIC + timer doesn't work!
Boot with apic=debug and send a report. Then try booting
with the 'noapic' option. (This is during the Ubuntu CD boot
with the qemu line as:
qemu-system-x86_64 -hda ubuntu-server.img -cdrom
ubuntu-8.10-server-amd64.iso)

I will try as the error says, but I think I am going to need
a better console (serial or vnc or something), since the
crash leaves the guest in an unusable state: I can't ssh
or anything, and it is spinning the CPU at over 100%.

I'll report back any new information. Any debugging tips
for this type of situation are most welcome.

Thanks,
Todd

-- 
Todd Deshane
http://todddeshane.net
http://runningxen.com


Re: Nested KVM

2008-12-29 Thread Todd Deshane
On Wed, Dec 24, 2008 at 4:20 AM, Alexander Graf  wrote:
>
>
> Ugh. Looks like the emulation part is still broken :-(. Please use the
> attached patch to disable the emulation optimization for now.
>
> Avi, could you please apply that patch for kvm-82 too, so we get something
> working out? I'll take a closer look at what's broken exactly later on.
>
> Alex
>
>

So I am working with the latest git, from today.

The emulation error went away and the nested KVM guest partially works.

The errors that I am seeing late in the normal guest boot (which seem
non-fatal) are:
Dec 29 18:33:31 amdbox kernel: [ 1060.446054] bad partial csum:
csum=5888/5694 len=80
Dec 29 18:33:33 amdbox kernel: [ 1061.934164] bad partial csum:
csum=5888/5694 len=80
Dec 29 18:33:33 amdbox kernel: [ 1062.170127] bad partial csum:
csum=5888/5694 len=60
Dec 29 18:33:34 amdbox kernel: [ 1063.419124] bad partial csum:
csum=5888/5694 len=270
Dec 29 18:33:35 amdbox kernel: [ 1063.667817] bad partial csum:
csum=5888/5694 len=270
Dec 29 18:33:35 amdbox kernel: [ 1063.927839] bad partial csum:
csum=5888/5694 len=270
Dec 29 18:33:35 amdbox kernel: [ 1064.126336] bad partial csum:
csum=5888/5694 len=252
Dec 29 18:33:35 amdbox kernel: [ 1064.274429] bad partial csum:
csum=5888/5694 len=152
Dec 29 18:33:35 amdbox kernel: [ 1064.522702] bad partial csum:
csum=5888/5694 len=152
Dec 29 18:33:36 amdbox kernel: [ 1064.776290] bad partial csum:
csum=5888/5694 len=152
Dec 29 18:33:38 amdbox kernel: [ 1067.309123] __ratelimit: 4 callbacks
suppressed
Dec 29 18:33:38 amdbox kernel: [ 1067.309126] bad partial csum:
csum=5888/5694 len=252
Dec 29 18:33:39 amdbox kernel: [ 1068.160737] bad partial csum:
csum=5888/5694 len=241
Dec 29 18:33:41 amdbox kernel: [ 1070.170049] bad partial csum:
csum=5888/5694 len=60

After that I am able to start the nested guest with:
sudo qemu-system-x86_64 -hda ubuntu-server.img -cdrom
Desktop/ubuntu-8.10-server-amd64.iso

The nested guest also has the latest git checkout

The nested guest shows the Ubuntu install CD welcome screen; after
selecting a language and starting the boot process, it gets a very
little way before the screen goes black.

The nested guest doesn't crash, but becomes very unresponsive: I can't
ping it, can't ssh, etc. It seems like it only runs for a short time
(less than 30 seconds) before it becomes unresponsive.

I can attach gdb to the qemu-system-x86_64 process:

(gdb) where
#0  0x7fa8cc4a1482 in select () from /lib/libc.so.6
#1  0x00408bcb in main_loop_wait (timeout=0)
at /backup/src/kvm-src/kvm-userspace/qemu/vl.c:3617
#2  0x005160fa in kvm_main_loop ()
at /backup/src/kvm-src/kvm-userspace/qemu/qemu-kvm.c:599
#3  0x0040d106 in main (argc=,
argv=0x7fffd58e9f48, envp=)
at /backup/src/kvm-src/kvm-userspace/qemu/vl.c:3779

After some time, the qemu-system-x86_64 process starts to take
between 97 and 100% of the CPU.

The base system is still running OK, but no new messages are printed
in /var/log/syslog

I am sure there are more KVM debugging tricks.

Any suggestions?

Thanks,
Todd

-- 
Todd Deshane
http://todddeshane.net
http://runningxen.com


Re: Nested KVM

2008-12-25 Thread Alexander Graf





On 25.12.2008, at 10:59, Avi Kivity  wrote:


Alexander Graf wrote:


Avi, could you please apply that patch for kvm-82 too, so we get  
something working out? I'll take a closer look at what's broken  
exactly later on.


I'll just revert the emulation loop patch.  We can reapply it once  
we fix the problem.


Sounds good. It was rather meant as a draft/RFC anyway :-).

Alex




--
error compiling committee.c: too many arguments to function




Re: Nested KVM

2008-12-25 Thread Avi Kivity

Alexander Graf wrote:


Avi, could you please apply that patch for kvm-82 too, so we get 
something working out? I'll take a closer look at what's broken 
exactly later on.


I'll just revert the emulation loop patch.  We can reapply it once we 
fix the problem.


--
error compiling committee.c: too many arguments to function



Re: Nested KVM

2008-12-24 Thread Alexander Graf


On 24.12.2008, at 05:18, Todd Deshane wrote:

On Tue, Dec 23, 2008 at 12:04 PM, Alexander Graf wrote:

Your KVM kernel module does not like that the guest writes into
MSR_VM_HSAVE_PA. This is pretty fundamental and should always work if you
build current git kvm kernel modules. Are you sure you're using the
current git modules? Are you using the -enable-nesting option for qemu?

Please try to rmmod everything, take a fresh checkout from git, compile it
and load the module with insmod kvm-amd.ko nested=1. I can't think of any
way this could fail.



OK, so I followed your directions above much more carefully, got the
latest checkout and insmod'd kvm, kvm-amd nested=1 and watched carefully
to the syslog (dmesg).

When the kvm_amd module was loaded I get:

kvm: Nested Virtualization enabled

Good sign.

So I booted up a guest with:

sudo qemu-system-x86_64 -enable-nesting -m 512 -drive
file=/dev/storage/deshantm-desktop,if=virtio,boot=on -drive
file=/dev/storage/deshantm-temp-space,if=virtio -usb -usbdevice tablet
-net nic,macaddr=00:16:3e:16:00:00,model=virtio -net
tap,script=/usr/local/share/qemu-ifup -daemonize -vnc :16

I checked /proc/cpuinfo, which showed the svm flag (doesn't show the
svm flag without the -enable-nesting)

So all looks pretty good.


Yep. That looks all pretty good :-). A lot better than before!




During the guest boot, some normal looking messages.
Dec 23 22:42:28 amdbox kernel: [15715.578035] device tap0 entered
promiscuous mode
Dec 23 22:42:28 amdbox kernel: [15715.578059] br0: port 2(tap0)
entering learning state
Dec 23 22:42:29 amdbox avahi-daemon[5457]: Registering new address
record for fe80::f01d:36ff:fe6f:597 on tap0.*.
Dec 23 22:42:37 amdbox kernel: [15724.576010] br0: topology change
detected, propagating
Dec 23 22:42:37 amdbox kernel: [15724.576014] br0: port 2(tap0)
entering forwarding state
Dec 23 22:42:38 amdbox kernel: [15725.185009] tap0: no IPv6 routers  
present



Then, in the guest I run a simpler command:
sudo qemu-system-x86_64 -hda ubuntu-server.img -cdrom install_cd.iso
which produces dmesg in the base as follows:

Dec 23 22:44:05 amdbox kernel: [15812.088706] __ratelimit: 20
callbacks suppressed
Dec 23 22:44:05 amdbox kernel: [15812.088710] emulation failed (mmio)
rip a0370a11 0f 01 da 0f


Ugh. Looks like the emulation part is still broken :-(. Please use the  
attached patch to disable the emulation optimization for now.


Avi, could you please apply that patch for kvm-82 too, so we get  
something working out? I'll take a closer look at what's broken  
exactly later on.


Alex



disable-emulation.patch
Description: Binary data




Re: Nested KVM

2008-12-23 Thread Todd Deshane
On Tue, Dec 23, 2008 at 12:04 PM, Alexander Graf  wrote:
> Your KVM kernel module does not like that the guest writes into
> MSR_VM_HSAVE_PA. This is pretty fundamental and should always work if you
> build current git kvm kernel modules. Are you sure you're using the current
> git modules? Are you using the -enable-nesting option for qemu?
>
> Please try to rmmod everything, take a fresh checkout from git, compile it
> and load the module with insmod kvm-amd.ko nested=1. I can't think of any
> way this could fail.
>

OK, so I followed your directions above much more carefully, got the latest
checkout and insmod'd kvm, kvm-amd nested=1 and watched carefully
to the syslog (dmesg).

When the kvm_amd module was loaded I get:

kvm: Nested Virtualization enabled

Good sign.

So I booted up a guest with:

sudo qemu-system-x86_64 -enable-nesting -m 512 -drive
file=/dev/storage/deshantm-desktop,if=virtio,boot=on -drive
file=/dev/storage/deshantm-temp-space,if=virtio -usb -usbdevice tablet
 -net nic,macaddr=00:16:3e:16:00:00,model=virtio -net
tap,script=/usr/local/share/qemu-ifup -daemonize -vnc :16

I checked /proc/cpuinfo, which showed the svm flag (doesn't show the
svm flag without the -enable-nesting)

So all looks pretty good.
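The /proc/cpuinfo check described above can be scripted. A minimal sketch — the flags string below is a hypothetical sample, so the snippet is self-contained; on a real guest you would grep /proc/cpuinfo directly:

```shell
# Hypothetical flags line from a guest's /proc/cpuinfo; with -enable-nesting
# on qemu (and nested=1 on the host module) the "svm" flag should appear.
flags="fpu vme de pse tsc msr pae cx8 apic sep mmx fxsr sse sse2 lm svm"

# grep -qw matches "svm" as a whole word only.
if printf '%s\n' "$flags" | grep -qw svm; then
  echo "svm present"
else
  echo "svm missing"
fi
```

On a real guest the first line would instead be something like `flags=$(grep ^flags /proc/cpuinfo | head -1)`.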

During the guest boot, some normal looking messages.
Dec 23 22:42:28 amdbox kernel: [15715.578035] device tap0 entered
promiscuous mode
Dec 23 22:42:28 amdbox kernel: [15715.578059] br0: port 2(tap0)
entering learning state
Dec 23 22:42:29 amdbox avahi-daemon[5457]: Registering new address
record for fe80::f01d:36ff:fe6f:597 on tap0.*.
Dec 23 22:42:37 amdbox kernel: [15724.576010] br0: topology change
detected, propagating
Dec 23 22:42:37 amdbox kernel: [15724.576014] br0: port 2(tap0)
entering forwarding state
Dec 23 22:42:38 amdbox kernel: [15725.185009] tap0: no IPv6 routers present


Then, in the guest I run a simpler command:
sudo qemu-system-x86_64 -hda ubuntu-server.img -cdrom install_cd.iso
which produces dmesg in the base as follows:

Dec 23 22:44:05 amdbox kernel: [15812.088706] __ratelimit: 20
callbacks suppressed
Dec 23 22:44:05 amdbox kernel: [15812.088710] emulation failed (mmio)
rip a0370a11 0f 01 da 0f
Dec 23 22:44:05 amdbox kernel: [15812.088798] emulation failed (mmio)
rip a0370a11 0f 01 da 0f
Dec 23 22:44:05 amdbox kernel: [15812.088865] emulation failed (mmio)
rip a0370a11 0f 01 da 0f
Dec 23 22:44:05 amdbox kernel: [15812.088917] emulation failed (mmio)
rip a0370a11 0f 01 da 0f
Dec 23 22:44:05 amdbox kernel: [15812.088977] emulation failed (mmio)
rip a0370a11 0f 01 da 0f
Dec 23 22:44:05 amdbox kernel: [15812.089018] emulation failed (mmio)
rip a0370a11 0f 01 da 0f
Dec 23 22:44:05 amdbox kernel: [15812.089069] emulation failed (mmio)
rip a0370a11 0f 01 da 0f
Dec 23 22:44:05 amdbox kernel: [15812.089110] emulation failed (mmio)
rip a0370a11 0f 01 da 0f
Dec 23 22:44:05 amdbox kernel: [15812.089151] emulation failed (mmio)
rip a0370a11 0f 01 da 0f
Dec 23 22:44:05 amdbox kernel: [15812.089190] emulation failed (mmio)
rip a0370a11 0f 01 da 0f

This gives a lockup of the guest.
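As an aside, the faulting bytes in those dmesg lines can be decoded by hand: 0f 01 is the two-byte opcode shared by the SVM instructions, and the third byte selects which one (table per the AMD64 manual; this tiny decoder is only an illustrative sketch, not part of any real tool):

```shell
# Decode the third byte of an "0f 01 xx" sequence against the SVM
# instruction encodings.
decode_0f01() {
  case "$1" in
    d8) echo VMRUN ;;   d9) echo VMMCALL ;;
    da) echo VMLOAD ;;  db) echo VMSAVE ;;
    dc) echo STGI ;;    dd) echo CLGI ;;
    de) echo SKINIT ;;  df) echo INVLPGA ;;
    *)  echo "not an SVM instruction" ;;
  esac
}

# The dmesg lines show "0f 01 da" at the faulting rip:
decode_0f01 da   # VMLOAD
```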

I also then tried a simpler guest command:
sudo qemu-system-x86_64 -enable-nesting -m 512 -drive
file=/dev/storage/deshantm-desktop,if=virtio,boot=on
Which also produces the same syslog messages and locks up the guest
when it tries to start kvm.
(I tried to start the nested kvm with only a cdrom as well.)

git-log for the kernel shows:

commit 7b8052aecd9c533661493d1140cbec0e1ab311d3
Author: Alexander Graf 
Date:   Thu Dec 18 13:30:57 2008 +0100

KVM: SVM: don't run into endless loop in nested svm

With the emulation optimization after clgi, we can potentially
run into an endless loop thanks to while(true).

While this should never occur in practise, except for when
the emulation is broken or really awkward code is executed in
the VM, this wasn't a problem so far.

Signed-off-by: Alexander Graf 
Signed-off-by: Avi Kivity 

commit e72dcf1240f59174ff7c18bd461021a00ed3e38c
Author: Avi Kivity 
Date:   Tue Dec 23 19:46:01 2008 +0200

git-log on kvm-userspace shows
commit cd5b58d8a2fbd134b09f0be1de33773f162b79d4
Merge: a0b5207... f9cac6f...
Author: Avi Kivity 
Date:   Tue Dec 23 18:52:56 2008 +0200

Merge branch 'qemu-cvs'

Conflicts:
qemu/Makefile
qemu/Makefile.target
qemu/configure
qemu/hw/pc.c
qemu/hw/pc.h
qemu/hw/pci.c
qemu/hw/virtio-net.c
qemu/net.c
qemu/net.h
qemu/pc-bios/bios.bin
qemu/pc-bios/vgabios-cirrus.bin
qemu/pc-bios/vgabios.bin
qemu/target-ppc/helper.c
qemu/vl.c

Let me know if there is any other information I can provide to
help troubleshoot.

Thanks,
Todd

-- 
Todd Deshane
http://todddeshane.net
http://runningxen.com

Re: Nested KVM

2008-12-23 Thread Alexander Graf


On 23.12.2008, at 18:05, Avi Kivity wrote:


Alexander Graf wrote:



I have successfully built the latest kvm, kvm-userspace from git.

I loaded the kvm_amd module with the nested=1 option.

The kvm guest that I am working with has an older version
of kvm (Ubuntu 8.10 + kvm package) and during boot I get:

[   23.673107] Pid: 4278, comm: modprobe Tainted: G S 
2.6.27-7-generic #1

[   23.673107] RIP: 0010:[]  []
native_write_msr_safe+0xa/0x10
[   23.673107] RSP: 0018:880018dc3dd8  EFLAGS: 00010002
[   23.673107] RAX: 1d01 RBX: 880018dc3e04 RCX: c080


Maybe it is just that a newer version of KVM is needed?

I will also try to build a newer version of KVM and test.

Thanks for any comments.


Your guest is writing to an MSR that is unknown to the kvm msr  
emulation. What does dmesg on the host say?




You can actually see that it's writing to EFER (rcx = 0xc0000080).
But that should always be implemented on AMD.  It's probably
complaining about the svme bit, but that can only happen if nested=0?


Oh, right. But then again dmesg spits out a lot of these:

[51056.178705] kvm: 14970: cpu0 unhandled wrmsr: 0xc0010117 data 0

So writing HSAVE fails too, which should only happen on older KVM
versions.

But nevertheless, setting EFER only works if nested=1 is given as a
module option and -enable-nesting is used on qemu (read: the SVM CPUID
capability is set).
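For reference, the MSR numbers showing up in these dmesg lines can be mapped back to their architectural names. A small illustrative sketch (MSR numbers per the AMD64 manual; the table is not exhaustive, and the helper function name is made up for illustration):

```shell
# Map an MSR number from an "unhandled wrmsr" dmesg line to its name.
msr_name() {
  case "$1" in
    0xc0000080) echo "EFER (extended feature enable; holds the SVME bit)" ;;
    0xc0010117) echo "MSR_VM_HSAVE_PA (host save area used by VMRUN)" ;;
    *)          echo "unknown MSR $1" ;;
  esac
}

# "unhandled wrmsr: 0xc0010117" from the dmesg line above:
msr_name 0xc0010117
```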


Alex


Re: Nested KVM

2008-12-23 Thread Avi Kivity

Alexander Graf wrote:


Oh, right. But then again dmesg spits out a lot of these:

[51056.178705] kvm: 14970: cpu0 unhandled wrmsr: 0xc0010117 data 0



I think that's actually userspace writing to the msr, not the guest. 
Something's confused.



--
error compiling committee.c: too many arguments to function



Re: Nested KVM

2008-12-23 Thread Avi Kivity

Alexander Graf wrote:



I have successfully built the latest kvm, kvm-userspace from git.

I loaded the kvm_amd module with the nested=1 option.

The kvm guest that I am working with has an older version
of kvm (Ubuntu 8.10 + kvm package) and during boot I get:

[   23.673107] Pid: 4278, comm: modprobe Tainted: G S
2.6.27-7-generic #1

[   23.673107] RIP: 0010:[]  []
native_write_msr_safe+0xa/0x10
[   23.673107] RSP: 0018:880018dc3dd8  EFLAGS: 00010002
[   23.673107] RAX: 1d01 RBX: 880018dc3e04 RCX: c080


Maybe it is just that a newer version of KVM is needed?

I will also try to build a newer version of KVM and test.

Thanks for any comments.


Your guest is writing to an MSR that is unknown to the kvm msr 
emulation. What does dmesg on the host say?




You can actually see that it's writing to EFER (rcx = 0xc0000080).  But
that should always be implemented on AMD.  It's probably complaining
about the svme bit, but that can only happen if nested=0?


--
error compiling committee.c: too many arguments to function



Re: Nested KVM

2008-12-23 Thread Alexander Graf

[   23.673107] Call Trace:
[   23.673107]  [] svm_hardware_enable+0xd6/0x140
[kvm_amd]
[   23.673107]  [] ? hardware_enable+0x0/0x40 [kvm]
[   23.673107]  [] kvm_arch_hardware_enable+0x13/0x20 [kvm]
[   23.673107]  [] hardware_enable+0x2f/0x40 [kvm]
[   23.673107]  [] on_each_cpu+0x34/0x50
[   23.673107]  [] kvm_init+0x165/0x280 [kvm]
[   23.673107]  [] ? svm_init+0x0/0x23 [kvm_amd]
[   23.673107]  [] svm_init+0x21/0x23 [kvm_amd]
[   23.673107]  [] do_one_initcall+0x41/0x170
[   23.673107]  [] ?
__blocking_notifier_call_chain+0x21/0x90
[   23.673107]  [] sys_init_module+0xb5/0x1f0
[   23.673107]  [] system_call_fastpath+0x16/0x1b
[   23.673107]
[   23.673107]
[   23.673107] Code: 00 55 89 f9 48 89 e5 0f 32 31 c9 89 c7 48 89 d0
89 0e 48 c1 e0 20 89 fa 48 09 d0 c9 c3 0f 1f 40 00 55 89 f9 89 f0 48
89 e5 0f 30 <31> c0 c9 c3 66 90 55 89 f9 48 89 e5 0f 33 89 c1 48 89 d0
48 c1
[   23.673107] RIP  [] native_write_msr_safe+0xa/0x10

[   23.673107]  RSP 
[   23.673107] ---[ end trace 0dc989f1cf9a296e ]---

Maybe it is just that a newer version of KVM is needed?

I will also try to build a newer version of KVM and test.

Thanks for any comments.


Your guest is writing to an MSR that is unknown to the kvm msr  
emulation.

What does dmesg on the host say?



Attached.


Your KVM kernel module does not like that the guest writes into  
MSR_VM_HSAVE_PA. This is pretty fundamental and should always work if  
you build current git kvm kernel modules. Are you sure you're using  
the current git modules? Are you using the -enable-nesting option for  
qemu?


Please try to rmmod everything, take a fresh checkout from git,  
compile it and load the module with insmod kvm-amd.ko nested=1. I  
can't think of any way this could fail.
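After loading the module as described, the parameter can be double-checked through sysfs. A minimal sketch (assumes the standard module-parameter path; the fallback branch only keeps the snippet runnable on a machine without kvm-amd loaded):

```shell
# Verify that nested SVM was actually enabled after
# "insmod kvm-amd.ko nested=1" (or "modprobe kvm_amd nested=1").
param=/sys/module/kvm_amd/parameters/nested
if [ -r "$param" ]; then
  cat "$param"   # "1" on kernels of this era when nesting is on
else
  echo "kvm_amd not loaded"
fi
```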


Alex



Re: Nested KVM

2008-12-23 Thread Alexander Graf





On 23.12.2008, at 03:55, "Todd Deshane"  wrote:


I have successfully built the latest kvm, kvm-userspace from git.

I loaded the kvm_amd module with the nested=1 option.

The kvm guest that I am working with has an older version
of kvm (Ubuntu 8.10 + kvm package) and during boot I get:

[   23.673107] Modules linked in: kvm_amd(+) kvm ppdev cpufreq_stats
cpufreq_powersave cpufreq_userspace cpufreq_conservative
cpufreq_ondemand freq_table container pci_slot wmi video output sbs
sbshc battery
ipv6 iptable_filter ip_tables x_tables ac lp virtio_balloon
virtio_net serio_raw psmouse pcspkr evdev joydev i2c_piix4 i2c_core
parport_pc parport button ext3 jbd mbcache usbhid hid sr_mod cdrom sg
virtio_b
lk pata_acpi ata_piix ata_generic uhci_hcd virtio_pci virtio_ring
virtio usbcore libata scsi_mod dock thermal processor fan fbcon
tileblit font bitblit softcursor fuse
[   23.673107] Pid: 4278, comm: modprobe Tainted: G S 
2.6.27-7-generic #1

[   23.673107] RIP: 0010:[]  []
native_write_msr_safe+0xa/0x10
[   23.673107] RSP: 0018:880018dc3dd8  EFLAGS: 00010002
[   23.673107] RAX: 1d01 RBX: 880018dc3e04 RCX: c080
[   23.673107] RDX:  RSI: 1d01 RDI: c080
[   23.673107] RBP: 880018dc3dd8 R08:  R09: 806e1e20
[   23.673107] R10: 880018dc3dfc R11: 880018dc3df8 R12: 880018c096c0
[   23.673107] R13: a02b25e0 R14: a02e1c80 R15: c080

[   23.673107] FS:  7f2f14f946e0() GS:806e1a80()
knlGS:
[   23.673107] CS:  0010 DS:  ES:  CR0: 8005003b
[   23.673107] CR2: 7f2f14fa8000 CR3: 18d4f000 CR4: 06e0
[   23.673107] DR0:  DR1:  DR2: 
[   23.673107] DR3:  DR6: 0ff0 DR7: 0400

[   23.673107] Process modprobe (pid: 4278, threadinfo
880018dc2000, task 8800190f1670)
[   23.673107] Stack:  880018dc3e38 a02db196
88000100b07f 
[   23.673107]   0010 

[   23.673107]   a02b25e0 a02e1c80
7f2f14f9a000
[   23.673107] Call Trace:
[   23.673107]  [] svm_hardware_enable+0xd6/0x140 [kvm_amd]

[   23.673107]  [] ? hardware_enable+0x0/0x40 [kvm]
[   23.673107]  [] kvm_arch_hardware_enable+0x13/0x20 [kvm]

[   23.673107]  [] hardware_enable+0x2f/0x40 [kvm]
[   23.673107]  [] on_each_cpu+0x34/0x50
[   23.673107]  [] kvm_init+0x165/0x280 [kvm]
[   23.673107]  [] ? svm_init+0x0/0x23 [kvm_amd]
[   23.673107]  [] svm_init+0x21/0x23 [kvm_amd]
[   23.673107]  [] do_one_initcall+0x41/0x170
[   23.673107]  [] ? __blocking_notifier_call_chain+0x21/0x90

[   23.673107]  [] sys_init_module+0xb5/0x1f0
[   23.673107]  [] system_call_fastpath+0x16/0x1b
[   23.673107]
[   23.673107]
[   23.673107] Code: 00 55 89 f9 48 89 e5 0f 32 31 c9 89 c7 48 89 d0
89 0e 48 c1 e0 20 89 fa 48 09 d0 c9 c3 0f 1f 40 00 55 89 f9 89 f0 48
89 e5 0f 30 <31> c0 c9 c3 66 90 55 89 f9 48 89 e5 0f 33 89 c1 48 89 d0
48 c1
[   23.673107] RIP  [] native_write_msr_safe+0xa/0x10

[   23.673107]  RSP 
[   23.673107] ---[ end trace 0dc989f1cf9a296e ]---

Maybe it is just that a newer version of KVM is needed?

I will also try to build a newer version of KVM and test.

Thanks for any comments.


Your guest is writing to an MSR that is unknown to the kvm msr  
emulation. What does dmesg on the host say?


Alex




Cheers,
Todd

--
Todd Deshane
http://todddeshane.net
http://runningxen.com


Re: Nested KVM

2008-12-22 Thread Todd Deshane
On Mon, Dec 22, 2008 at 9:55 PM, Todd Deshane  wrote:
> I have successfully built the latest kvm, kvm-userspace from git.
>
> I loaded the kvm_amd module with the nested=1 option.
>
> The kvm guest that I am working with has an older version
> of kvm (Ubuntu 8.10 + kvm package) and during boot I get:
>
> [   23.673107] Modules linked in: kvm_amd(+) kvm ppdev cpufreq_stats

> [   23.673107] ---[ end trace 0dc989f1cf9a296e ]---
>
> Maybe it is just that a newer version of KVM is needed?
>
> I will also try to build a newer version of KVM and test.

I built the latest KVM from git on the guest, but a
similar-looking error comes up. Let me know if
it would be useful for me to post it.

What else should I try to get nested kvm working?

Thanks,
Todd

-- 
Todd Deshane
http://todddeshane.net
http://runningxen.com


Nested KVM

2008-12-22 Thread Todd Deshane
I have successfully built the latest kvm, kvm-userspace from git.

I loaded the kvm_amd module with the nested=1 option.

The kvm guest that I am working with has an older version
of kvm (Ubuntu 8.10 + kvm package) and during boot I get:

[   23.673107] Modules linked in: kvm_amd(+) kvm ppdev cpufreq_stats
cpufreq_powersave cpufreq_userspace cpufreq_conservative
cpufreq_ondemand freq_table container pci_slot wmi video output sbs
sbshc battery
 ipv6 iptable_filter ip_tables x_tables ac lp virtio_balloon
virtio_net serio_raw psmouse pcspkr evdev joydev i2c_piix4 i2c_core
parport_pc parport button ext3 jbd mbcache usbhid hid sr_mod cdrom sg
virtio_b
lk pata_acpi ata_piix ata_generic uhci_hcd virtio_pci virtio_ring
virtio usbcore libata scsi_mod dock thermal processor fan fbcon
tileblit font bitblit softcursor fuse
[   23.673107] Pid: 4278, comm: modprobe Tainted: G S2.6.27-7-generic #1
[   23.673107] RIP: 0010:[]  []
native_write_msr_safe+0xa/0x10
[   23.673107] RSP: 0018:880018dc3dd8  EFLAGS: 00010002
[   23.673107] RAX: 1d01 RBX: 880018dc3e04 RCX: c080
[   23.673107] RDX:  RSI: 1d01 RDI: c080
[   23.673107] RBP: 880018dc3dd8 R08:  R09: 806e1e20
[   23.673107] R10: 880018dc3dfc R11: 880018dc3df8 R12: 880018c096c0
[   23.673107] R13: a02b25e0 R14: a02e1c80 R15: c080
[   23.673107] FS:  7f2f14f946e0() GS:806e1a80()
knlGS:
[   23.673107] CS:  0010 DS:  ES:  CR0: 8005003b
[   23.673107] CR2: 7f2f14fa8000 CR3: 18d4f000 CR4: 06e0
[   23.673107] DR0:  DR1:  DR2: 
[   23.673107] DR3:  DR6: 0ff0 DR7: 0400
[   23.673107] Process modprobe (pid: 4278, threadinfo
880018dc2000, task 8800190f1670)
[   23.673107] Stack:  880018dc3e38 a02db196
88000100b07f 
[   23.673107]   0010 

[   23.673107]   a02b25e0 a02e1c80
7f2f14f9a000
[   23.673107] Call Trace:
[   23.673107]  [] svm_hardware_enable+0xd6/0x140 [kvm_amd]
[   23.673107]  [] ? hardware_enable+0x0/0x40 [kvm]
[   23.673107]  [] kvm_arch_hardware_enable+0x13/0x20 [kvm]
[   23.673107]  [] hardware_enable+0x2f/0x40 [kvm]
[   23.673107]  [] on_each_cpu+0x34/0x50
[   23.673107]  [] kvm_init+0x165/0x280 [kvm]
[   23.673107]  [] ? svm_init+0x0/0x23 [kvm_amd]
[   23.673107]  [] svm_init+0x21/0x23 [kvm_amd]
[   23.673107]  [] do_one_initcall+0x41/0x170
[   23.673107]  [] ? __blocking_notifier_call_chain+0x21/0x90
[   23.673107]  [] sys_init_module+0xb5/0x1f0
[   23.673107]  [] system_call_fastpath+0x16/0x1b
[   23.673107]
[   23.673107]
[   23.673107] Code: 00 55 89 f9 48 89 e5 0f 32 31 c9 89 c7 48 89 d0
89 0e 48 c1 e0 20 89 fa 48 09 d0 c9 c3 0f 1f 40 00 55 89 f9 89 f0 48
89 e5 0f 30 <31> c0 c9 c3 66 90 55 89 f9 48 89 e5 0f 33 89 c1 48 89 d0
48 c1
[   23.673107] RIP  [] native_write_msr_safe+0xa/0x10
[   23.673107]  RSP 
[   23.673107] ---[ end trace 0dc989f1cf9a296e ]---

Maybe it is just that a newer version of KVM is needed?

I will also try to build a newer version of KVM and test.

Thanks for any comments.

Cheers,
Todd

-- 
Todd Deshane
http://todddeshane.net
http://runningxen.com