Re: [kvm-devel] kvm-48 configure error on opensuse 10.3

2007-10-23 Thread Dor Laor
pravin wrote:
 Hi
   I tried to build kvm-48 on openSUSE 10.3, but configure gives an
 error: no virtual graphics card gets selected because the SDL check fails.

 -
 Install prefix    /usr/local
 BIOS directory    /usr/local/share/qemu
 binary directory  /usr/local/bin
 Manual directory  /usr/local/share/man
 ELF interp prefix /usr/gnemul/qemu-%M
 Source path       /home/root/work/kvm-48/qemu
 C compiler        /usr/bin/gcc
 Host C compiler   gcc
 make              make
 install           install
 host CPU          x86_64
 host big endian   no
 target list       x86_64-softmmu
 gprof enabled     no
 profiler          no
 static build      no
 SDL support       no
 mingw32 support   no
 Adlib support     no
 CoreAudio support no
 ALSA support      no
 DSound support    no
 FMOD support      no
 OSS support       yes
 VNC TLS support   yes
 TLS CFLAGS
 TLS LIBS          -lgnutls
 kqemu support     no
 kvm support       yes
 Documentation     yes
 The error log from compiling the libSDL test is:
 /tmp/qemu-conf-16932-14880-11765.c:1:17: error: SDL.h: No such file or 
 directory
 /tmp/qemu-conf-16932-14880-11765.c: In function 'main':
 /tmp/qemu-conf-16932-14880-11765.c:3: error: 'SDL_INIT_VIDEO'
 undeclared (first use in this function)
 /tmp/qemu-conf-16932-14880-11765.c:3: error: (Each undeclared
 identifier is reported only once
 /tmp/qemu-conf-16932-14880-11765.c:3: error: for each function it appears in.)
 ERROR: QEMU requires SDL or Cocoa for graphical output
 To build QEMU without graphical output configure with --disable-gfx-check
 Note that this will disable all output from the virtual graphics card.

 -

 I am using the openSUSE 2.6.22.5-31-default kernel. Am I missing any
 required packages? Let me know if you guys need any other info.
   
Seems like you don't have the SDL-devel package. Either install it or use the
--disable-gfx-check flag.
Dor.




Re: [kvm-devel] Build error

2007-10-23 Thread Zhao, Yunfeng
Didn't see it in the kvm tree.
The latest commit is 20 hours ago.



I think Anthony has posted a patch to correct this; the subject is [kvm-devel]
[PATCH] Fix external module build.

Laurent

Zhao, Yunfeng wrote:
 I fail to build the latest tip.
 A .h file is missing.
 error: asm/kvm_para.h: No such file or directory

 -Original Message-
 From: root [mailto:[EMAIL PROTECTED]
 Sent: 2007-10-23 10:49
 Subject:

 make -C kernel
 make[1]: Entering directory



Re: [kvm-devel] [PATCH] Fix external module build

2007-10-23 Thread Avi Kivity
Anthony Liguori wrote:
 The recent paravirt refactoring broke external modules.  This patch fixes 
 that.

   

Thanks, applied.


-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




Re: [kvm-devel] Build error

2007-10-23 Thread Avi Kivity
Zhao, Yunfeng wrote:
 Didn't see it in the kvm tree.
 The latest commit is 20 hours ago.
   

It's now in.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




Re: [kvm-devel] [PATCH] call preempt_notifier_init early enough

2007-10-23 Thread Avi Kivity
Jan Kiszka wrote:
 As vmx_create_vcpu already makes use of start/end_special_insn, we need
 to initialise the emulated preempt_notifier earlier. Let's move it to
 kvm_vcpu_init. This should fix an oops I've seen here at least once
 during kvm startup - so far the problem has not shown up again.
   

An alternative fix has already been committed.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




Re: [kvm-devel] [PATCH] kvm: external module: backward compatibility for smp_call_function_mask()

2007-10-23 Thread Avi Kivity
Laurent Vivier wrote:
 Before kernel 2.6.24, smp_call_function_mask() is not defined for the x86_64
 architecture, nor for i386.

 This patch defines it in external-module-compat.h to emulate it for older
 kernels; it uses code from arch/x86/kernel/smp_64.c, modified to call
 smp_call_function_single() (as in previous versions of KVM) instead of
 send_IPI_mask().
   

Applied, thanks.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.
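As a rough illustration of the compat idea (not Laurent's actual patch, which adapts the IPI code from arch/x86/kernel/smp_64.c), one could emulate the call by looping over the mask with the older five-argument smp_call_function_single(); the wrapper name here is made up:

#include <linux/smp.h>
#include <linux/cpumask.h>

static inline int compat_smp_call_function_mask(cpumask_t mask,
						void (*func)(void *info),
						void *info, int wait)
{
	int cpu;

	/* Run func on every CPU in mask except the calling one; a real
	 * implementation must also deal with preemption and with running
	 * func on the local CPU if it is part of the mask. */
	for_each_cpu_mask(cpu, mask)
		if (cpu != raw_smp_processor_id())
			smp_call_function_single(cpu, func, info, 0, wait);
	return 0;
}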




Re: [kvm-devel] [BUG] 2.6.23.1 host freezes when running kvm

2007-10-23 Thread Bart Trojanowski
* Avi Kivity [EMAIL PROTECTED] [071022 09:42]:
 I'm not sure that's useful -- very little changed after 2.6.23-rc1 (10 
 patches).
 
 There were 92 kvm patches in 2.6.23, so a bisect should take about a 
 week worst case.

I'll get started tonight.

-Bart

-- 
WebSig: http://www.jukie.net/~bart/sig/



Re: [kvm-devel] High vm-exit latencies during kvm boot-up/shutdown

2007-10-23 Thread Avi Kivity
Jan Kiszka wrote:
 Avi,

 [somehow your mails do not get through to my private account, so I'm
 switching]

 Avi Kivity wrote:
   
 Jan Kiszka wrote:
 
 Clarification: I can't precisely tell what code is executed in VM mode,
 as I don't have qemu or that guest instrumented. I just see the kernel
 entering VM mode and leaving it again more than 300 us later. So I
 wonder why this is allowed while some external IRQ is pending.

   
   
 How do you know an external interrupt is pending?
 

 It's the host timer IRQ, programmed to fire in certain intervals (100 us
 here). Test case is some latency measurement tool like tglx's cyclictest
 or similar programs we use in Xenomai.

   
 kvm programs the hardware to exit when an external interrupt arrives.

 

 Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with 
 latest kvm from git hacked into (kvm generally seems to work fine this way):

 ...
 qemu-sys-7543  0...1 13897us : vmcs_write16+0xb/0x20 
 (vmx_save_host_state+0x1a7/0x1c0)
 qemu-sys-7543  0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
 qemu-sys-7543  0...1 13898us : segment_base+0xc/0x70 
 (vmx_save_host_state+0xa0/0x1c0)
 qemu-sys-7543  0...1 13898us : vmcs_writel+0xb/0x30 
 (vmx_save_host_state+0xb0/0x1c0)
 qemu-sys-7543  0...1 13898us : segment_base+0xc/0x70 
 (vmx_save_host_state+0xbf/0x1c0)
 qemu-sys-7543  0...1 13898us : vmcs_writel+0xb/0x30 
 (vmx_save_host_state+0xcf/0x1c0)
 qemu-sys-7543  0...1 13898us : load_msrs+0xb/0x40 
 (vmx_save_host_state+0xe7/0x1c0)
 qemu-sys-7543  0...1 13898us : kvm_load_guest_fpu+0x8/0x40 
 (kvm_vcpu_ioctl_run+0xbf/0x570)
 qemu-sys-7543  0D..1 13899us : vmx_vcpu_run+0xc/0x110 
 (kvm_vcpu_ioctl_run+0x120/0x570)
 qemu-sys-7543  0D..1 13899us!: vmcs_writel+0xb/0x30 (vmx_vcpu_run+0x22/0x110)
 qemu-sys-7543  0D..1 14344us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xc7/0x110)
 qemu-sys-7543  0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
 qemu-sys-7543  0D..1 14345us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xf4/0x110)
 qemu-sys-7543  0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
 qemu-sys-7543  0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
 qemu-sys-7543  0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
 qemu-sys-7543  0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
 qemu-sys-7543  0D.h1 14351us : __spin_lock+0xc/0x30 
 (handle_level_irq+0x24/0x120)
 qemu-sys-7543  0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 
 (handle_level_irq+0x37/0x120)
 qemu-sys-7543  0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 
 (mask_and_ack_8259A+0x2a/0x120)
 qemu-sys-7543  0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 
 (mask_and_ack_8259A+0x7a/0x120)
 qemu-sys-7543  0D.h2 14358us : redirect_hardirq+0x8/0x70 
 (handle_level_irq+0x72/0x120)
 qemu-sys-7543  0D.h2 14358us : __spin_unlock+0xb/0x40 
 (handle_level_irq+0x8e/0x120)
 qemu-sys-7543  0D.h1 14358us : handle_IRQ_event+0xe/0x110 
 (handle_level_irq+0x9a/0x120)
 qemu-sys-7543  0D.h1 14359us : timer_interrupt+0xb/0x60 
 (handle_IRQ_event+0x67/0x110)
 qemu-sys-7543  0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 
 (timer_interrupt+0x20/0x60)
 ...

 One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
 and this while cyclictest runs at a period of 100 us!

 I got the same results over Adeos/I-pipe & Xenomai with the function
 tracer there, also pointing to the period while the CPU is in VM mode.

 Anyone any ideas? Greg, I put you on CC as you said you once saw decent
 latencies with your patches. Are there still magic bits missing in
 official kvm?
   

No bits missing as far as I know.  It should just work.

Can you explain some more about the latency tracer?  How does it work? 
Seeing vmx_vcpu_run() in there confuses me, as it always runs with
interrupts disabled (it does dispatch NMIs, so we could be seeing an NMI).

Please post a disassembly of your vmx_vcpu_run so we can interpret the
offsets.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




Re: [kvm-devel] High vm-exit latencies during kvm boot-up/shutdown

2007-10-23 Thread Gregory Haskins
On Tue, 2007-10-23 at 16:19 +0200, Avi Kivity wrote:
 Jan Kiszka wrote:
  Avi,
 
  [somehow your mails do not get through to my private account, so I'm
  switching]
 
  Avi Kivity wrote:

  Jan Kiszka wrote:
  
  Clarification: I can't precisely tell what code is executed in VM mode,
  as I don't have qemu or that guest instrumented. I just see the kernel
  entering VM mode and leaving it again more than 300 us later. So I
  wonder why this is allowed while some external IRQ is pending.
 


  How do you know an external interrupt is pending?
  
 
  It's the host timer IRQ, programmed to fire in certain intervals (100 us
  here). Test case is some latency measurement tool like tglx's cyclictest
  or similar programs we use in Xenomai.
 

  kvm programs the hardware to exit when an external interrupt arrives.
 
  
 
  Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with 
  latest kvm from git hacked into (kvm generally seems to work fine this way):
 
  ...
  qemu-sys-7543  0...1 13897us : vmcs_write16+0xb/0x20 
  (vmx_save_host_state+0x1a7/0x1c0)
  qemu-sys-7543  0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
  qemu-sys-7543  0...1 13898us : segment_base+0xc/0x70 
  (vmx_save_host_state+0xa0/0x1c0)
  qemu-sys-7543  0...1 13898us : vmcs_writel+0xb/0x30 
  (vmx_save_host_state+0xb0/0x1c0)
  qemu-sys-7543  0...1 13898us : segment_base+0xc/0x70 
  (vmx_save_host_state+0xbf/0x1c0)
  qemu-sys-7543  0...1 13898us : vmcs_writel+0xb/0x30 
  (vmx_save_host_state+0xcf/0x1c0)
  qemu-sys-7543  0...1 13898us : load_msrs+0xb/0x40 
  (vmx_save_host_state+0xe7/0x1c0)
  qemu-sys-7543  0...1 13898us : kvm_load_guest_fpu+0x8/0x40 
  (kvm_vcpu_ioctl_run+0xbf/0x570)
  qemu-sys-7543  0D..1 13899us : vmx_vcpu_run+0xc/0x110 
  (kvm_vcpu_ioctl_run+0x120/0x570)
  qemu-sys-7543  0D..1 13899us!: vmcs_writel+0xb/0x30 
  (vmx_vcpu_run+0x22/0x110)
  qemu-sys-7543  0D..1 14344us : vmcs_read32+0xb/0x20 
  (vmx_vcpu_run+0xc7/0x110)
  qemu-sys-7543  0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
  qemu-sys-7543  0D..1 14345us : vmcs_read32+0xb/0x20 
  (vmx_vcpu_run+0xf4/0x110)
  qemu-sys-7543  0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
  qemu-sys-7543  0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
  qemu-sys-7543  0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
  qemu-sys-7543  0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
  qemu-sys-7543  0D.h1 14351us : __spin_lock+0xc/0x30 
  (handle_level_irq+0x24/0x120)
  qemu-sys-7543  0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 
  (handle_level_irq+0x37/0x120)
  qemu-sys-7543  0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 
  (mask_and_ack_8259A+0x2a/0x120)
  qemu-sys-7543  0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 
  (mask_and_ack_8259A+0x7a/0x120)
  qemu-sys-7543  0D.h2 14358us : redirect_hardirq+0x8/0x70 
  (handle_level_irq+0x72/0x120)
  qemu-sys-7543  0D.h2 14358us : __spin_unlock+0xb/0x40 
  (handle_level_irq+0x8e/0x120)
  qemu-sys-7543  0D.h1 14358us : handle_IRQ_event+0xe/0x110 
  (handle_level_irq+0x9a/0x120)
  qemu-sys-7543  0D.h1 14359us : timer_interrupt+0xb/0x60 
  (handle_IRQ_event+0x67/0x110)
  qemu-sys-7543  0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 
  (timer_interrupt+0x20/0x60)
  ...
 
  One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
  and this while cyclictest runs at a period of 100 us!
 
  I got the same results over Adeos/I-pipe & Xenomai with the function
  tracer there, also pointing to the period while the CPU is in VM mode.
 
  Anyone any ideas? Greg, I put you on CC as you said you once saw decent
  latencies with your patches. Are there still magic bits missing in
  official kvm?

 
 No bits missing as far as I know.  It should just work.

That could very well be the case these days.  I know back when I was
looking at it, KVM would not run on VMX + -rt without modification, or it
would crash/hang (this was around the time I was working on that
smp_function_call stuff).  And without careful modification it would run
very poorly, with high (300us+) latencies revealed in cyclictest.

However, I was able to craft the vmx_vcpu_run path so that a VM could
run side-by-side with cyclictest with sub 40us latencies.  In fact,
normally it was sub 30us, but on an occasional run I would get a spike
to ~37us.

Unfortunately I am deep into other non-KVM related -rt issues at the
moment, so I can't work on it any further for a bit.

Regards,
-Greg



Re: [kvm-devel] High vm-exit latencies during kvm boot-up/shutdown

2007-10-23 Thread Jan Kiszka
Gregory Haskins wrote:
 On Tue, 2007-10-23 at 16:19 +0200, Avi Kivity wrote:
 Jan Kiszka wrote:
 Avi,

 [somehow your mails do not get through to my private account, so I'm
 switching]

 Avi Kivity wrote:
   
 Jan Kiszka wrote:
 
 Clarification: I can't precisely tell what code is executed in VM mode,
 as I don't have qemu or that guest instrumented. I just see the kernel
 entering VM mode and leaving it again more than 300 us later. So I
 wonder why this is allowed while some external IRQ is pending.

   
   
 How do you know an external interrupt is pending?
 
 It's the host timer IRQ, programmed to fire in certain intervals (100 us
 here). Test case is some latency measurement tool like tglx's cyclictest
 or similar programs we use in Xenomai.

   
 kvm programs the hardware to exit when an external interrupt arrives.

 
 Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with 
 latest kvm from git hacked into (kvm generally seems to work fine this way):

 ...
 qemu-sys-7543  0...1 13897us : vmcs_write16+0xb/0x20 
 (vmx_save_host_state+0x1a7/0x1c0)
 qemu-sys-7543  0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
 qemu-sys-7543  0...1 13898us : segment_base+0xc/0x70 
 (vmx_save_host_state+0xa0/0x1c0)
 qemu-sys-7543  0...1 13898us : vmcs_writel+0xb/0x30 
 (vmx_save_host_state+0xb0/0x1c0)
 qemu-sys-7543  0...1 13898us : segment_base+0xc/0x70 
 (vmx_save_host_state+0xbf/0x1c0)
 qemu-sys-7543  0...1 13898us : vmcs_writel+0xb/0x30 
 (vmx_save_host_state+0xcf/0x1c0)
 qemu-sys-7543  0...1 13898us : load_msrs+0xb/0x40 
 (vmx_save_host_state+0xe7/0x1c0)
 qemu-sys-7543  0...1 13898us : kvm_load_guest_fpu+0x8/0x40 
 (kvm_vcpu_ioctl_run+0xbf/0x570)
 qemu-sys-7543  0D..1 13899us : vmx_vcpu_run+0xc/0x110 
 (kvm_vcpu_ioctl_run+0x120/0x570)
 qemu-sys-7543  0D..1 13899us!: vmcs_writel+0xb/0x30 
 (vmx_vcpu_run+0x22/0x110)
 qemu-sys-7543  0D..1 14344us : vmcs_read32+0xb/0x20 
 (vmx_vcpu_run+0xc7/0x110)
 qemu-sys-7543  0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
 qemu-sys-7543  0D..1 14345us : vmcs_read32+0xb/0x20 
 (vmx_vcpu_run+0xf4/0x110)
 qemu-sys-7543  0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
 qemu-sys-7543  0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
 qemu-sys-7543  0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
 qemu-sys-7543  0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
 qemu-sys-7543  0D.h1 14351us : __spin_lock+0xc/0x30 
 (handle_level_irq+0x24/0x120)
 qemu-sys-7543  0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 
 (handle_level_irq+0x37/0x120)
 qemu-sys-7543  0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 
 (mask_and_ack_8259A+0x2a/0x120)
 qemu-sys-7543  0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 
 (mask_and_ack_8259A+0x7a/0x120)
 qemu-sys-7543  0D.h2 14358us : redirect_hardirq+0x8/0x70 
 (handle_level_irq+0x72/0x120)
 qemu-sys-7543  0D.h2 14358us : __spin_unlock+0xb/0x40 
 (handle_level_irq+0x8e/0x120)
 qemu-sys-7543  0D.h1 14358us : handle_IRQ_event+0xe/0x110 
 (handle_level_irq+0x9a/0x120)
 qemu-sys-7543  0D.h1 14359us : timer_interrupt+0xb/0x60 
 (handle_IRQ_event+0x67/0x110)
 qemu-sys-7543  0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 
 (timer_interrupt+0x20/0x60)
 ...

 One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
 and this while cyclictest runs at a period of 100 us!

 I got the same results over Adeos/I-pipe & Xenomai with the function
 tracer there, also pointing to the period while the CPU is in VM mode.

 Anyone any ideas? Greg, I put you on CC as you said you once saw decent
 latencies with your patches. Are there still magic bits missing in
 official kvm?
   
 No bits missing as far as I know.  It should just work.
 
  That could very well be the case these days.  I know back when I was
  looking at it, KVM would not run on VMX + -rt without modification, or it
  would crash/hang (this was around the time I was working on that
  smp_function_call stuff).  And without careful modification it would run
  very poorly, with high (300us+) latencies revealed in cyclictest.
 
 However, I was able to craft the vmx_vcpu_run path so that a VM could
 run side-by-side with cyclictest with sub 40us latencies.  In fact,
 normally it was sub 30us, but on an occasional run I would get a spike
 to ~37us.
 
 Unfortunately I am deep into other non-KVM related -rt issues at the
 moment, so I can't work on it any further for a bit.

Do you have some patch fragments left over? At least /me would be
interested in studying and maybe forward-porting them. Or can you briefly
explain the issue above and/or the general problem behind this delay?

Thanks,
Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux


[kvm-devel] [ kvm-Bugs-1818600 ] Windows 2003 setup fails

2007-10-23 Thread SourceForge.net
Bugs item #1818600, was opened at 2007-10-23 16:51
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=1818600&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Technologov (technologov)
Assigned to: Nobody/Anonymous (nobody)
Summary: Windows 2003 setup fails

Initial Comment:
Host: Fedora7, 64-bit, Intel CPU, KVM-48.

Windows 2003 setup crashes with a BSOD. This happens with any combination of 
options: -no-acpi, -no-kvm-irqchip, default, ...

Command:
./qemu-kvm -hda /isos/disks-vm/alexeye/Windows2003-test.vmdk -cdrom 
/isos/windows/Windows2003_r2_enterprise_cd1.iso -m 384 -boot d  -no-acpi 
-no-kvm-irqchip  

This bug seems to be old: it happened, perhaps, since KVM-37.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=1818600&group_id=180599



Re: [kvm-devel] High vm-exit latencies during kvm boot-up/shutdown

2007-10-23 Thread Avi Kivity
Jan Kiszka wrote:
 Avi Kivity wrote:
   
 Jan Kiszka wrote:
 
 Avi,

 [somehow your mails do not get through to my private account, so I'm
 switching]

 Avi Kivity wrote:
   
   
 Jan Kiszka wrote:
 
 
 Clarification: I can't precisely tell what code is executed in VM mode,
 as I don't have qemu or that guest instrumented. I just see the kernel
 entering VM mode and leaving it again more than 300 us later. So I
 wonder why this is allowed while some external IRQ is pending.

   
   
   
 How do you know an external interrupt is pending?
 
 
 It's the host timer IRQ, programmed to fire in certain intervals (100 us
 here). Test case is some latency measurement tool like tglx's cyclictest
 or similar programs we use in Xenomai.

   
   
 kvm programs the hardware to exit when an external interrupt arrives.

 
 
 Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with 
 latest kvm from git hacked into (kvm generally seems to work fine this way):

 ...
 qemu-sys-7543  0...1 13897us : vmcs_write16+0xb/0x20 
 (vmx_save_host_state+0x1a7/0x1c0)
 qemu-sys-7543  0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
 qemu-sys-7543  0...1 13898us : segment_base+0xc/0x70 
 (vmx_save_host_state+0xa0/0x1c0)
 qemu-sys-7543  0...1 13898us : vmcs_writel+0xb/0x30 
 (vmx_save_host_state+0xb0/0x1c0)
 qemu-sys-7543  0...1 13898us : segment_base+0xc/0x70 
 (vmx_save_host_state+0xbf/0x1c0)
 qemu-sys-7543  0...1 13898us : vmcs_writel+0xb/0x30 
 (vmx_save_host_state+0xcf/0x1c0)
 qemu-sys-7543  0...1 13898us : load_msrs+0xb/0x40 
 (vmx_save_host_state+0xe7/0x1c0)
 qemu-sys-7543  0...1 13898us : kvm_load_guest_fpu+0x8/0x40 
 (kvm_vcpu_ioctl_run+0xbf/0x570)
 qemu-sys-7543  0D..1 13899us : vmx_vcpu_run+0xc/0x110 
 (kvm_vcpu_ioctl_run+0x120/0x570)
 qemu-sys-7543  0D..1 13899us!: vmcs_writel+0xb/0x30 
 (vmx_vcpu_run+0x22/0x110)
 qemu-sys-7543  0D..1 14344us : vmcs_read32+0xb/0x20 
 (vmx_vcpu_run+0xc7/0x110)
 qemu-sys-7543  0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
 qemu-sys-7543  0D..1 14345us : vmcs_read32+0xb/0x20 
 (vmx_vcpu_run+0xf4/0x110)
 qemu-sys-7543  0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
 qemu-sys-7543  0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
 qemu-sys-7543  0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
 qemu-sys-7543  0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
 qemu-sys-7543  0D.h1 14351us : __spin_lock+0xc/0x30 
 (handle_level_irq+0x24/0x120)
 qemu-sys-7543  0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 
 (handle_level_irq+0x37/0x120)
 qemu-sys-7543  0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 
 (mask_and_ack_8259A+0x2a/0x120)
 qemu-sys-7543  0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 
 (mask_and_ack_8259A+0x7a/0x120)
 qemu-sys-7543  0D.h2 14358us : redirect_hardirq+0x8/0x70 
 (handle_level_irq+0x72/0x120)
 qemu-sys-7543  0D.h2 14358us : __spin_unlock+0xb/0x40 
 (handle_level_irq+0x8e/0x120)
 qemu-sys-7543  0D.h1 14358us : handle_IRQ_event+0xe/0x110 
 (handle_level_irq+0x9a/0x120)
 qemu-sys-7543  0D.h1 14359us : timer_interrupt+0xb/0x60 
 (handle_IRQ_event+0x67/0x110)
 qemu-sys-7543  0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 
 (timer_interrupt+0x20/0x60)
 ...

 One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
 and this while cyclictest runs at a period of 100 us!

 I got the same results over Adeos/I-pipe & Xenomai with the function
 tracer there, also pointing to the period while the CPU is in VM mode.

 Anyone any ideas? Greg, I put you on CC as you said you once saw decent
 latencies with your patches. Are there still magic bits missing in
 official kvm?
   
   
 No bits missing as far as I know.  It should just work.

 Can you explain some more about the latency tracer?  How does it work? 
 

 Ah, sorry: The latency tracers in both -rt and I-pipe use gcc's -pg to
 put a call to a function called mcount at the beginning of each compiled
 function. mcount is provided by the tracers and stores the caller
 address, its parent, the current time, and more in a log. An API is
 provided to start and stop the trace, e.g. after someone (kernel or user
 space) detected large wakeup latencies.

   

Ok.  So it is not interrupt driven, and that's how you get traces in
functions that run with interrupts disabled.
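
To make the mechanism concrete, here is a minimal userspace-style sketch of the idea (illustrative only, not the -rt or I-pipe implementation): it assumes the file providing mcount() is itself built without -pg, so the hook does not recurse, and that frame pointers are kept so __builtin_return_address(1) is meaningful.

#include <time.h>

#define MAX_TRACE 4096

static struct {
	void *ip;		/* entry of the traced (-pg compiled) function */
	void *parent;		/* address the function was called from        */
	struct timespec when;	/* timestamp taken on entry                    */
} trace_log[MAX_TRACE];

static volatile int trace_on = 1;
static int trace_len;

/* gcc -pg emits a call to mcount() at the start of every compiled function */
void mcount(void)
{
	if (!trace_on || trace_len >= MAX_TRACE)
		return;
	trace_log[trace_len].ip = __builtin_return_address(0);
	trace_log[trace_len].parent = __builtin_return_address(1);
	clock_gettime(CLOCK_MONOTONIC, &trace_log[trace_len].when);
	trace_len++;
}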

 Seeing vmx_vcpu_run() in there confuses me, as it always runs with
 interrupts disabled (it does dispatch NMIs, so we could be seeing an NMI).
 

 The point is that cyclictest does not find large latencies except when kvm
 happens to be starting or stopping right then. And if you are thinking
 about NMIs triggered by the kvm code on vm-exit: I also instrumented
 that code path, and it is not taken in the case of the long delay.

   

Right.  With your explanation it all makes sense, and indeed it looks
like the guest is not exiting.

 Please post a disassembly of your vmx_vcpu_run so we can interpret the
 offsets.
 

 

Re: [kvm-devel] High vm-exit latencies during kvm boot-up/shutdown

2007-10-23 Thread Jan Kiszka
Avi Kivity wrote:
 Jan Kiszka wrote:
 Avi Kivity wrote:
 Please post a disassembly of your vmx_vcpu_run so we can interpret the
 offsets.
 
 Here it comes:

 2df0 <vmx_vcpu_run>:
 2df0:   55                      push   %ebp
 2df1:   89 e5                   mov    %esp,%ebp
 2df3:   53                      push   %ebx
 2df4:   83 ec 08                sub    $0x8,%esp
 2df7:   e8 fc ff ff ff          call   2df8 <vmx_vcpu_run+0x8>
 2dfc:   8b 5d 08                mov    0x8(%ebp),%ebx
 2dff:   0f 20 c0                mov    %cr0,%eax
 2e02:   89 44 24 04             mov    %eax,0x4(%esp)
 2e06:   c7 04 24 00 6c 00 00    movl   $0x6c00,(%esp)
 2e0d:   e8 be d8 ff ff          call   6d0 <vmcs_writel>
                                 <--- first trace
 2e12:   8b 83 80 0d 00 00       mov    0xd80(%ebx),%eax
 2e18:   ba 14 6c 00 00          mov    $0x6c14,%edx
 2e1d:   89 d9                   mov    %ebx,%ecx
 2e1f:   60                      pusha
 2e20:   51                      push   %ecx
 2e21:   0f 79 d4                vmwrite %esp,%edx
 2e24:   83 f8 00                cmp    $0x0,%eax
 2e27:   8b 81 78 01 00 00       mov    0x178(%ecx),%eax
 2e2d:   0f 22 d0                mov    %eax,%cr2
 2e30:   8b 81 50 01 00 00       mov    0x150(%ecx),%eax
 2e36:   8b 99 5c 01 00 00       mov    0x15c(%ecx),%ebx
 2e3c:   8b 91 58 01 00 00       mov    0x158(%ecx),%edx
 2e42:   8b b1 68 01 00 00       mov    0x168(%ecx),%esi
 2e48:   8b b9 6c 01 00 00       mov    0x16c(%ecx),%edi
 2e4e:   8b a9 64 01 00 00       mov    0x164(%ecx),%ebp
 2e54:   8b 89 54 01 00 00       mov    0x154(%ecx),%ecx
 2e5a:   75 05                   jne    2e61 <vmx_vcpu_run+0x71>
 2e5c:   0f 01 c2                vmlaunch
 2e5f:   eb 03                   jmp    2e64 <vmx_vcpu_run+0x74>
 2e61:   0f 01 c3                vmresume
 2e64:   87 0c 24                xchg   %ecx,(%esp)
 2e67:   89 81 50 01 00 00       mov    %eax,0x150(%ecx)
 2e6d:   89 99 5c 01 00 00       mov    %ebx,0x15c(%ecx)
 2e73:   ff 34 24                pushl  (%esp)
 2e76:   8f 81 54 01 00 00       popl   0x154(%ecx)
 2e7c:   89 91 58 01 00 00       mov    %edx,0x158(%ecx)
 2e82:   89 b1 68 01 00 00       mov    %esi,0x168(%ecx)
 2e88:   89 b9 6c 01 00 00       mov    %edi,0x16c(%ecx)
 2e8e:   89 a9 64 01 00 00       mov    %ebp,0x164(%ecx)
 2e94:   0f 20 d0                mov    %cr2,%eax
 2e97:   89 81 78 01 00 00       mov    %eax,0x178(%ecx)
 2e9d:   8b 0c 24                mov    (%esp),%ecx
 2ea0:   59                      pop    %ecx
 2ea1:   61                      popa
 2ea2:   0f 96 c0                setbe  %al
 2ea5:   88 83 84 0d 00 00       mov    %al,0xd84(%ebx)
 2eab:   c7 04 24 24 48 00 00    movl   $0x4824,(%esp)
 2eb2:   e8 49 d2 ff ff          call   100 <vmcs_read32>
                                 <--- second trace
 2eb7:   a8 03                   test   $0x3,%al
 2eb9:   0f 94 c0                sete   %al
 2ebc:   0f b6 c0                movzbl %al,%eax
 2ebf:   89 83 28 01 00 00       mov    %eax,0x128(%ebx)
 2ec5:   b8 7b 00 00 00          mov    $0x7b,%eax
 2eca:   8e d8                   mov    %eax,%ds
 2ecc:   8e c0                   mov    %eax,%es
 2ece:   c7 83 80 0d 00 00 01    movl   $0x1,0xd80(%ebx)
 2ed5:   00 00 00
 2ed8:   c7 04 24 04 44 00 00    movl   $0x4404,(%esp)
 2edf:   e8 1c d2 ff ff          call   100 <vmcs_read32>
 2ee4:   25 00 07 00 00          and    $0x700,%eax
 2ee9:   3d 00 02 00 00          cmp    $0x200,%eax
 2eee:   75 02                   jne    2ef2 <vmx_vcpu_run+0x102>
 2ef0:   cd 02                   int    $0x2
 2ef2:   83 c4 08                add    $0x8,%esp
 2ef5:   5b                      pop    %ebx
 2ef6:   5d                      pop    %ebp
 2ef7:   c3                      ret
 2ef8:   90                      nop
 2ef9:   8d b4 26 00 00 00 00    lea    0x0(%esi),%esi

 Note that the first, unresolved call here goes to mcount().

   
 
 (the -r option to objdump is handy)

Great, one never stops learning. :)

 
 Exiting on a pending interrupt is controlled by the vmcs word
 PIN_BASED_EXEC_CONTROL, bit PIN_BASED_EXT_INTR_MASK.  Can you check (via
 vmcs_read32()) that the bit is indeed set?
 
 [if not, a guest can just enter a busy loop and kill a processor]
 

I traced it right before and after the asm block, and in all cases
(including those with low latency exits) it's just 0x1f, which should be
fine.
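
As a reference for that check, a sketch along the lines Avi suggests could look like this (it assumes kvm's vmcs_read32() and the PIN_BASED_* constants from drivers/kvm/vmx.h; the field name PIN_BASED_VM_EXEC_CONTROL is the spelling used in that header):

static void check_ext_intr_exiting(void)
{
	u32 pin_based = vmcs_read32(PIN_BASED_VM_EXEC_CONTROL);

	/* Bit 0: exit from guest mode on external interrupts */
	if (!(pin_based & PIN_BASED_EXT_INTR_MASK))
		printk(KERN_ERR "kvm: external-interrupt exiting disabled, "
		       "pin-based controls = 0x%x\n", pin_based);
}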

Earlier, I also checked GUEST_INTERRUPTIBILITY_INFO and
GUEST_ACTIVITY_STATE, but found neither some suspicious state nor a

Re: [kvm-devel] [kvm-ppc-devel] [PATCH 1/2] [v1] consolidate i386 & x86-64 user dir make rules

2007-10-23 Thread Hollis Blanchard
On Tue, 2007-10-23 at 11:06 -0500, Jerone Young wrote:
 
   PATCH 2 moves all the rest of the flatfile rules to
   config-x86-common.mak. So nothing gets clobbered.
  
  But THIS patch clobbers it.
 
 Oh, it does not clobber flatfiles; it uses it. So in
 config-x86-common.mak there is a rule:
 
 flatfiles: $(flatfiles-common) $(flatfiles)
 
 Actually, I thought the same too. But this is actually in the original
 makefile; it doesn't clobber it, apparently. But for safety I'll change
 the name of this rule, as it shouldn't have the same name as the variable.

My mistake again. However, it's a little confusing that there is a
variable *and* a target named flatfiles...

-- 
Hollis Blanchard
IBM Linux Technology Center




[kvm-devel] [PATCH] Enable memory mapped TPR shadow(FlexPriority)

2007-10-23 Thread Yang, Sheng
From ac4dd1782b9f0f51e0c366a1b8db4515d6828df8 Mon Sep 17 00:00:00 2001
From: Sheng Yang [EMAIL PROTECTED]
Date: Tue, 23 Oct 2007 12:34:42 +0800
Subject: [PATCH] Enable memory mapped TPR shadow(FlexPriority)

This patch is based on the earlier CR8/TPR patch and enables the TPR
shadow (FlexPriority) for 32-bit Windows. Since the TPR is accessed
very frequently by 32-bit Windows, especially in SMP guests, we saw a
significant performance gain with FlexPriority enabled.

BTW: the patch also uses one memslot to get a determined p2m
relationship. But it's not elegant, which can be improved in the future.

Signed-off-by: Sheng Yang [EMAIL PROTECTED]
---
 drivers/kvm/kvm.h         |    8 +++-
 drivers/kvm/kvm_main.c    |   35 +++
 drivers/kvm/vmx.c         |  105 +---
 drivers/kvm/vmx.h         |    3 +
 drivers/kvm/x86_emulate.c |   11 +
 drivers/kvm/x86_emulate.h |    4 ++
 6 files changed, 147 insertions(+), 19 deletions(-)

diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
index 08b5b21..0751f8e 100644
--- a/drivers/kvm/kvm.h
+++ b/drivers/kvm/kvm.h
@@ -379,6 +379,7 @@ struct kvm {
struct kvm_pic *vpic;
struct kvm_ioapic *vioapic;
int round_robin_prev_vcpu;
+   struct page *apic_access_page;
 };
 
 static inline struct kvm_pic *pic_irqchip(struct kvm *kvm)
@@ -503,6 +504,11 @@ void kvm_mmu_slot_remove_write_access(struct kvm
*kvm, int slot);
 void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int
kvm_nr_mmu_pages);
 
+int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
+ struct
+ kvm_userspace_memory_region *mem,
+ int user_alloc);
+
 hpa_t gpa_to_hpa(struct kvm *kvm, gpa_t gpa);
 #define HPA_MSB ((sizeof(hpa_t) * 8) - 1)
 #define HPA_ERR_MASK ((hpa_t)1 << HPA_MSB)
@@ -535,7 +541,7 @@ enum emulation_result {
 };
 
 int emulate_instruction(struct kvm_vcpu *vcpu, struct kvm_run *run,
-   unsigned long cr2, u16 error_code, int
no_decode);
+   unsigned long cr2, u16 error_code, int
cmd_type);
 void kvm_report_emulation_failure(struct kvm_vcpu *cvpu, const char
*context);
 void realmode_lgdt(struct kvm_vcpu *vcpu, u16 size, unsigned long
address);
 void realmode_lidt(struct kvm_vcpu *vcpu, u16 size, unsigned long
address);
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index 6f7b31e..afcd84b 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -643,10 +643,10 @@ EXPORT_SYMBOL_GPL(fx_init);
  *
  * Discontiguous memory is allowed, mostly for framebuffers.
  */
-static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
- struct
- kvm_userspace_memory_region
*mem,
- int user_alloc)
+int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
+ struct
+ kvm_userspace_memory_region *mem,
+ int user_alloc)
 {
int r;
gfn_t base_gfn;
@@ -776,6 +776,7 @@ out_unlock:
 out:
return r;
 }
+EXPORT_SYMBOL_GPL(kvm_vm_ioctl_set_memory_region);
 
 static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm *kvm,
  u32 kvm_nr_mmu_pages)
@@ -1252,14 +1253,21 @@ static int emulator_read_emulated(unsigned long
addr,
	memcpy(val, vcpu->mmio_data, bytes);
	vcpu->mmio_read_completed = 0;
return X86EMUL_CONTINUE;
-   } else if (emulator_read_std(addr, val, bytes, vcpu)
-  == X86EMUL_CONTINUE)
-   return X86EMUL_CONTINUE;
+   }
 
	gpa = vcpu->mmu.gva_to_gpa(vcpu, addr);
+
+	/* For APIC access vmexit */
+	if ((gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
+   goto mmio;
+
+   if (emulator_read_std(addr, val, bytes, vcpu)
+   == X86EMUL_CONTINUE)
+   return X86EMUL_CONTINUE;
if (gpa == UNMAPPED_GVA)
return X86EMUL_PROPAGATE_FAULT;
 
+mmio:
/*
 * Is this MMIO handled locally?
 */
@@ -1297,6 +1305,10 @@ static int
emulator_write_emulated_onepage(unsigned long addr,
struct kvm_io_device *mmio_dev;
	gpa_t gpa = vcpu->mmu.gva_to_gpa(vcpu, addr);

+	/* For APIC access vmexit */
+	if ((gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
+		goto mmio;
+
	if (gpa == UNMAPPED_GVA) {
		kvm_x86_ops->inject_page_fault(vcpu, addr, 2);
return X86EMUL_PROPAGATE_FAULT;
@@ -1305,6 +1317,7 @@ static int
emulator_write_emulated_onepage(unsigned long addr,
if (emulator_write_phys(vcpu, gpa, val, bytes))
return X86EMUL_CONTINUE;
 
+mmio:
/*
 * Is this MMIO handled locally?
 */
@@ -1435,7 +1448,7 @@ int 

Re: [kvm-devel] [kvm-ppc-devel] [PATCH 1 of 2] [mq]: x86_user_make_changes

2007-10-23 Thread Jerone Young
OK, I can resend this anyway. I forgot to consolidate the kvmctl stuff.

On Tue, 2007-10-23 at 16:10 -0500, Hollis Blanchard wrote:
 On Tue, 2007-10-23 at 15:41 -0500, Jerone Young wrote:
  
  +flatfiles_tests-common = test/bootstrap test/vmexit.flat test/smp.flat
 
 Instead of flatfiles_tests-common (getting a little long, no?), I
 would just call them tests and tests-common or tests-x86. We're
 going to have non-flat test binaries in the near future anyways...
 




Re: [kvm-devel] [PATCH] Enable memory mapped TPR shadow(FlexPriority)

2007-10-23 Thread Yang, Sheng
Another comment: I forgot whether I answered the question on why eip should
move backward. I did it because some instructions, like mov, will advance eip
to skip some dst/src operands when executing, so eip should be kept for
consistency.

Yang, Sheng wrote:
 From ac4dd1782b9f0f51e0c366a1b8db4515d6828df8 Mon Sep 17 00:00:00 2001
 From: Sheng Yang [EMAIL PROTECTED]
 Date: Tue, 23 Oct 2007 12:34:42 +0800
 Subject: [PATCH] Enable memory mapped TPR shadow(FlexPriority)
 
 This patch based on CR8/TPR patch before, and enable the TPR
 shadow(FlexPriority) for 32bit Windows. Since TPR is accessed
 very frequently by 32bit Windows, especially SMP guest, with
 FlexPriority enabled, we saw significant performance gain.
 
 BTW: The patch also using one memslot to get determined p2m
relationship.
 But it's 
 not elegant, which can be improved in the future.
 
 Signed-off-by: Sheng Yang [EMAIL PROTECTED]
 ---
  drivers/kvm/kvm.h         |    8 +++-
  drivers/kvm/kvm_main.c    |   35 +++
  drivers/kvm/vmx.c         |  105 +---
  drivers/kvm/vmx.h         |    3 +
  drivers/kvm/x86_emulate.c |   11 +
  drivers/kvm/x86_emulate.h |    4 ++
  6 files changed, 147 insertions(+), 19 deletions(-)
 
 diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
 index 08b5b21..0751f8e 100644
 --- a/drivers/kvm/kvm.h
 +++ b/drivers/kvm/kvm.h
 @@ -379,6 +379,7 @@ struct kvm {
   struct kvm_pic *vpic;
   struct kvm_ioapic *vioapic;
   int round_robin_prev_vcpu;
 + struct page *apic_access_page;
  };
 
  static inline struct kvm_pic *pic_irqchip(struct kvm *kvm)
 @@ -503,6 +504,11 @@ void kvm_mmu_slot_remove_write_access(struct kvm
*kvm,
  int slot); void kvm_mmu_zap_all(struct kvm *kvm);
  void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int
 kvm_nr_mmu_pages); 
 
 +int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 +   struct
 +   kvm_userspace_memory_region *mem,
 +   int user_alloc);
 +
  hpa_t gpa_to_hpa(struct kvm *kvm, gpa_t gpa);
  #define HPA_MSB ((sizeof(hpa_t) * 8) - 1)
  #define HPA_ERR_MASK ((hpa_t)1 << HPA_MSB)
 @@ -535,7 +541,7 @@ enum emulation_result {
  };
 
  int emulate_instruction(struct kvm_vcpu *vcpu, struct kvm_run *run,
 - unsigned long cr2, u16 error_code, int
no_decode);
 + unsigned long cr2, u16 error_code, int
cmd_type);
  void kvm_report_emulation_failure(struct kvm_vcpu *cvpu, const char
  *context); void realmode_lgdt(struct kvm_vcpu *vcpu, u16 size,
unsigned
  long address); void realmode_lidt(struct kvm_vcpu *vcpu, u16 size,
 unsigned long address); 
 diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
 index 6f7b31e..afcd84b 100644
 --- a/drivers/kvm/kvm_main.c
 +++ b/drivers/kvm/kvm_main.c
 @@ -643,10 +643,10 @@ EXPORT_SYMBOL_GPL(fx_init);
   *
   * Discontiguous memory is allowed, mostly for framebuffers.
   */
 -static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 -   struct
 -   kvm_userspace_memory_region
*mem,
 -   int user_alloc)
 +int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 +   struct
 +   kvm_userspace_memory_region *mem,
 +   int user_alloc)
  {
   int r;
   gfn_t base_gfn;
 @@ -776,6 +776,7 @@ out_unlock:
  out:
   return r;
  }
 +EXPORT_SYMBOL_GPL(kvm_vm_ioctl_set_memory_region);
 
  static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm *kvm,
 u32 kvm_nr_mmu_pages)
 @@ -1252,14 +1253,21 @@ static int emulator_read_emulated(unsigned
long
   addr, memcpy(val, vcpu->mmio_data, bytes);
   vcpu->mmio_read_completed = 0;
   return X86EMUL_CONTINUE;
 - } else if (emulator_read_std(addr, val, bytes, vcpu)
 -== X86EMUL_CONTINUE)
 - return X86EMUL_CONTINUE;
 + }
 
   gpa = vcpu->mmu.gva_to_gpa(vcpu, addr);
  +
  + /* For APIC access vmexit */
  + if ((gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
 + goto mmio;
 +
 + if (emulator_read_std(addr, val, bytes, vcpu)
 + == X86EMUL_CONTINUE)
 + return X86EMUL_CONTINUE;
   if (gpa == UNMAPPED_GVA)
   return X86EMUL_PROPAGATE_FAULT;
 
 +mmio:
   /*
* Is this MMIO handled locally?
*/
 @@ -1297,6 +1305,10 @@ static int
emulator_write_emulated_onepage(unsigned
   long addr, struct kvm_io_device *mmio_dev;
   gpa_t gpa = vcpu->mmu.gva_to_gpa(vcpu, addr);
  
  + /* For APIC access vmexit */
  + if ((gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
  + goto mmio;
  +
   if (gpa == UNMAPPED_GVA) {
   kvm_x86_ops->inject_page_fault(vcpu, addr, 2);
   return X86EMUL_PROPAGATE_FAULT;
 @@ -1305,6 +1317,7 @@ static int

[kvm-devel] [PATCH 0 of 2] [v2] Another attempt to consolidate x86 makefiles & tests

2007-10-23 Thread jyoung5
This is actually the 3rd attempt.

These patches consolidate the makefiles & tests in the user directory for x86. This
is to allow other architectures easier integration into the kvm source.

Signed-off-by: Jerone Young [EMAIL PROTECTED]
