-----Original Message-----
From: David Miller [mailto:da...@davemloft.net]
Sent: Thursday, November 11, 2010 1:47 AM
To: Xin, Xiaohui
Cc: net...@vger.kernel.org; kvm@vger.kernel.org; linux-ker...@vger.kernel.org;
m...@redhat.com; mi...@elte.hu; herb...@gondor.apana.org.au;
jd...@linux.intel.com
On Thursday, November 11, 2010 02:31:06 am Avi Kivity wrote:
Daniel, the buildbot has been fairly effective in keeping qemu-kvm.git
building. I'd like to extend that to kvm.git, especially for non-x86
architectures.
[...]
Can you help with this?
Sure. I'll look into that next week.
Best
On 11/11/2010 02:56 AM, Huang Ying wrote:
On Thu, 2010-11-11 at 00:49 +0800, Anthony Liguori wrote:
On 11/10/2010 02:34 AM, Avi Kivity wrote:
Why is the gpa -> hva mapping not
consistent for RAM if -mem-path is not used?
Video RAM in the range a-b and PCI mapped RAM can
On 11/10/10 18:34, Michael S. Tsirkin wrote:
On Wed, Nov 10, 2010 at 07:14:15PM +0200, Gleb Natapov wrote:
Signed-off-by: Gleb Natapov <g...@redhat.com>
Good stuff. We should also consider using this for
CLI and monitor. Some comments below.
Oh, we already have a table to map pci classes to
Hi,
register_ioport_write (s->port, 1, 1, gus_writeb, s);
register_ioport_write (s->port, 1, 2, gus_writew, s);
+isa_init_ioport_range(dev, s->port, 2);
register_ioport_read ((s->port + 0x100) & 0xf00, 1, 1, gus_readb, s);
register_ioport_read ((s->port + 0x100) & 0xf00,
On Thu, Nov 11, 2010 at 11:07:01AM +0100, Gerd Hoffmann wrote:
On 11/10/10 18:34, Michael S. Tsirkin wrote:
On Wed, Nov 10, 2010 at 07:14:15PM +0200, Gleb Natapov wrote:
Signed-off-by: Gleb Natapov <g...@redhat.com>
Good stuff. We should also consider using this for
CLI and monitor. Some
We now use load_gs_index() to load gs safely; unfortunately this also
changes MSR_KERNEL_GS_BASE, which we managed separately. This resulted
in confusion and breakage running 32-bit host userspace on a 64-bit kernel.
Fix by
- saving guest MSR_KERNEL_GS_BASE before we reload the host's gs
-
Oh, we already have a table to map pci classes to descriptions for
'info pci'. I'd strongly suggest to just add the fw names to that
table instead of creating a second one ...
Do you mean pci_class_descriptions?
Exactly.
For some classes the Open Firmware spec
defines a single name for all
In order to track a regression that happens using kvmclock
in guests, add a test of the clock_getres() syscall, to see
what clocksource resolution is reported by the guest.
This, combined with variants that set different -cpu params
for the qemu command line, will give us the desired outcome.
From: Prasad Joshi
Sent: 10 November 2010 13:01
To: Stefan Hajnoczi
Cc: Keqin Hong; kvm@vger.kernel.org
Subject: RE: Unable to start VM using COWed image
From: Stefan Hajnoczi [stefa...@gmail.com]
Sent: 10 November 2010 12:47
To: Prasad Joshi
Cc: Keqin Hong; kvm@vger.kernel.org
Subject:
On Thu, Nov 11, 2010 at 12:17 PM, Prasad Joshi
p.g.jo...@student.reading.ac.uk wrote:
Though specifying the absolute path for source image worked for me.
Can anyone please let me know of a situation in which one would not want to
specify the absolute path?
How does relative path help?
This is a rewrite of the virtio-ioeventfd patchset to work at the virtio-pci.c
level instead of virtio.c. This results in better integration with the
host/guest notifier code and makes the code simpler (no more state machine).
Virtqueue notify is currently handled synchronously in userspace
Virtqueue notify is currently handled synchronously in userspace virtio. This
prevents the vcpu from executing guest code while hardware emulation code
handles the notify.
On systems that support KVM, the ioeventfd mechanism can be used to make
virtqueue notify a lightweight exit by deferring
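The kernel-side ioeventfd plumbing is not shown in this thread; the sketch below only models the eventfd primitive it builds on (assuming Linux eventfd(2)): the notifying side bumps a counter cheaply, and the emulation side drains it later, collapsing several notifies into one wakeup.

```c
#include <assert.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* "Guest" side of the model: bump the eventfd counter.  In the real
 * ioeventfd path this is what the kernel does on a virtqueue notify
 * instead of exiting to userspace synchronously. */
int notify(int efd)
{
    uint64_t one = 1;

    return write(efd, &one, sizeof(one)) == sizeof(one) ? 0 : -1;
}

/* "Emulation" side: read() returns the accumulated counter and
 * resets it to zero, so multiple notifies coalesce into one read. */
uint64_t drain(int efd)
{
    uint64_t count = 0;

    if (read(efd, &count, sizeof(count)) != sizeof(count))
        return 0;
    return count;
}
```

Three notifies followed by one drain yield a single wakeup with a count of three, which is the coalescing property the patchset relies on.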
The VirtIOPCIProxy bugs field is currently used to enable workarounds
for older guests. Rename it to flags so that other per-device behavior
can be tracked.
A later patch uses the flags field to remember whether ioeventfd should
be used for virtqueue host notification.
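Leaving qemu's actual field layout aside, the bugs-to-flags rename described above is just widening a workaround field into a general bit mask so unrelated per-device behavior can share it. A generic sketch (the flag names here are illustrative, not qemu's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flag bits: once "bugs" becomes "flags", a guest-bug
 * workaround and an unrelated feature toggle such as ioeventfd use
 * can live in the same per-device field. */
#define PROXY_FLAG_BUG_BUS_MASTER   (1u << 0)
#define PROXY_FLAG_USE_IOEVENTFD    (1u << 1)

struct proxy {
    uint32_t flags;
};

static inline bool proxy_has(const struct proxy *p, uint32_t f)
{
    return (p->flags & f) != 0;
}
```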
Signed-off-by: Stefan
There used to be a limit of 6 KVM io bus devices inside the kernel. On
such a kernel, don't use ioeventfd for virtqueue host notification since
the limit is reached too easily. This ensures that existing vhost-net
setups (which always use ioeventfd) have ioeventfds available so they
can continue
On 10/15/2010 09:11 AM, André Weidemann wrote:
Hi Federico,
On 15.06.2010 18:18, Fede wrote:
On Wed, Jun 9, 2010 at 18:51, Adhyas Avasthi <adh...@gmail.com> wrote:
I read an old email thread which talked about GPGPU passthrough in
linux-kvm. Was this implemented?
If not, are there some quick
On Tue, Oct 26, 2010 at 03:10:58PM +0200, Nadav Har'El wrote:
I put copies of all above mentioned documents (in case there's
difficulty in finding them), in
http://www.math.technion.ac.il/~nyh/nested/
Very interesting. Thanks for the overview on that topic.
Joerg
--
To
Signed-off-by: Pradeep Kumar psuri...@linux.vnet.ibm.com
---
client/tests/kvm/tests_base.cfg.sample | 26 +-
client/tests/kvm/unattended/RHEL-6-series.ks | 37 ++
2 files changed, 62 insertions(+), 1 deletions(-)
create mode 100644
From: Jason Wang jasow...@redhat.com
Add a new subtest to check whether kdump works correctly in the guest. This test
tries to trigger a crash on each vcpu and then verifies it by checking the vmcore.
Signed-off-by: Jason Wang jasow...@redhat.com
---
client/tests/kvm/tests/kdump.py |
On Tue, Nov 09, 2010 at 04:15:42PM +0200, Avi Kivity wrote:
- if (nr == BP_VECTOR && !svm_has(SVM_FEATURE_NRIP)) {
+ if (nr == BP_VECTOR && !static_cpu_has(X86_FEATURE_NRIPS)) {
What is static_cpu_has and why you use it only here and boot_cpu_has
in all other places?
Joerg
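At heart, boot_cpu_has() is a bit test against the boot CPU's feature bitmap, where the X86_FEATURE_* number encodes a word index and a bit within it (static_cpu_has additionally patches the branch at boot via asm alternatives, so the test costs nothing at runtime). A user-space mimic of the bitmap test, with a purely hypothetical feature encoding:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Feature numbers encode (word * 32 + bit), like the kernel's
 * X86_FEATURE_* values.  This particular number is illustrative
 * only, not the real NRIPS encoding. */
#define FEATURE_NRIPS  (3 * 32 + 3)

/* Test one feature bit in an array of 32-bit capability words,
 * the same shape of check boot_cpu_has() performs. */
static bool cpu_has(const uint32_t *caps, int feature)
{
    return (caps[feature / 32] >> (feature % 32)) & 1;
}
```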
On 11/11/2010 04:46 PM, Joerg Roedel wrote:
On Tue, Nov 09, 2010 at 04:15:42PM +0200, Avi Kivity wrote:
- if (nr == BP_VECTOR && !svm_has(SVM_FEATURE_NRIP)) {
+ if (nr == BP_VECTOR && !static_cpu_has(X86_FEATURE_NRIPS)) {
What is static_cpu_has
It's like boot_cpu_has, only it works by
On Thu, Nov 11, 2010 at 11:07:01AM +0100, Gerd Hoffmann wrote:
On 11/10/10 18:34, Michael S. Tsirkin wrote:
On Wed, Nov 10, 2010 at 07:14:15PM +0200, Gleb Natapov wrote:
Signed-off-by: Gleb Natapov <g...@redhat.com>
Good stuff. We should also consider using this for
CLI and monitor. Some
On Thu, Nov 11, 2010 at 04:50:10PM +0200, Avi Kivity wrote:
On 11/11/2010 04:46 PM, Joerg Roedel wrote:
On Tue, Nov 09, 2010 at 04:15:42PM +0200, Avi Kivity wrote:
- if (nr == BP_VECTOR && !svm_has(SVM_FEATURE_NRIP)) {
+ if (nr == BP_VECTOR && !static_cpu_has(X86_FEATURE_NRIPS)) {
What is
On Thu, Nov 11, 2010 at 01:47:21PM +, Stefan Hajnoczi wrote:
Virtqueue notify is currently handled synchronously in userspace virtio. This
prevents the vcpu from executing guest code while hardware emulation code
handles the notify.
On systems that support KVM, the ioeventfd mechanism
On Thu, Nov 11, 2010 at 05:05:11PM +0200, Michael S. Tsirkin wrote:
On Thu, Nov 11, 2010 at 11:07:01AM +0100, Gerd Hoffmann wrote:
On 11/10/10 18:34, Michael S. Tsirkin wrote:
On Wed, Nov 10, 2010 at 07:14:15PM +0200, Gleb Natapov wrote:
Signed-off-by: Gleb Natapov <g...@redhat.com>
On Thu, Nov 11, 2010 at 03:47:00PM +0800, Sheng Yang wrote:
Add macro for big-endian machine. (Untested!)
Signed-off-by: Sheng Yang sh...@linux.intel.com
I presume this is tested at the same level as the previous patch?
So you want to fold this into the previous patch.
Also, please build with
On Thu, Nov 11, 2010 at 01:47:21PM +, Stefan Hajnoczi wrote:
Some virtio devices are known to have guest drivers which expect a notify to
be processed synchronously and spin waiting for completion. Only enable
ioeventfd for virtio-blk and virtio-net for now.
Who guarantees that less
On 11/11/2010 03:47 PM, Stefan Hajnoczi wrote:
Some virtio devices are known to have guest drivers which expect a notify to be
processed synchronously and spin waiting for completion. Only enable ioeventfd
for virtio-blk and virtio-net for now.
Which drivers are these?
I only know of the
On Thu, Nov 11, 2010 at 06:59:29PM +0200, Avi Kivity wrote:
On 11/11/2010 03:47 PM, Stefan Hajnoczi wrote:
Some virtio devices are known to have guest drivers which expect a notify to
be processed synchronously and spin waiting for completion. Only enable
ioeventfd for virtio-blk and
On Thu, Nov 11, 2010 at 06:07:53PM +0200, Gleb Natapov wrote:
On Thu, Nov 11, 2010 at 05:05:11PM +0200, Michael S. Tsirkin wrote:
On Thu, Nov 11, 2010 at 11:07:01AM +0100, Gerd Hoffmann wrote:
On 11/10/10 18:34, Michael S. Tsirkin wrote:
On Wed, Nov 10, 2010 at 07:14:15PM +0200, Gleb
On Thu, 11 Nov 2010 15:46:55 +0800
Sheng Yang sh...@linux.intel.com wrote:
Then we can use it instead of magic number 1.
Reviewed-by: Hidetoshi Seto seto.hideto...@jp.fujitsu.com
Cc: Matthew Wilcox wi...@linux.intel.com
Cc: Jesse Barnes jbar...@virtuousgeek.org
Cc:
On Thu, Nov 11, 2010 at 07:12:40PM +0200, Michael S. Tsirkin wrote:
On Thu, Nov 11, 2010 at 06:59:29PM +0200, Avi Kivity wrote:
On 11/11/2010 03:47 PM, Stefan Hajnoczi wrote:
Some virtio devices are known to have guest drivers which expect a notify to be
processed synchronously and spin
Hello All,
I have question on code of rmap_add
Here is the code of the function
613 static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
614 {
624 rmapp = gfn_to_rmap(vcpu->kvm, gfn, sp->role.level);
625 if (!*rmapp) {
626 rmap_printk("rmap_add: %p %llx 0->1\n",
On Thu, Nov 11, 2010 at 01:47:19PM +, Stefan Hajnoczi wrote:
This is a rewrite of the virtio-ioeventfd patchset to work at the virtio-pci.c
level instead of virtio.c. This results in better integration with the
host/guest notifier code and makes the code simpler (no more state machine).
On Thu, Nov 11, 2010 at 10:21 AM, Gerd Hoffmann kra...@redhat.com wrote:
On 11/10/10 18:14, Gleb Natapov wrote:
This is the current state of the patch series for people to comment on.
I am using open firmware naming scheme to specify device path names.
Names look like this on pci machine:
On Thu, 04 Nov 2010 14:04:59 CET, Martin Maurer wrote:
Hi,
Before you begin, prepare the running w2k to use ide disk (and boot on KVM with
ide disks)
For w2k I followed this link (solution 2 worked for me):
http://www.motherboard.windowsreinstall.com/problems.htm
And I am using
On Thu, Nov 11, 2010 at 06:38:47PM +, Prasad Joshi wrote:
Hello All,
I have question on code of rmap_add
Here is the code of the function
613 static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
614 {
624 rmapp = gfn_to_rmap(vcpu->kvm, gfn, sp->role.level);
625
On Fri, Oct 15, 2010 at 8:54 PM, Christian Brunner c...@muc.de wrote:
Hi,
once again, Yehuda committed fixes for all the suggestions made on the
list (and more). Here is the next update for the ceph/rbd block driver.
Please let us know if there are any pending issues.
For those who didn't
On Thu, 2010-11-11 at 17:39 +0800, Avi Kivity wrote:
On 11/11/2010 02:56 AM, Huang Ying wrote:
On Thu, 2010-11-11 at 00:49 +0800, Anthony Liguori wrote:
On 11/10/2010 02:34 AM, Avi Kivity wrote:
Why is the gpa -> hva mapping not
consistent for RAM if -mem-path is not used?
On Friday 12 November 2010 00:12:21 Michael S. Tsirkin wrote:
On Thu, Nov 11, 2010 at 03:47:00PM +0800, Sheng Yang wrote:
Add macro for big-endian machine. (Untested!)
Signed-off-by: Sheng Yang sh...@linux.intel.com
I presume this is tested at the same level as the previous patch?
So
This series attempts to clean up capability support between common
code and device assignment. In doing so, we can move existing MSI
MSI-X capabilities to offsets matching real hardware, and further
enable more capabilities to be exposed.
The last patch is only for RFC, I'd like some input on
Make use of wmask, just like the rest of config space.
Signed-off-by: Alex Williamson alex.william...@redhat.com
---
hw/pci.c | 19 ---
1 files changed, 8 insertions(+), 11 deletions(-)
diff --git a/hw/pci.c b/hw/pci.c
index 92aaa85..12c47ac 100644
--- a/hw/pci.c
+++
This interface doesn't make much sense, adding a capability can
take care of everything, just provide a means to register
capability read/write handlers.
Device assignment does its own thing, so requires a couple of
ugly hacks that will be cleaned up by subsequent patches.
Signed-off-by: Alex
Convert to use common pci_add_capabilities() rather than creating
our own mess.
Signed-off-by: Alex Williamson alex.william...@redhat.com
---
hw/device-assignment.c | 112 +++-
1 files changed, 63 insertions(+), 49 deletions(-)
diff --git
Capabilities are allocated in bytes, so we can track both whether
a byte is used and by what capability in the same structure.
Remove pci_reserve_capability() as there are no users.
Signed-off-by: Alex Williamson alex.william...@redhat.com
---
hw/pci.c | 16 +---
hw/pci.h |6
Capabilities aren't required to be contiguous, so cap.length never
really made much sense. Likewise, cap.start is mostly meaningless
too. Both of these are better served by the capability map. We
can also get rid of cap.supported, since it's really now unused
and redundant with flag in the
Now that common PCI code doesn't have a hangup on capabilities
being contiguous, move assigned device capabilities to match
their offset on physical hardware. This helps for drivers that
assume a capability configuration and don't bother searching.
We can also remove several calls to
Any handlers that actually want to interact with specific capabilities
are going to want to know the capability ID being accessed. With the
capability map, this is readily available, so we can save handlers the
trouble of figuring it out.
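Tracking capabilities per byte, as the earlier patch in this series describes, amounts to a 256-entry map from config-space offset to capability ID; a handler can then recover the ID for any accessed address with a single lookup. A minimal sketch under that assumption (the structure and function names are hypothetical, not qemu's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PCI_CONFIG_SIZE 256

/* Per-byte capability map: config_map[addr] holds the capability ID
 * occupying that byte, or 0 if the byte is free.  One array answers
 * both "is this byte used?" and "by which capability?". */
struct cap_map {
    uint8_t config_map[PCI_CONFIG_SIZE];
};

/* Mark [offset, offset + size) as belonging to cap_id. */
static void cap_map_add(struct cap_map *m, uint8_t cap_id,
                        uint8_t offset, uint8_t size)
{
    memset(m->config_map + offset, cap_id, size);
}

/* What a read/write handler would call to learn which capability
 * a config-space access landed in. */
static uint8_t cap_at(const struct cap_map *m, uint8_t addr)
{
    return m->config_map[addr];
}
```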
Signed-off-by: Alex Williamson alex.william...@redhat.com
Some drivers depend on finding capabilities like power management,
PCI express/X, vital product data, or vendor specific fields. Now
that we have better capability support, we can pass more of these
tables through to the guest. Note that VPD and VNDR are direct pass
through capabilies, the rest
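Guest drivers "find" these capabilities by walking the standard linked list rooted at the capabilities pointer (config offset 0x34), which is why matching physical offsets matters to drivers that cache a layout. A sketch of that walk over a raw config-space image (self-contained illustration, not qemu's code):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_CAPABILITY_LIST 0x34   /* offset of the first-cap pointer */
#define PCI_CAP_LIST_ID     0      /* capability ID byte */
#define PCI_CAP_LIST_NEXT   1      /* next-capability pointer byte */

/* Walk the PCI capability list in a 256-byte config space image;
 * return the offset of the capability with the given ID, or 0 if
 * it is absent.  The guard bounds the walk on malformed lists. */
static uint8_t find_capability(const uint8_t *cfg, uint8_t id)
{
    uint8_t pos = cfg[PCI_CAPABILITY_LIST];
    int guard = 48;

    while (pos && guard--) {
        if (cfg[pos + PCI_CAP_LIST_ID] == id)
            return pos;
        pos = cfg[pos + PCI_CAP_LIST_NEXT];
    }
    return 0;
}
```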
On Thu, Nov 11, 2010 at 07:55:01PM -0700, Alex Williamson wrote:
Make use of wmask, just like the rest of config space.
Signed-off-by: Alex Williamson alex.william...@redhat.com
---
hw/pci.c | 19 ---
1 files changed, 8 insertions(+), 11 deletions(-)
diff --git
On Thu, Nov 11, 2010 at 07:56:46PM -0700, Alex Williamson wrote:
Some drivers depend on finding capabilities like power management,
PCI express/X, vital product data, or vendor specific fields. Now
that we have better capability support, we can pass more of these
tables through to the guest.
On Thu, Nov 11, 2010 at 07:54:49PM -0700, Alex Williamson wrote:
This series attempts to clean up capability support between common
code and device assignment. In doing so, we can move existing MSI
MSI-X capabilities to offsets matching real hardware, and further
enable more capabilities to
On Thu, Nov 11, 2010 at 07:55:43PM -0700, Alex Williamson wrote:
Capabilities are allocated in bytes, so we can track both whether
a byte is used and by what capability in the same structure.
Remove pci_reserve_capability() as there are no users.
Signed-off-by: Alex Williamson
On Fri, 2010-11-12 at 07:22 +0200, Michael S. Tsirkin wrote:
On Thu, Nov 11, 2010 at 07:55:01PM -0700, Alex Williamson wrote:
Make use of wmask, just like the rest of config space.
Signed-off-by: Alex Williamson alex.william...@redhat.com
---
hw/pci.c | 19 ---
On Fri, 2010-11-12 at 07:40 +0200, Michael S. Tsirkin wrote:
On Thu, Nov 11, 2010 at 07:55:43PM -0700, Alex Williamson wrote:
Capabilities are allocated in bytes, so we can track both whether
a byte is used and by what capability in the same structure.
Remove pci_reserve_capability() as
On Fri, 2010-11-12 at 07:36 +0200, Michael S. Tsirkin wrote:
On Thu, Nov 11, 2010 at 07:56:46PM -0700, Alex Williamson wrote:
Some drivers depend on finding capabilities like power management,
PCI express/X, vital product data, or vendor specific fields. Now
that we have better capability
Add AUDIT_POST_SYNC audit for long mode shadow page
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5275c50..f3fad4f 100644
---
If CR0.PG is changed, the page fault can't be avoided when the prefault address
is accessed later.
And it also fixes a bug: it could retry a paging-enabled #PF in a paging-disabled
context if the mmu is in shadow-page mode.
This idea is from Gleb Natapov.
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
Let's support apf for nonpaging guests
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 12 +---
1 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f3fad4f..5ee5b97 100644
---
Retry #PF for softmmu only when the current vcpu has the same
root shadow page as at the time when the #PF occurred; it means they
have the same paging environment.
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |6 ++
arch/x86/kvm/mmu.c |
If apf is generated in L2 guest and is completed in L1 guest, it will
prefault this apf in L1 guest's mmu context.
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c |1 +
arch/x86/kvm/x86.c |