On 04/02/2010 07:41 AM, Marek Olszewski wrote:
When a guest OS writes to a shadowed (and therefore write-protected)
guest page table, does the resulting page fault get handled in
paging_tmpl.h:xxx_page_fault or does it call some rmap-related code
directly?
Page faults are dispatched to the xxx_page_fault handler in paging_tmpl.h.
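For reference, the dispatch looks roughly like this in the 2.6.3x
sources (paraphrased and simplified, not the verbatim code):

int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code)
{
	int r;

	/* dispatch into paging_tmpl.h:FNAME(page_fault), i.e. the
	 * paging32/paging64 instantiation of xxx_page_fault */
	r = vcpu->arch.mmu.page_fault(vcpu, cr2, error_code);
	if (r < 0)
		return r;		/* error */
	if (r == 0)
		return 1;		/* resolved by the shadow MMU */

	/* r > 0: the write hit a write-protected (shadowed) guest page
	 * table; the instruction is emulated, and kvm_mmu_pte_write()
	 * then updates the affected shadow entries */
	return emulate_instruction(vcpu, cr2, error_code, 0) == EMULATE_DONE;
}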
On 04/02/2010 12:27 AM, Tom Lyon wrote:
kvm really wants the event counter to be an eventfd; that allows hooking
it directly to kvm (which can inject an interrupt on an eventfd_signal).
Can you adapt your patch to do this?
I looked further into eventfds - they seem like the perfect solution
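For the record, here is a minimal userspace sketch of the hookup (error
handling omitted; the GSI number is an arbitrary example, and a usable
VM would of course also need an irqchip, memory and vcpus):

#include <fcntl.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm  = ioctl(kvm, KVM_CREATE_VM, 0);
	int efd = eventfd(0, 0);

	/* tell kvm to inject an interrupt on GSI 5 whenever efd fires */
	struct kvm_irqfd irqfd = { .fd = efd, .gsi = 5 };
	ioctl(vm, KVM_IRQFD, &irqfd);

	/* any producer can now inject the interrupt with an 8-byte write */
	uint64_t one = 1;
	write(efd, &one, sizeof(one));
	return 0;
}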
The idea is simple: pin the guest VM's user space so that the
host NIC driver has a chance to DMA to it directly.
The patches are based on the vhost-net backend driver. We add a device
which provides proto_ops such as sendmsg/recvmsg to vhost-net to
send/recv directly to/from the NIC driver.
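The pinning step itself would look something like this (untested
sketch; pin_guest_buffer is a made-up name, and get_user_pages_fast()'s
signature is the 2.6.3x one):

#include <linux/mm.h>

static int pin_guest_buffer(unsigned long uaddr, int npages,
			    struct page **pages)
{
	int pinned;

	/* write=1: the device will DMA into these pages */
	pinned = get_user_pages_fast(uaddr, npages, 1, pages);
	if (pinned < npages) {
		while (pinned > 0)
			put_page(pages[--pinned]);
		return -EFAULT;
	}
	return 0;
}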
From: Xin Xiaohui <xiaohui@intel.com>
Add a device to utilize the vhost-net backend driver for
copy-less data transfer between the guest frontend and the host NIC.
It pins guest user-space pages into host memory and
provides proto_ops such as sendmsg/recvmsg to vhost-net.
Signed-off-by: Xin Xiaohui <xiaohui@intel.com>
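In other words, the glue is shaped like the following (untested; the
mp_* names are made up here, and the ops signatures are the 2.6.3x ones
with the kiocb argument):

#include <linux/module.h>
#include <linux/net.h>

static int mp_sendmsg(struct kiocb *iocb, struct socket *sock,
		      struct msghdr *m, size_t total_len)
{
	/* stub: map the guest buffers in *m and queue them for transmit */
	return total_len;
}

static int mp_recvmsg(struct kiocb *iocb, struct socket *sock,
		      struct msghdr *m, size_t total_len, int flags)
{
	/* stub: post the guest buffers in *m for the NIC to DMA into */
	return 0;
}

static const struct proto_ops mp_socket_ops = {
	.family  = AF_UNSPEC,
	.owner   = THIS_MODULE,
	.sendmsg = mp_sendmsg,
	.recvmsg = mp_recvmsg,
	/* the remaining ops would be sock_no_* defaults in a real driver */
};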
From: Xin Xiaohui <xiaohui@intel.com>
The vhost-net backend currently supports only synchronous send/recv
operations. The patch provides multiple submits and asynchronous
notifications. This is needed for the zero-copy case.
Signed-off-by: Xin Xiaohui <xiaohui@intel.com>
---
drivers/vhost/net.c |
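Shape of the async side, as I read it (illustrative names only, not the
patch's actual symbols): buffers are submitted and queued rather than
waited on, and a DMA-done callback later signals vhost-net so it can
post the used descriptor back to the guest.

#include <linux/eventfd.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct pending_buf {
	struct list_head    list;
	struct eventfd_ctx *done;	/* vhost-net waits on this */
};

static LIST_HEAD(pending);
static DEFINE_SPINLOCK(pending_lock);

/* NIC driver calls this when the DMA for one buffer has finished. */
static void rx_dma_done(struct pending_buf *buf)
{
	spin_lock(&pending_lock);
	list_del(&buf->list);
	spin_unlock(&pending_lock);

	eventfd_signal(buf->done, 1);	/* wake vhost-net's handler */
}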
From: Xin Xiaohui <xiaohui@intel.com>
The patch lets the host NIC driver receive user-space skbs,
so the driver has a chance to DMA directly into guest user-space
buffers through a single ethX interface.
We want it to be more generic, as a zero-copy framework.
Signed-off-by: Xin Xiaohui <xiaohui@intel.com>
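Presumably the receive path then builds skbs whose fragments point at
the pinned guest pages, along these lines (untested sketch; a real
driver would also account truesize and reserve proper headroom):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *build_guest_skb(struct net_device *dev,
				       struct page **pages, int npages,
				       int frag_size)
{
	struct sk_buff *skb = netdev_alloc_skb(dev, 128);
	int i;

	if (!skb)
		return NULL;

	/* each fragment is a pinned guest page: the hardware DMAs
	 * straight into guest memory, with no copy in the host */
	for (i = 0; i < npages; i++)
		skb_fill_page_desc(skb, i, pages[i], 0, frag_size);

	skb->data_len = npages * frag_size;
	skb->len     += skb->data_len;
	return skb;
}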
On Fri, 2 Apr 2010 15:30:10 +0800
xiaohui@intel.com wrote:
From: Xin Xiaohui <xiaohui@intel.com>
The patch lets the host NIC driver receive user-space skbs,
so the driver has a chance to DMA directly into guest user-space
buffers through a single ethX interface.
We want it to be more generic, as a zero-copy framework.
On Fri, Apr 02, 2010 at 09:43:35AM +0300, Avi Kivity wrote:
On 04/01/2010 10:24 PM, Tom Lyon wrote:
But there are multiple MSI-X interrupts; how do you know which one
triggered?
You don't. This would suck for KVM, I guess, but we'd need major rework
of the generic UIO stuff to have
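One obvious shape for that rework: give each MSI-X vector its own
eventfd, and have the handler signal the matching one, so a reader on a
given fd knows exactly which vector fired. Sketch only (msix_entry
setup and teardown omitted):

#include <linux/eventfd.h>
#include <linux/interrupt.h>

static irqreturn_t msix_vector_irq(int irq, void *arg)
{
	struct eventfd_ctx *trigger = arg;	/* this vector's eventfd */

	eventfd_signal(trigger, 1);	/* wakes only this vector's reader */
	return IRQ_HANDLED;
}

/* registration, one eventfd per vector:
 *   request_irq(msix[i].vector, msix_vector_irq, 0, "uio-msix",
 *               triggers[i]);
 */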
Make vhost scalable by creating a separate vhost thread per vhost
device. This provides better scaling across multiple guests and with
multiple interfaces in a guest.
I am seeing better aggregate throughput/latency when running netperf
across multiple guests or multiple interfaces in a guest.
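The core of the change is spawning one kernel thread per device instead
of sharing a single global worker, roughly as below (vhost_worker here
stands in for the real per-device work loop; names are illustrative):

#include <linux/kthread.h>
#include <linux/sched.h>

struct vhost_dev;			/* the real vhost device struct */
static int vhost_worker(void *data);	/* the per-device work loop */

static struct task_struct *vhost_dev_start_worker(struct vhost_dev *dev)
{
	struct task_struct *worker;

	/* one kernel thread per vhost device, named after the owner */
	worker = kthread_create(vhost_worker, dev, "vhost-%d", current->pid);
	if (!IS_ERR(worker))
		wake_up_process(worker);	/* start the per-device loop */
	return worker;
}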
Hello All,
I'm interested in adding nested VMX support to KVM in GSoC 2010 (among
other things). I see that Orit Wasserman has done some work in this
area, but it has not been merged yet. The last patches were posted a few
months ago and I have not seen any substantial progress on that front
since.
Nope, both kernels are 64-bit.
Host uname -a:  Linux gordon 2.6.27-gentoo-r8 #5 Sat Mar 14 18:01:59 GMT
2009 x86_64 AMD Athlon(tm) 64 Processor 3200+ AuthenticAMD GNU/Linux
Guest uname -a: Linux andrew 2.6.28-hardened-r9 #4 Mon Jan 18 22:39:31
GMT 2010 x86_64 AMD Athlon(tm) 64 Processor 3200+
On Fri, 2010-04-02 at 15:25 +0800, xiaohui@intel.com wrote:
The idea is simple: pin the guest VM's user space so that the
host NIC driver has a chance to DMA to it directly.
The patches are based on the vhost-net backend driver. We add a device
which provides proto_ops such as sendmsg/recvmsg to vhost-net.