EHCI / USB2.0 for USB passthrough, or how to pass USB host device

2010-05-08 Thread Tom Lanyon
Hi list,

I've been playing with some KVM guests on KVM 83 on a RedHat 2.6.18 kernel 
(2.6.18-164.15.1.el5).

I tried to pass through a USB TV tuner device with a hostdev option in the 
guest's configuration. The guest can see the device but the driver 
(dvb_usb_dib0700) refuses to initialise it since it detected QEMU emulating a 
USB 1.1 host and needs USB 2.0:

dvb-usb: This USB2.0 device cannot be run on a USB1.1 port. (it lacks a 
hardware PID filter)

Instead, and as this is the only USB device on the host, I tried to pass 
through the whole USB host controller to the guest via PCI pass through.

There are three functions provided by the USB controller's PCI device:
01:08.0 USB Controller: VIA Technologies, Inc. VT82x UHCI USB 1.1 
Controller (rev 62)
01:08.1 USB Controller: VIA Technologies, Inc. VT82x UHCI USB 1.1 
Controller (rev 62)
01:08.2 USB Controller: VIA Technologies, Inc. USB 2.0 (rev 65)

so I tried to pass the USB 2.0 (01:08.2) function to the guest but received an 
error when trying to start the guest:

error: this function is not supported by the hypervisor: No PCI reset 
capability available for :01:08.2

I figured this was because I was only trying to pass one function of a 
multi-function device, so I tried passing all three functions concurrently but 
received the same 'PCI reset capability' error.

So, is there a way to emulate a USB 2.0 / EHCI controller in a guest and pass 
my USB device through? Or alternatively, can anyone suggest how to get the PCI 
device(s) passed through for the physical USB controller?

Thanks,
Tom
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
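[Editorial note: the "No PCI reset capability" refusal above can be probed from userspace before attempting assignment. A minimal sketch, assuming the sysfs `reset` attribute that newer kernels expose when FLR, power-management, or slot reset is available; the hypervisor's own capability test may differ, and `has_pci_reset` is a name invented here for illustration:]

```python
import os

def has_pci_reset(bdf):
    # Return True if the kernel exposes a reset method for PCI function
    # 'bdf' (e.g. "0000:01:08.2").  Newer kernels create a 'reset' node
    # in sysfs when a per-function reset mechanism is available, which is
    # what device assignment needs before handing the function to a guest.
    return os.path.exists("/sys/bus/pci/devices/%s/reset" % bdf)

print(has_pci_reset("0000:01:08.2"))
```

If this prints False for every function of the controller, no amount of passing the functions singly or together will avoid the error above.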


RE: [RFC][PATCH v4 00/18] Provide a zero-copy method on KVM virtio-net.

2010-05-08 Thread Xin, Xiaohui
Michael,
Sorry, somehow I missed this mail. :-(

 Here, we previously considered two ways to utilize the page constructor
 API to dispense the user buffers.
 
 One: Modify __alloc_skb() a bit so that it only allocates the 
  sk_buff structure, with the data pointer pointing to a 
  user buffer that comes from a page constructor API.
  The shinfo of the skb then also comes from the guest.
  When a packet is received from hardware, skb->data is filled
  directly by h/w. This is the way we have done it.
 
  Pros:   We can avoid any copy here.
  Cons:   The guest virtio-net driver needs to allocate skbs in almost
  the same way as the host NIC drivers, say the size
  of netdev_alloc_skb() and the same reserved space in the
  head of the skb. Many NIC drivers match this and are fine
  with it, but some of the latest NIC drivers reserve special
  room in the skb head. To deal with that, we suggest providing
  a method in the guest virtio-net driver to ask the NIC driver
  for the parameters we are interested in, once we know which device 
  is bound for zero-copy. Then we ask the guest to do so.
  Is that reasonable?

Do you still do this?

Currently, we still use the first way, but we now ignore the room that the 
host's skb_reserve() requires when the device is doing zero-copy. This way we 
no longer taint the guest virtio-net driver with a new method.

 Two: Modify the driver to get user buffers allocated from a page constructor
  API (substituting alloc_page()); the user buffers are used as payload
  buffers and filled by h/w directly when a packet is received. The driver
  should associate the pages with the skb (skb_shinfo(skb)->frags). For 
  the head buffer side, let the host allocate the skb, and h/w fills it. 
  After that, the data filled into the host skb header is copied into the
  guest header buffer, which is submitted together with the payload buffer.
 
  Pros:   We care less about how the guest or the host allocates its
  buffers.
  Cons:   We still need a bit copy here for the skb header.
 
 We are not sure which way is better here. This is the first thing we want
 to get comments on from the community. We hope the modification to the network
 part will be generic: used not only by the vhost-net backend, but also by a
 user application, once the zero-copy device provides async
 read/write operations later.

I commented on this in the past. Do you still want comments?

Now we continue with the first way and try to push it. But any comments about 
the two methods are still welcome.

That's nice. The thing to do is probably to enable GSO/TSO
and see what we get this way. Also, mergeable buffer support
was recently posted and I hope to merge it for 2.6.35.
You might want to take a look.

I'm looking at the mergeable buffers. I think GSO/GRO support with zero-copy 
also needs them.
Is GSO/TSO currently still unsupported by vhost-net?
-- 
MST
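[Editorial note: the trade-off in the second method above, shared payload pages plus a small header copy, can be illustrated with a toy model. All names and sizes here are illustrative, not the actual kernel interfaces:]

```python
HDR_LEN = 64  # illustrative header region; real drivers vary

def receive(host_skb_head, payload_pages, guest_header_buf):
    # Method two from the mail: copy only the header bytes into the
    # guest-supplied header buffer...
    guest_header_buf[:HDR_LEN] = host_skb_head[:HDR_LEN]
    # ...while the payload pages are handed over by reference (zero-copy).
    return guest_header_buf, payload_pages

host_hdr = bytearray(b"\x45" * HDR_LEN)  # pretend h/w-filled header
pages = [bytearray(4096)]                # pretend h/w-filled payload page
guest_buf = bytearray(HDR_LEN)

hdr, out_pages = receive(host_hdr, pages, guest_buf)
assert hdr == host_hdr           # the small header copy
assert out_pages[0] is pages[0]  # payload page shared, not copied
```

The cost is bounded by the header length, which is why the mail calls it "a bit copy"; the bulk payload never crosses the host/guest boundary by copy.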


Re: [Qemu-devel] Qemu-KVM 0.12.3 and Multipath - Assertion

2010-05-08 Thread André Weidemann

Hi Kevin,
On 04.05.2010 14:20, Kevin Wolf wrote:


On 04.05.2010 13:38, Peter Lieven wrote:

Hi Kevin,

I set a breakpoint at bmdma_active_if. The first two breaks were hit
when the last path in the multipath
failed, but the assertion was not true.
When I kicked one path back in, the breakpoint was reached again, this
time leading to an assert.
The stack trace is from the point shortly before.

Hope this helps.


Hm, looks like there's something wrong with cancelling requests -
bdrv_aio_cancel might decide that it completes a request (and
consequently calls the callback for it) whereas the IDE emulation
decides that it's done with the request before calling bdrv_aio_cancel.

I haven't looked in much detail what this could break, but does
something like this help?


Your attached patch fixes the problem I had as well. I ran 3 consecutive 
tests tonight, which all finished without crashing the VM.
I reported my assertion failed error on March 14th while doing disk 
performance tests using iozone in an Ubuntu 9.10 VM with qemu-kvm 0.12.3.


Thank you very much.
 André


Re: EHCI / USB2.0 for USB passthrough, or how to pass USB host device

2010-05-08 Thread Jan Kiszka
Tom Lanyon wrote:
 Hi list,
 
 I've been playing with some KVM guests on KVM 83 on a RedHat 2.6.18 kernel 
 (2.6.18-164.15.1.el5).
 
 I tried to pass through a USB TV tuner device with a hostdev option in the 
 guest's configuration. The guest can see the device but the driver 
 (dvb_usb_dib0700) refuses to initialise it since it detected QEMU emulating a 
 USB 1.1 host and needs USB 2.0:
   
   dvb-usb: This USB2.0 device cannot be run on a USB1.1 port. (it lacks a 
 hardware PID filter)
 
 Instead, and as this is the only USB device on the host, I tried to pass 
 through the whole USB host controller to the guest via PCI pass through.
 
 There are three functions provided by the USB controller's PCI device:
   01:08.0 USB Controller: VIA Technologies, Inc. VT82x UHCI USB 1.1 
 Controller (rev 62)
   01:08.1 USB Controller: VIA Technologies, Inc. VT82x UHCI USB 1.1 
 Controller (rev 62)
   01:08.2 USB Controller: VIA Technologies, Inc. USB 2.0 (rev 65)
 
 so I tried to pass the USB 2.0 (01:08.2) function to the guest but received 
 an error when trying to start the guest:
 
   error: this function is not supported by the hypervisor: No PCI reset 
 capability available for :01:08.2
 
 I figured this was because I was only trying to pass one function of a 
 multi-function device, so I tried passing all three functions concurrently 
 but received the same 'PCI reset capability' error.
 
 So, is there a way to emulate a USB 2.0 / EHCI controller in a guest and pass 
 my USB device through? Or alternatively, can anyone suggest how to get the 
 PCI device(s) passed through for the physical USB controller?

There is ongoing work by David Ahern et al. to add EHCI emulation. I'm
hosting the QEMU tree that carries the patches:

git://git.kiszka.org/qemu.git ehci

As KVM is usable in upstream QEMU, you should be able to test it this
way (append -enable-kvm). Feedback welcome (to qemu-devel please).

Jan





[KVM_AUTOTEST][PATCH] KSM_overcommit: dynamic reserve calculation (2)

2010-05-08 Thread Lukas Doktor
Hi,

thanks for the nice page about the git workflow. I always wanted to try it but 
never had the time to sit down and learn...

Both the TMPFS and the 0.055 guest_reserve constant are set empirically using 
various RHEL and Fedora guests/hosts. Smaller hosts can work with a smaller 
constant (0.045), but we didn't want to make the code more complex to fit the 
limits exactly.
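[Editorial note: the tmpfs sizing these constants feed into can be sketched as follows. `TMPFS_OVERHEAD` is the constant from patch 4/4; `tmpfs_mount_size` is a name invented here for illustration:]

```python
import math

TMPFS_OVERHEAD = 0.0022  # patch constant: tmpfs overhead per 1 MB written

def tmpfs_mount_size(mem_mb):
    # Size the tmpfs mount so 'mem_mb' MB of written data plus the
    # filesystem's own overhead still fits (this replaces the old
    # fixed "+ 25" MB slack in allocator.py).
    return mem_mb + int(math.ceil(mem_mb * TMPFS_OVERHEAD))

print(tmpfs_mount_size(2048))  # 2048 MB of data -> 2053 MB mount
```

Scaling the slack with the amount written is what lets the same code serve both small and large guests without per-host tuning.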

Original changelog:
* NEW: guest_reserve and host_reserve are now calculated based on used memory
* NEW: tmpfs reserve is also evaluated to fit the overhead
* NEW: VM alive check during split_guest()
* FIX: In function split_guest() we used incorrect session
* MOD: Increase number of VNC ports

Best regards,
Lukas




[PATCH 1/4] Increase maximum number of VNC ports

2010-05-08 Thread Lukas Doktor
Signed-off-by: Lukas Doktor ldok...@redhat.com
---
 client/tests/kvm/kvm_vm.py |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index d1e0246..c203e14 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -396,7 +396,7 @@ class VM:
 
 # Find available VNC port, if needed
 if params.get("display") == "vnc":
-self.vnc_port = kvm_utils.find_free_port(5900, 6000)
+self.vnc_port = kvm_utils.find_free_port(5900, 6100)
 
 # Find random UUID if specified 'uuid = random' in config file
 if params.get("uuid") == "random":
-- 
1.6.2.5
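[Editorial note: the kvm_utils helper the patch calls is not shown in the diff. A minimal sketch of what a port probe along these lines might look like, a hypothetical implementation, the real kvm_utils version may differ:]

```python
import socket

def find_free_port(start, end):
    # Probe TCP ports in [start, end) and return the first one that can
    # be bound locally, or None if the whole range is busy.
    for port in range(start, end):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("localhost", port))
            return port
        except socket.error:
            pass
        finally:
            s.close()
    return None
```

Widening the range from 6000 to 6100 simply gives the test 100 more candidate VNC display numbers when many VMs run concurrently.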



[PATCH 2/4] Add VM alive check while splitting the guest's pages

2010-05-08 Thread Lukas Doktor
Signed-off-by: Lukas Doktor ldok...@redhat.com
---
 client/tests/kvm/tests/ksm_overcommit.py |   10 ++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/tests/ksm_overcommit.py 
b/client/tests/kvm/tests/ksm_overcommit.py
index 4aa6deb..b3d6880 100644
--- a/client/tests/kvm/tests/ksm_overcommit.py
+++ b/client/tests/kvm/tests/ksm_overcommit.py
@@ -142,6 +142,12 @@ def run_ksm_overcommit(test, params, env):
 session = None
 vm = None
 for i in range(1, vmsc):
+# Check VMs
+for j in range(0, vmsc):
+if not lvms[j].is_alive():
+e_msg = ("VM %d died while executing static_random_fill in"
+ " VM %d on allocator loop" % (j, i))
+raise error.TestFail(e_msg)
 vm = lvms[i]
 session = lsessions[i]
 a_cmd = mem.static_random_fill()
@@ -154,6 +160,10 @@ def run_ksm_overcommit(test, params, env):
 logging.debug("Watching host memory while filling vm %s memory",
   vm.name)
 while not out.startswith("PASS") and not out.startswith("FAIL"):
+if not vm.is_alive():
+e_msg = ("VM %d died while executing static_random_fill"
+ " on allocator loop" % i)
+raise error.TestFail(e_msg)
 free_mem = int(utils.read_from_meminfo("MemFree"))
 if (ksm_swap):
 free_mem = (free_mem +
-- 
1.6.2.5



[PATCH 3/4] FIX: Incorrect session in function split_guest()

2010-05-08 Thread Lukas Doktor
Signed-off-by: Jiri Zupka jzu...@redhat.com
---
 client/tests/kvm/tests/ksm_overcommit.py |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/tests/ksm_overcommit.py 
b/client/tests/kvm/tests/ksm_overcommit.py
index b3d6880..24d6643 100644
--- a/client/tests/kvm/tests/ksm_overcommit.py
+++ b/client/tests/kvm/tests/ksm_overcommit.py
@@ -212,7 +212,7 @@ def run_ksm_overcommit(test, params, env):
 
 # Verify last machine with randomly generated memory
 a_cmd = mem.static_random_verify()
-_execute_allocator(a_cmd, lvms[last_vm], session,
+_execute_allocator(a_cmd, lvms[last_vm], lsessions[last_vm],
(mem / 200 * 50 * perf_ratio))
 logging.debug(kvm_test_utils.get_memory_info([lvms[last_vm]]))
 
-- 
1.6.2.5



[PATCH 4/4] guest_reserve, host_reserve and tmpfs overhead automatic calculation based on used memory

2010-05-08 Thread Lukas Doktor
Signed-off-by: Lukas Doktor ldok...@redhat.com
Signed-off-by: Jiri Zupka jzu...@redhat.com
---
 client/tests/kvm/scripts/allocator.py|   11 +---
 client/tests/kvm/tests/ksm_overcommit.py |   41 +++---
 client/tests/kvm/tests_base.cfg.sample   |6 ++--
 3 files changed, 47 insertions(+), 11 deletions(-)

diff --git a/client/tests/kvm/scripts/allocator.py 
b/client/tests/kvm/scripts/allocator.py
index 1036893..227745a 100755
--- a/client/tests/kvm/scripts/allocator.py
+++ b/client/tests/kvm/scripts/allocator.py
@@ -8,10 +8,12 @@ Auxiliary script used to allocate memory on guests.
 
 
 
-import os, array, sys, struct, random, copy, inspect, tempfile, datetime
+import os, array, sys, struct, random, copy, inspect, tempfile, datetime, math
 
 PAGE_SIZE = 4096 # machine page size
 
+TMPFS_OVERHEAD = 0.0022 # overhead on 1MB of write data 
+
 
 class MemFill(object):
 
@@ -32,7 +34,8 @@ class MemFill(object):
 
 self.tmpdp = tempfile.mkdtemp()
 ret_code = os.system("mount -o size=%dM tmpfs %s -t tmpfs" %
- ((mem + 25), self.tmpdp))
+ ((mem + math.ceil(mem * TMPFS_OVERHEAD)),
+ self.tmpdp))
 if ret_code != 0:
 if os.getuid() != 0:
 print ("FAIL: Unable to mount tmpfs "
@@ -42,7 +45,7 @@ class MemFill(object):
 else:
 self.f = tempfile.TemporaryFile(prefix='mem', dir=self.tmpdp)
 self.allocate_by = 'L'
-self.npages = (mem * 1024 * 1024) / PAGE_SIZE
+self.npages = ((mem * 1024 * 1024) / PAGE_SIZE)
 self.random_key = random_key
 self.static_value = static_value
 print "PASS: Initialization"
@@ -83,7 +86,7 @@ class MemFill(object):
 @return: return array of bytes size PAGE_SIZE.
 
 a = array.array("B")
-for i in range(PAGE_SIZE / a.itemsize):
+for i in range((PAGE_SIZE / a.itemsize)):
 try:
 a.append(value)
 except:
diff --git a/client/tests/kvm/tests/ksm_overcommit.py 
b/client/tests/kvm/tests/ksm_overcommit.py
index 24d6643..8dc1722 100644
--- a/client/tests/kvm/tests/ksm_overcommit.py
+++ b/client/tests/kvm/tests/ksm_overcommit.py
@@ -348,12 +348,29 @@ def run_ksm_overcommit(test, params, env):
 
 # Main test code
 logging.info(Starting phase 0: Initialization)
+
 # host_reserve: mem reserve kept for the host system to run
-host_reserve = int(params.get("ksm_host_reserve", 512))
+host_reserve = int(params.get("ksm_host_reserve", -1))
+if (host_reserve == -1):
+# default host_reserve = MemAvailable + one_minimal_guest(128MB)
+# later we add 64MB per additional guest
+host_reserve = ((utils.memtotal() - utils.read_from_meminfo("MemFree"))
+/ 1024 + 128)
+# using default reserve
+_host_reserve = True
+else:
+_host_reserve = False
+
 # guest_reserve: mem reserve kept to avoid guest OS to kill processes
-guest_reserve = int(params.get("ksm_guest_reserve", 1024))
-logging.debug("Memory reserved for host to run: %d", host_reserve)
-logging.debug("Memory reserved for guest to run: %d", guest_reserve)
+guest_reserve = int(params.get("ksm_guest_reserve", -1))
+if (guest_reserve == -1):
+# default guest_reserve = minimal_system_mem(256MB)
+# later we add tmpfs overhead
+guest_reserve = 256
+# using default reserve
+_guest_reserve = True
+else:
+_guest_reserve = False
 
 max_vms = int(params.get("max_vms", 2))
 overcommit = float(params.get("ksm_overcommit_ratio", 2.0))
@@ -365,6 +382,10 @@ def run_ksm_overcommit(test, params, env):
 
 if (params['ksm_mode'] == "serial"):
 max_alloc = vmsc
+if _host_reserve:
+# First round of additional guest reserves
+host_reserve += vmsc * 64
+_host_reserve = vmsc
 
 host_mem = (int(utils.memtotal()) / 1024 - host_reserve)
 
@@ -412,6 +433,10 @@ def run_ksm_overcommit(test, params, env):
 if mem - guest_reserve - 1 < 3100:
 vmsc = int(math.ceil((host_mem * overcommit) /
  (3100 + guest_reserve)))
+if _host_reserve:
+host_reserve += (vmsc - _host_reserve) * 64
+host_mem -= (vmsc - _host_reserve) * 64
+_host_reserve = vmsc
 mem = int(math.floor(host_mem * overcommit / vmsc))
 
 if os.popen("uname -i").readline().startswith("i386"):
@@ -420,8 +445,16 @@ def run_ksm_overcommit(test, params, env):
 if mem < 3100 - 64:
 vmsc = int(math.ceil((host_mem * overcommit) /
  (3100 - 64.0)))
+if _host_reserve:
+host_reserve += (vmsc - _host_reserve) * 64
+host_mem -= (vmsc - _host_reserve) * 64
+_host_reserve = vmsc