Re: limit connectivity of a VM

2010-11-20 Thread Thomas Mueller
On Fri, 19 Nov 2010 23:17:42 +0330, hadi golestani wrote:

 Hello,
 I need to limit the port speed of a VM to 10 Mbps (or 5 Mbps, if
 possible). What's the way of doing so?
 
 Regards

maybe one of the virtual network cards is 10mbit? start kvm with -net 
nic,model=? to get a list.


- Thomas

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH] vapic: fix vapic option rom sizing

2010-11-20 Thread Avi Kivity
An option rom is required to be aligned to 512 bytes, and to have some
space for a signature.
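The same sizing rule can be illustrated outside the patch with standard tools: reserve a byte for the signature, then pad the image to the next 512-byte boundary. A sketch only — file names and sizes here are made up, and this is not part of the patch itself:

```shell
rom=demo.rom
head -c 100 /dev/zero > "$rom"    # pretend this is a 100-byte ROM body

printf '\0' >> "$rom"             # reserve one byte for the signature

# pad with zero bytes up to the next 512-byte boundary
size=$(wc -c < "$rom")
pad=$(( (512 - size % 512) % 512 ))
head -c "$pad" /dev/zero >> "$rom"

echo "padded size: $(( $(wc -c < "$rom") )) bytes"
```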

Signed-off-by: Avi Kivity a...@redhat.com
---
 pc-bios/optionrom/vapic.S |4 
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/pc-bios/optionrom/vapic.S b/pc-bios/optionrom/vapic.S
index 3c8dcf1..97fa19c 100644
--- a/pc-bios/optionrom/vapic.S
+++ b/pc-bios/optionrom/vapic.S
@@ -323,4 +323,8 @@ up_set_tpr_poll_irq:
 
 vapic:
 . = . + vapic_size
+
+.byte 0  # reserve space for signature
+.align 512, 0
+
 _end:
-- 
1.7.3.1



Re: limit connectivity of a VM

2010-11-20 Thread Javier Guerra Giraldez
On Sat, Nov 20, 2010 at 3:40 AM, Thomas Mueller tho...@chaschperli.ch wrote:
 maybe one of the virtual network cards is 10mbit? start kvm with -net
 nic,model=? to get a list.

Wouldn't matter. Different models emulate the hardware registers
used to transmit, not the performance.

if you had infinitely fast processors, every virtual network would be
infinitely fast.

-- 
Javier


Re: limit connectivity of a VM

2010-11-20 Thread linux_kvm
 if you had infinitely fast processors, every virtual network would be
 infinitely fast.

I see on a Vyatta VM that an interface's link speed attribute can be
explicitly defined, along with duplex.

Possible values are 10, 100, and 1000 Mb, and they are configured
independently of the driver/model of NIC.

I haven't tested it yet, and since discovering this detail, have been
somewhat disheartened at the thought of ~8 Gb vhost throughput being
throttled by the highest possible link speed setting being 1000 Mb.

So maybe plan B could be to install a test router just for that
function, and loop traffic through it.
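For reference, the host-side alternative usually suggested for this is traffic shaping with tc on the tap device backing the guest's NIC, which works regardless of the emulated NIC model. A minimal sketch, assuming the guest is attached to a host tap named tap0 (adjust to your setup); the commands are echoed rather than executed so the sketch can be read without root privileges — drop the leading "echo" to actually apply them:

```shell
TAP=tap0
RATE=10mbit

# traffic flowing into the guest: a token-bucket qdisc on the tap's root
echo tc qdisc add dev "$TAP" root tbf rate "$RATE" burst 32kbit latency 50ms

# traffic coming out of the guest: ingress policing on the same device
echo tc qdisc add dev "$TAP" handle ffff: ingress
echo tc filter add dev "$TAP" parent ffff: protocol ip u32 \
    match u32 0 0 police rate "$RATE" burst 32k drop flowid :1
```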



-C



On Sat, 20 Nov 2010 10:39 -0500, Javier Guerra Giraldez
jav...@guerrag.com wrote:
 On Sat, Nov 20, 2010 at 3:40 AM, Thomas Mueller tho...@chaschperli.ch
 wrote:
  maybe one of the virtual network cards is 10mbit? start kvm with -net
  nic,model=? to get a list.
 
 wouldn't matter.   different models emulate the hardware registers
 used to transmit, not the performance.
 
 if you had infinitely fast processors, every virtual network would be
 infinitely fast.
 
 -- 
 Javier


[PATCH 1/1] Initial built-in iSCSI support

2010-11-20 Thread ronnie sahlberg
Resending since the mail might have been lost...

List,


Please find attached a patch that adds an initial iSCSI client library
and support to kvm-qemu.
This allows using iSCSI devices without making them visible to the
host (and without polluting the page cache on the host),
and allows kvm-qemu to access the devices directly.


This works with iSCSI resources, both disks and CD-ROMs, made available
through TGTD.

See this as experimental at this stage.
This is a very early implementation, but it does work with TGTD.
Significant effort is required to add additional features to ensure
full interoperability with other initiators.


The syntax to describe an iSCSI LUN is:
   iscsi://host[:port]/target-name/lun

Example:
  -drive file=iscsi://127.0.0.1:3260/iqn.ronnie.test/1
or
  -cdrom iscsi://127.0.0.1:3260/iqn.ronnie.test/2
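For the curious, the URL shape above splits cleanly with plain POSIX shell parameter expansion; a sketch using the sample values from the examples (this is illustrative, not code from the patch):

```shell
url="iscsi://127.0.0.1:3260/iqn.ronnie.test/1"

rest=${url#iscsi://}      # 127.0.0.1:3260/iqn.ronnie.test/1
hostport=${rest%%/*}      # 127.0.0.1:3260
host=${hostport%%:*}      # 127.0.0.1
port=${hostport#*:}       # 3260 (equals hostport when no port is given)
path=${rest#*/}           # iqn.ronnie.test/1
target=${path%%/*}        # iqn.ronnie.test
lun=${path#*/}            # 1

echo "$host $port $target $lun"
```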


Future work needed:
 - document this in the manpage
 - implement target-initiated NOP
 - implement uni- and bi-directional CHAP
 - implement a lot of missing iSCSI functionality
...
 - maybe get it hooked into bus=scsi, just like today it can hook into
/dev/sg* devices and send CDBs directly to the LUN;
  that part of the code could be enhanced to also provide straight
passthrough SCSI to these iSCSI LUNs too.


My long-term aim is to convert the ./block/iscsi/ subdirectory into a
standalone general-purpose iSCSI client library,
which is why I want to avoid implementing qemu-specific code inside this directory!


Please comment and/or apply.



Many thanks to anthony l and stefan h, who did an initial review so I
could eliminate the most obvious flaws and mistakes in the interfacing
code. Thanks!


Now, since this is open source, there must be other people out there
interested in iSCSI, so let's work together to build a mature and
complete iSCSI library for kvm-qemu and others.


Have fun,
ronnie sahlberg


0001-initial-iscsi-support-for-kvm-qemu.patch.gz
Description: GNU Zip compressed data


Re: seabios 0.6.1 regression

2010-11-20 Thread Avi Kivity

On 11/16/2010 03:19 PM, Alexander Graf wrote:



  Rewriting it to use inb / stos works (jecxz ; insb; loop doesn't) so it 
looks like a kernel bug in insb emulation.


  Turns out it was a subtle bug in the tpr optimization we do for Windows XP.  
The problem happens when we load the vapic option rom from the firmware config 
interface.  With inb / movb, writing the vapic area happens in guest context, 
which the kernel is prepared to handle.  With insb, the write happens from kvm, 
which is then undone on the next entry, leading to the tpr being set to a high 
value.

Shouldn't the vapic area be mapped in on demand? Then we could map it on option 
rom init time and everyone's happy.


Mapped in? It's an option rom.

--
error compiling committee.c: too many arguments to function




Re: [Qemu-devel] [PATCH 0/1] [PULL] qemu-kvm.git uq/master queue

2010-11-20 Thread Anthony Liguori

On 11/05/2010 07:44 PM, Marcelo Tosatti wrote:

The following changes since commit d33ea50a958b2e050d2b28e5f17e3b55e91c6d74:

   scsi-disk: Fix immediate failure of bdrv_aio_* (2010-11-04 13:54:37 +0100)
   


Pulled.  Thanks.

Regards,

Anthony Liguori


are available in the git repository at:
   git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git uq/master

Gleb Natapov (1):
   Add support for async page fault to qemu

  target-i386/cpu.h |1 +
  target-i386/cpuid.c   |2 +-
  target-i386/kvm.c |   14 ++
  target-i386/machine.c |   26 ++
  4 files changed, 42 insertions(+), 1 deletions(-)


   




Re: KVM with hugepages generates huge load with two guests

2010-11-20 Thread Dmitry Golubev
Hi,

Seems that nobody is interested in this bug :(

Anyway I wanted to add a bit more to this investigation.

Once I put nohz=off highres=off clocksource=acpi_pm in the guest kernel
options, the guests started to behave better - they do not stay in the
slow state, but rather enter it for some seconds (usually up to a
minute, sometimes 2-3 minutes) and then get out of it; this cycle
repeats every approx. 3-6 minutes. Once the situation became stable
enough that I could leave the guests without much worry, I also noticed
that the predicted swapping sometimes occurs on the host, although
rarely (I waited about half an hour to catch the first swap). Here is a
fragment of vmstat output. Note that when the first column shows 8-9,
the slowness and huge load happen. You can also see how it appears and
disappears (with nohz and kvm-clock it did not come out of the slow
state at all, but with the tsc clocksource the probability of getting
out is significantly lower):

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 8  0  0  60456  19708 25368800 6   170 5771 1712 97  3  0  0
 9  5  0  58752  19708 253688001157 6457 1500 96  4  0  0
 8  0  0  58192  19708 2536880055   106 5112 1588 98  3  0  0
 8  0  0  58068  19708 2536880021 0 2609 1498 100  0  0  0
 8  2  0  57728  19708 25368800 996 2645 1620 100  0  0  0
 8  0  0  53852  19716 25368000 2   186 6321 1935 97  4  0  0
 8  0  0  49636  19716 25368800 045 3482 1484 99  1  0  0
 8  0  0  49452  19716 25368800 034 3253 1851 100  0  0  0
 4  1   1468 126252  16780 182256   53  317   393   788 29318 3498 79 21  0  0
 4  0   1468 135596  16780 18233200 7   360 26782 2459 79 21  0  0
 1  0   1468 169720  16780 182340007581 22024 3194 40 15 42  3
 3  0   1464 167608  16780 1823406026  1579 9404 5526 22  8 35 35
 0  0   1460 164232  16780 1825040085   170 4955 3345 21  5 69  5
 0  0   1460 163636  16780 18250400 090 1288 1855  5  2 90  3
 1  0   1460 164836  16780 18250400 034 1166 1789  4  2 93  1
 1  0   1452 165628  16780 18250400   28570 1981 2692 10  2 83  4
 1  0   1452 160044  16952 18484060   832   146 5046 3303 11  6 76  7
 1  0   1452 161416  16960 1848400019   170 1732 2577 10  2 74 13
 0  1   1452 161920  16960 18484000   11153 1084 1986  0  1 96  3
 0  0   1452 161332  16960 18484000   25434  856 1505  2  1 95  3
 1  0   1452 159168  16960 18484000   36646 2137 2774  3  2 94  1
 1  0   1452 157408  16968 18484000 069 2423 2991  9  5 84  2
 0  0   1444 157876  16968 18484000 045 6343 3079 24 10 65  1
 0  0   1428 159644  16968 18484460 852  724 1276  0  0 98  2
 0  0   1428 160336  16968 184844003198 1115 1835  1  1 92  6
 1  0   1428 161360  16968 18484400 045 1333 1849  2  1 95  2
 0  0   1428 162092  16968 18484400 0   408 3517 4267 11  2 78  8
 1  1   1428 163868  16968 1848440024   121 1714 2036 10  2 86  2
 1  3   1428 161292  16968 18484400 3   143 2906 3503 16  4 77  3
 0  0   1428 156448  16976 18483600 1   781 5661 4464 16  7 74  3
 1  0   1428 156924  16976 18484400   58892 2341 3845  7  2 87  4
 0  0   1428 158816  16976 1848440027   119 2052 3830  5  1 89  4
 0  0   1428 161420  16976 18484400 156 3923 3132 26  4 68  1
 0  0   1428 162724  16976 1848440010   107 2806 3558 10  2 86  2
 1  0   1428 165244  16976 1848440034   155 2084 2469  8  2 78 12
 0  0   1428 165204  16976 18484400   390   282 9568 4924 17 11 55 17
 1  0   1392 163864  16976 185064  1020   218   411 11762 16591  6  9 68 17
 8  0   1384 164992  16984 18505600 988 7540 5761 73  6 17  4
 8  0   1384 163620  16984 18507600 189 21936 45040 90 10  0  0
 8  0   1384 165324  16992 18507600 5   194 3330 1678 99  1  0  0
 8  0   1384 165704  16992 18507600 154 2651 1457 99  1  0  0
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 8  0   1384 163016  17000 18507600 0   126 4988 1536 97  3  0  0
 9  1   1384 162608  17000 1850760034   477 20106 2351 83 17  0  0
 0  0   1384 184052  17000 18507600   102  1198 48951 3628 48 38  6  8
 0  0   1384 183088  17008 18507600 8   156 1228 1419  2  2 82 14
 0  0   1384 184436  17008 1851640028   113 3176 2785 12  7 75  6
 0  0   1384 184568  17008 1851640030   107 1547 1821  4  3 87  6
 4  2   1228 228808  17008 

Re: [PATCH 1/1] Initial built-in iSCSI support

2010-11-20 Thread Alexander Graf

On 20.11.2010, at 20:52, ronnie sahlberg wrote:

 Resending since the mail might have been lost...
 
 List,
 
 
 Please find attached a patch that adds an initial iSCSI client library
 and support to kvm-qemu.
 This allows using iSCSI devices without making them visible to the
 host (and without polluting the page cache on the host),
 and allows kvm-qemu to access the devices directly.

Please resend with an in-line patch to qemu-de...@nongnu.org. KVM is really the 
wrong mailing list for this, as it only deals with KVM specifics and this falls 
into generic qemu code.

Also, I'd recommend CC'ing kw...@redhat.com, as he is the block layer 
maintainer and this is a block driver. If you want a thorough review of the iSCSI 
parts, I'd also recommend CC'ing h...@suse.de and n...@linux-iscsi.org.


 
 
 This works with iSCSI resources, both disks and CD-ROMs, made available
 through TGTD.
 
 See this as experimental at this stage.
 This is a very early implementation, but it does work with TGTD.
 Significant effort is required to add additional features to ensure
 full interoperability with other initiators.
 
 
 The syntax to describe an iscsi lun is :
   iscsi://host[:port]/target-name/lun
 
 Example :
  -drive file=iscsi://127.0.0.1:3260/iqn.ronnie.test/1
 or
  -cdrom iscsi://127.0.0.1:3260/iqn.ronnie.test/2
 
 
 Future work needed :
 - document this in the manpage
 - implement target initiated NOP
 - implement uni- and bi-directional chap
 - implement a lot of missing iscsi functionality
 ...
 - maybe get it hooked into bus=scsi  just like it today can hook into
 /dev/sg* devices and send CDBs directly to the LUN,
  that part of the code could be enhanced to also provide straight
 passthrough SCSI to these iscsi-luns too.

The whole passthrough thing should be a property of the block layer, yes. 
Multiple backends could potentially send direct SCSI commands somewhere, 
and should be able to decide on that themselves.


Alex
