suggested vhost link speed settings

2010-11-27 Thread linux_kvm
Hi list,

Given that the virtio interfaces are stated as achieving 5-8 Gb
throughput now with vhost, as opposed to 1 Gb without, how should their
link speed be defined when the choices are 2500M or 1M?

I have them plotted out to make a 10Gb bond out of a pair, counting on
5Gb max each, which I imagine can be achieved without concern based on
what I've read.
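
For what it's worth, here's a minimal sketch of the guest-side bonding I
have in mind, assuming the Linux bonding driver and interface names
eth0/eth1 (the names and mode are placeholders, not a tested config):

  # Load the bonding driver; bond0 is created by default
  modprobe bonding
  # Select the mode while bond0 is still down (802.3ad assumed here)
  echo 802.3ad > /sys/class/net/bond0/bonding/mode
  ip link set bond0 up
  # Enslave the two virtio interfaces
  ifenslave bond0 eth0 eth1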

If I set them to 'auto-negotiate', will it internally flap between
speeds or cause other undesirable consequences?
I don't want to set them at 2500 in case they ever do need to reach
closer to 5 each.

I'm not at a point where I can test anything yet; I'm just planning and
preconfiguring so far.

To prevent issues down the line, I wanted to see whether there's a
consensus or standard for this scenario so I can be as sure as possible
ahead of time.

Thanks,

-C


Re: [RFC] GPGPU Support In KVM

2010-11-26 Thread linux_kvm
Interesting, indeed. Looking forward to it as well.


On Wed, 24 Nov 2010 20:56 +0100, André Weidemann
andre.weidem...@web.de wrote:
 Hi,
 On 24.11.2010 15:06, Prasad Joshi wrote:
  I have been following the KVM mailing list for the last few months and have 
  learned that KVM does not have GPU pass-through support. As far as I can 
  understand, adding GPU pass-through would make a GPU device available to a VM 
  as a graphics card; let me know if I am wrong. After the completion of GPGPU 
  support in the VM I would love to work on this support as well.
 
  Please let me know your thoughts.
 
 There is already someone who was working on GPU pass-through for KVM. 
 Search the mailing list archives for _Fede_. Back in June he said that 
 he would need to debug a BIOS issue in order to make it work.
 Maybe you two should get together.
 I hope this helps. I am really looking forward to seeing this in KVM.
 
 Regards
   André
 


Re: limit connectivity of a VM

2010-11-20 Thread linux_kvm
 if you had infinitely fast processors, every virtual network would be
 infinitely fast.

I see on a Vyatta VM that an interface's link speed attribute can be
explicitly defined, along with duplex.

Possible values are 10, 100 and 1000 Mb, and they are configured
independently of the driver/model of NIC.

I haven't tested it yet, and since discovering this detail I've been
somewhat disheartened at the thought of ~8 Gb vhost throughput being
throttled because the highest possible link speed setting is 1000 Mb.

So maybe plan B could be to install a test router just for that
function and loop traffic through it.
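
Another thought: if the goal is just to cap a VM's bandwidth, shaping
the host-side tap device with tc might avoid depending on the virtual
NIC's speed setting at all. A rough, untested sketch (tap0 and the rate
are placeholders):

  # Cap traffic on the host-side tap device at roughly 100 Mbit
  tc qdisc add dev tap0 root tbf rate 100mbit burst 32kbit latency 400ms
  # Remove the limit again
  tc qdisc del dev tap0 root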



-C



On Sat, 20 Nov 2010 10:39 -0500, Javier Guerra Giraldez
jav...@guerrag.com wrote:
 On Sat, Nov 20, 2010 at 3:40 AM, Thomas Mueller tho...@chaschperli.ch
 wrote:
  maybe one of the virtual network cards is 10mbit? start kvm with -net
  nic,model=? to get a list.
 
 wouldn't matter.   different models emulate the hardware registers
 used to transmit, not the performance.
 
 if you had infinitely fast processors, every virtual network would be
 infinitely fast.
 
 -- 
 Javier
 


New @ Proxmox: -device, vhost... Docs, notes?

2010-11-13 Thread linux_kvm
Hi Everyone,

I'm impressed with all the activity I see here since joining the list
this year.
It helps to reinforce that I chose the right technology. Thanks.



The -device method and the vhost=on option recently became available to
us at the ProxmoxVE project, and I'm preparing to start making use of
them this coming week.
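
From what I've gathered so far, the invocation looks roughly like this
(a sketch only - the ids, tap name and MAC are made up, and Proxmox
generates its own command line):

  qemu-kvm ... \
    -netdev type=tap,id=net0,ifname=tap0,script=no,vhost=on \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56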

I found some docs on linux-kvm.org, and still have a bit to look through
to see what I can find.

I don't want to state on the record that I will or won't, but it's
crossed my mind to assemble something for our wiki, and if I do decide
to, it would be nice to have some fresh material.
So I'm just writing to see if anyone would like to share any of your
favorite reference material - e.g. links, benchmarks, drawings, diagrams,
etc.


Kind Regards,

-C


Re: [v3 RFC PATCH 0/4] Implement multiqueue virtio-net

2010-10-29 Thread linux_kvm
On Fri, 29 Oct 2010 13:26 +0200, Michael S. Tsirkin m...@redhat.com
wrote:
 On Thu, Oct 28, 2010 at 12:48:57PM +0530, Krishna Kumar2 wrote:
   Krishna Kumar2/India/IBM wrote on 10/28/2010 10:44:14 AM:
 In practice users are very unlikely to pin threads to CPUs.

I may be misunderstanding what you're referring to. It caught my
attention since I'm working on a configuration to do what you say is
unlikely, so I'll chime in for what it's worth.

An option in Vyatta allows assigning CPU affinity to network adapters,
since apparently separate L2 caches can have a significant impact on
throughput.
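
On a plain Linux guest or host the same effect can be approximated by
hand; a rough sketch, where the IRQ number, CPU mask and PID are all
placeholders:

  # Find the interrupt(s) used by the NIC
  grep eth0 /proc/interrupts
  # Pin IRQ 42 to CPU 2 (mask 0x4)
  echo 4 > /proc/irq/42/smp_affinity
  # Pin a vhost/qemu worker thread to the same CPU
  taskset -cp 2 12345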

Although much of their focus seems to be on commercial virtualization
platforms, I do see quite a few forum posts with regard to KVM.
Maybe this still qualifies as an edge case, but as for virtualized
routing, theirs seems to offer the most functionality.

http://www.vyatta.org/forum/viewtopic.php?t=2697

-cb


Re: [Qemu-devel] Hitting 29 NIC limit (+Intel VT-c)

2010-10-28 Thread linux_kvm
On Thu, 14 Oct 2010 14:07 +0200, Avi Kivity a...@redhat.com wrote:
   On 10/14/2010 12:54 AM, Anthony Liguori wrote:
  On 10/13/2010 05:32 PM, Anjali Kulkarni wrote:

 What's the motivation for such a huge number of interfaces?

Ultimately to bring multiple 10Gb bonds into a Vyatta guest.

---

 BTW, I don't think it's possible to hot-add physical functions.  I 
 believe I know of a card that supports dynamic add of physical functions 
 (pre-dating SR-IOV)

I don't know what you're talking about, but it seems you have a better
handle on this VT-c stuff than I do, so, perhaps misguidedly, I'll
direct my next question to you.

Is additional configuration required to make use of SR-IOV and VT-q?
I don't immediately understand how the queueing knows who is who in the
absence of eth.vlan - or whether I need to, for that matter.

My hope is that this is something like plug-and-play as long as the
kernel, host and driver versions are right, but I haven't yet found
documentation to confirm it.
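
In case it helps anyone searching later, my current (unverified)
understanding of the mechanics is: load the PF driver with VFs enabled,
then hand a VF to the guest. A sketch, assuming the igb driver and
made-up PCI addresses:

  # Enable 7 virtual functions on the physical NIC (igb assumed)
  modprobe igb max_vfs=7
  # The VFs should appear as extra PCI functions
  lspci | grep -i ethernet
  # Assign one VF to a guest via qemu-kvm PCI assignment (address is a placeholder)
  qemu-kvm ... -device pci-assign,host=01:10.0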

For the sake of future queries, I've come across these references so
far:

http://download.intel.com/design/network/applnots/321211.pdf
http://www.linux-kvm.org/wiki/images/6/6a/KvmForum2008%24kdf2008_7.pdf
http://www.mail-archive.com/kvm@vger.kernel.org/msg27860.html
http://www.mail-archive.com/kvm@vger.kernel.org/msg22721.html
http://thread.gmane.org/gmane.linux.kernel.mm/38508
http://ark.intel.com/Product.aspx?id=36918


Re: NIC limit

2010-10-07 Thread linux_kvm
 The PCI bus has only 32 slots (devices), 3 taken by chipset + vga, and
 a 4th if you have, for example, a virtio disk.  Are you sure these are
 33 PCI devices and not 33 PCI functions?

No, not sure.
Apparently my statement was based on an uninformed assumption.

I tested using a VM that had 30 (removable-per-web-interface)
attachments, and added 3x IDE HDDs to bring it above what I thought was
32 devices:
28 virtio NICs
 1 IDE CD-ROM
 1 virtio HDD
+3 IDE HDDs

I could add IDE devices up past 32 and it would still start; as soon as
there were more than 28 NICs, with or without the 3 IDE HDDs, start
would fail.
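
For anyone repeating the test, a quick way to tell functions from slots
inside the guest (a generic sketch, nothing Proxmox-specific):

  # Count PCI functions seen by the guest
  lspci | wc -l
  # Count distinct slots (devices), ignoring the function digit
  lspci | awk '{print $1}' | cut -d. -f1 | sort -u | wc -l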


On Wed, 06 Oct 2010 10:18 -0700, Chris Wright chr...@sous-sol.org
wrote:
 * linux_...@proinbox.com (linux_...@proinbox.com) wrote:
  Hi again everybody,
   
  One of the admins at the ProxmoxVE project was gracious enough to
  quickly release a package including the previously discussed change to
  allow up to 32 NICs in qemu.
 
 You mean they patched qemu to increase the MAX_NICS constant?  Nice to
 get the quick turn around.
 
 The better choice is to use a newer command line.  Not only does it avoid
 the MAX_NICS limitation, but it also enables standard virtio-net offload
 accelerations.
 
  For future reference the .deb is here:
  ftp://download.proxmox.com/debian/dists/lenny/pvetest/binary-amd64/pve-qemu-kvm_0.12.5-2_amd64.deb
   
  Upon creating and running the VM with the newly patched qemu-kvm app
  installed, I found a NIC limitation remained in place, presumably
  imposed by some other aspect of the environment.
   
  The machine would start when it had 33 PCI devices, as long as no more
  than 28 of them were NICs.
 
 The PCI bus has only 32 slots (devices), 3 taken by chipset + vga, and
 a 4th if you have, for example, a virtio disk.  Are you sure these are
 33 PCI devices and not 33 PCI functions?
 
 thanks,
 -chris
 


NIC limit

2010-10-06 Thread linux_kvm
Hi again everybody,
 
One of the admins at the ProxmoxVE project was gracious enough to
quickly release a package including the previously discussed change to
allow up to 32 NICs in qemu.
 
For future reference the .deb is here:
ftp://download.proxmox.com/debian/dists/lenny/pvetest/binary-amd64/pve-qemu-kvm_0.12.5-2_amd64.deb
 
Upon creating and running the VM with the newly patched qemu-kvm app
installed, I found a NIC limitation remained in place, presumably
imposed by some other aspect of the environment.
 
The machine would start when it had 33 PCI devices, as long as no more
than 28 of them were NICs.
 
This is still a vast improvement compared to the previous limit of 8
NICs, and is very good news for my project. I post here in hopes that
maybe someone will come across the link in a search and have a solution.
 
More likely, however, the new API will be in place and widely in use by
then, but whatever.
 
Either way, thanks for your help yesterday.


Re: 8 NIC limit - patch - places limit at 32

2010-10-06 Thread linux_kvm
It's 8 otherwise - and after the patch is applied, it still only goes to
28 for some reason.
28 is acceptable for my needs, so I'll step aside from here and leave it
to the experts.

As for the new -device method, that's all fine and good, but AFAIK it's
not implemented on my platform, so this was the answer.
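
For reference, the qdev syntax Anthony describes below would presumably
look something like this (untested; the ids, tap names and PCI addresses
are illustrative, and multifunction=on may or may not be available on a
given qemu-kvm version):

  qemu-kvm ... \
    -netdev tap,id=n0,ifname=tap0,script=no \
    -device virtio-net-pci,netdev=n0,addr=0x10.0x0,multifunction=on \
    -netdev tap,id=n1,ifname=tap1,script=no \
    -device virtio-net-pci,netdev=n1,addr=0x10.0x1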

On Wed, 06 Oct 2010 07:54 -0500, Anthony Liguori
anth...@codemonkey.ws wrote:
 On 10/06/2010 12:46 AM, linux_...@proinbox.com wrote:
  Attached is a patch that allows qemu to have up to 32 NICs, without
  using the qdev -device method.
 
 
 I'd rather there be no fixed limit and we validate that when add fails 
 because there isn't a PCI slot available, we do the right thing.
 
 BTW, using -device, it should be possible to add a very high number of 
 nics because you can specify the PCI address including a function.  If 
 this doesn't Just Work today, we should make it work.
 
 Regards,
 
 Anthony Liguori
 


8 NIC limit

2010-10-05 Thread linux_kvm
Hello list:

I'm working on a project that calls for the creation of a firewall in
KVM.
While adding a 20-interface trunk of virtio adapters to bring in a dual
10GB bond, I've discovered an 8 NIC limit in QEMU.

I found the following thread in the list archives detailing a similar
problem:
http://kerneltrap.org/mailarchive/linux-kvm/2009/1/29/4848304

It includes a patch for the file qemu/net.h to allow 24 NICs:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm;qemu-kvm/+bug/595873/+attachment/1429544/+files/max_nics.patch

In my case I want to attach 29, and have simply changed the value on
line 8 from 24 to 30.

This will be the first patch I've ever had to work with, and so far my
internet searches turn up results that don't seem to apply.

Would someone like to recommend a pertinent tutorial?
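
In case it's useful to anyone else in the same boat, my rough
understanding of the process is along these lines (paths and the version
number are placeholders; I haven't done this yet):

  # Unpack the qemu-kvm source matching the installed package
  cd qemu-kvm-0.12.5
  # Apply the patch; the -p level depends on the paths inside the patch file
  patch -p1 < /path/to/max_nics.patch
  # Rebuild
  ./configure && make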

Many thanks


Fwd: Re: 8 NIC limit

2010-10-05 Thread linux_kvm
Forgot to cc list, forwarding.

 In this case, I think you're going to want to send your patch to the
 qemu-devel (on CC) mailing list (perhaps in addition to sending it
 here, to the kvm list).

Will do, thanks for the pointer.

Before I do so, I'd like to bring up one thing that comes to mind.

I don't know how to make the determination, but it makes sense to me for
the limit defined here to be indicative of an actual limitation, rather
than what seems an arbitrary best guess as to the most someone might
need.

If the change ends up being permanent, then I would hope it would be a
large enough value to provide a degree of extensibility and prevent the
necessity of bumping it up again later when someone else comes along
with even greater bandwidth requirements.

Perhaps someone could provide some guidance as to a sane, higher number,
as opposed to an arbitrary '65000' which would surely prevent this from
happening again (knock on wood).

For the time being I still have to find something to help me learn how
to implement the change locally.
I rarely have to compile, let alone deal with patches, so to me at least
this is a considerable obstacle.

-Thanks

On Tue, 05 Oct 2010 08:24 -0700, Dustin Kirkland
kirkl...@canonical.com wrote:
 On Tue, Oct 5, 2010 at 7:48 AM,  linux_...@proinbox.com wrote:
  Hello list:
 
  I'm working on a project that calls for the creation of a firewall in
  KVM.
  While adding a 20-interface trunk of virtio adapters to bring in a dual
  10GB bond, I've discovered an 8 NIC limit in QEMU.
 
  I found the following thread in the list archives detailing a similar
  problem:
  http://kerneltrap.org/mailarchive/linux-kvm/2009/1/29/4848304
 
  It includes a patch for the file qemu/net.h to allow 24 NICs:
  https://bugs.launchpad.net/ubuntu/+source/qemu-kvm;qemu-kvm/+bug/595873/+attachment/1429544/+files/max_nics.patch
 
  In my case I want to attach 29, and have simply changed line 8 to 30
  from 24.
 
  This will be the first patch I've ever had to do, and so far my internet
  search yields results that don't seem to apply.
 
  Would someone like to recommend a pertinent tutorial?
 
 Hi there,
 
 I commented on the original bug in Launchpad.  We're willing and able
 to carry the patch against qemu-kvm in Ubuntu, I just asked that the
 reporter at least submit the patch upstream for discussion.  I don't
 see where that has happened yet.  It's a trivial patch to submit.
 Please note in that bug a pointer to the mailing list thread, if you
 start one.
 
 To your specific question, different communities have different
 requirements on patch submission, so you do need to consult each
 community.  A good place to start might be the
 Documentation/SubmittingPatches how-to in the kernel tree:
  *
  
 http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob_plain;f=Documentation/SubmittingPatches;hb=HEAD
 
 In this case, I think you're going to want to send your patch to the
 qemu-devel (on CC) mailing list (perhaps in addition to sending it
 here, to the kvm list).
 
 :-Dustin
 



Re: 8 NIC limit

2010-10-05 Thread linux_kvm
 Have you tried creating NICs with -device?

I'm not sure what that is; I'll look into it, thanks.

I'm using ProxmoxVE, and currently add them via a web interface.

Someone happens to host a screenshot of that part here:
http://c-nergy.be/blog/wp-content/uploads/Proxmox_Net2.png

On Tue, 05 Oct 2010 17:57 +0200, Markus Armbruster arm...@redhat.com
wrote:
 linux_...@proinbox.com writes:
 
  Hello list:
 
  I'm working on a project that calls for the creation of a firewall in
  KVM.
  While adding a 20-interface trunk of virtio adapters to bring in a dual
  10GB bond, I've discovered an 8 NIC limit in QEMU.
 
 Have you tried creating NICs with -device?  The limit shouldn't apply
 there.
 


Re: 8 NIC limit - patch - places limit at 32

2010-10-05 Thread linux_kvm
Attached is a patch that allows qemu to have up to 32 NICs, without
using the qdev -device method.


max_nics.patch
Description: Binary data