On (Thu) Dec 03 2009 [09:13:25], Amit Shah wrote:
> On (Thu) Dec 03 2009 [09:24:23], Rusty Russell wrote:
> > On Wed, 2 Dec 2009 07:54:06 pm Amit Shah wrote:
> > > On (Wed) Dec 02 2009 [14:14:20], Rusty Russell wrote:
> > > > On Sat, 28 Nov 2009 05:20:35 pm Amit Shah wrote:
> > > > > The console could be flooded with data from the host; handle
> > > > > this situation by buffering the data.
> > > > 
> > > > All this complexity makes me really wonder if we should just
> > > > have the host say the max # ports it will ever use, and just do this
> > > > really dumbly.  Yes, it's a limitation, but it'd be much simpler.
> > > 
> > > As in make sure the max nr ports is less than 255 and have per-port vqs?
> > > And then the buffering will be done inside the vqs themselves?
> > 
> > Well < 128 (two vqs per port).  The config would say (with a feature bit)
> > how many vq pairs there are.
> 
> Sure. This was how the previous versions behaved as well.

I forgot one detail:

http://www.mail-archive.com/virtualization@lists.linux-foundation.org/msg06079.html

Some API changes are needed to pre-declare the number of vqs and then
selectively enable them as ports get added.
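
Something like this, say (completely hypothetical; neither hook exists in
virtio today, and the names and signatures are made up):

#include <linux/virtio.h>
#include <linux/virtio_config.h>

/* Proposed new transport hooks (made-up names); these would be
 * implemented by virtio_pci on top of its MSI-X setup. */
int virtio_enable_vq(struct virtqueue *vq);
void virtio_disable_vq(struct virtqueue *vq);

/* Hypothetical hot-add path in the console driver: the vq pair was
 * pre-declared at probe time, now switch it on for the new port. */
static int virtcons_port_added(struct virtqueue *in_vq,
                               struct virtqueue *out_vq)
{
        int err;

        err = virtio_enable_vq(in_vq);
        if (err)
                return err;

        err = virtio_enable_vq(out_vq);
        if (err)
                virtio_disable_vq(in_vq);

        return err;
}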

How I think this could work is:

<device drv>

probe:
- get max_nr_ports from config_get
- declare the intent to use max_nr_ports * 2 vqs, with callbacks
  associated with half of them (so the x86 MSI limit still lets us have
  512 vqs with 256 callbacks); a rough sketch of this probe flow follows
  below

<virtio_pci>:
- request_msix_vectors for all max_nr_ports vectors

- new functions to enable / disable vqs
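
An untested sketch of how that probe flow could look (not the actual
patch; the max_nr_ports config field, the vq names and
virtcons_input_intr() are made up here, only config->get() and
find_vqs() are existing API):

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <linux/virtio_console.h>

/* Hypothetical host-to-guest interrupt handler; real handling omitted. */
static void virtcons_input_intr(struct virtqueue *vq)
{
}

static int virtcons_setup_vqs(struct virtio_device *vdev)
{
        struct virtqueue **vqs;
        vq_callback_t **callbacks;
        const char **names;
        u32 max_nr_ports, nr_queues, i;
        int err = -ENOMEM;

        /* Ask the host for the max number of ports it will ever use. */
        vdev->config->get(vdev,
                          offsetof(struct virtio_console_config, max_nr_ports),
                          &max_nr_ports, sizeof(max_nr_ports));

        /* One in_vq + one out_vq per port. */
        nr_queues = max_nr_ports * 2;

        vqs = kcalloc(nr_queues, sizeof(*vqs), GFP_KERNEL);
        callbacks = kcalloc(nr_queues, sizeof(*callbacks), GFP_KERNEL);
        names = kcalloc(nr_queues, sizeof(*names), GFP_KERNEL);
        if (!vqs || !callbacks || !names)
                goto out;

        for (i = 0; i < max_nr_ports; i++) {
                /* Callbacks only on the input (host-to-guest) vqs, so
                 * only half of the 2 * max_nr_ports vqs need vectors. */
                callbacks[i * 2] = virtcons_input_intr;
                names[i * 2] = "input";
                callbacks[i * 2 + 1] = NULL;
                names[i * 2 + 1] = "output";
        }

        /* Pre-declare all the vqs up front; the transport sets up MSI-X
         * here for the vqs that have callbacks. */
        err = vdev->config->find_vqs(vdev, nr_queues, vqs, callbacks, names);
        if (err)
                goto out;

        /* A real driver would now hand vqs[i * 2] / vqs[i * 2 + 1] to
         * port i's data structures before freeing the temporary arrays. */
out:
        kfree(names);
        kfree(callbacks);
        kfree(vqs);
        return err;
}

With the out_vq callbacks left NULL, virtio_pci should only have to
request vectors for the input vqs when it does the MSI-X setup behind
find_vqs.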

                Amit