Hi guys,

> -----Original Message-----
> From: snabb-de...@googlegroups.com [mailto:snabb-de...@googlegroups.com] On Behalf Of Michael S. Tsirkin
> Sent: Monday, April 6, 2015 11:07 PM
> To: Nikolay Nikolaev
> Cc: Long, Thomas; snabb-de...@googlegroups.com; ebl...@redhat.com;
> qemu-devel@nongnu.org; t...@virtualopensystems.com
> Subject: [snabb-devel] Re: [PATCH v2] vhost-user: add multi queue support
> 
> On Sat, Jan 24, 2015 at 02:22:29PM +0200, Nikolay Nikolaev wrote:
> > Vhost-user will implement the multiqueueu support in a similar way to
> > what
> 
> multiqueue
> 
> > vhost already has - a separate thread for each queue.
> >
> > To enable the multiqueue functionality - a new command line parameter
> > "queues" is introduced for the vhost-user netdev.
> >
> > Changes since v1:
> >  - use s->nc.info_str when bringing up/down the backend
> >
> > Signed-off-by: Nikolay Nikolaev <n.nikol...@virtualopensystems.com>
> > ---
> >  docs/specs/vhost-user.txt |    5 +++++
> >  hw/virtio/vhost-user.c    |    6 +++++-
> >  net/vhost-user.c          |   39 +++++++++++++++++++++++++--------------
> >  qapi-schema.json          |    6 +++++-
> >  qemu-options.hx           |    5 +++--
> >  5 files changed, 43 insertions(+), 18 deletions(-)
> >
> > diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
> > index 650bb18..d7b208c 100644
> > --- a/docs/specs/vhost-user.txt
> > +++ b/docs/specs/vhost-user.txt
> 
> I've been thinking that the protocol might be a useful addition to the virtio
> spec. For this, as a minimum you would have to submit this document as a
> comment to virtio TC with a proposal to include it in the virtio spec.
> See https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=virtio
> 
> Can you do this?
> 
> We can take it from there, though I would encourage your company to join
> as a contributor.
> 
> 
> > @@ -127,6 +127,11 @@ in the ancillary data:
> >  If Master is unable to send the full message or receives a wrong reply it
> >  will close the connection. An optional reconnection mechanism can be
> >  implemented.
> >
> > +Multi queue support
> > +---------------------
> > +The protocol supports multiple queues by setting all index fields in
> > +the sent messages to a properly calculated value.
> > +
> 
> Something that's not clear from this document is what happens with control
> VQ.
> Can you clarify please?
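
My understanding of the index calculation, for what it's worth (a
sketch based on my reading of this patch, not normative): each queue
pair gets its own vhost_dev with nvqs == 2, and every index field sent
over the socket is the ring index within that device (0 = rx, 1 = tx)
plus the device's vq_index, i.e. the two lines below from this patch:

    msg.state.index += dev->vq_index;    /* e.g. VHOST_SET_VRING_NUM/BASE */
    msg.u64 = (file->index + dev->vq_index) & VHOST_USER_VRING_IDX_MASK;
                                         /* VHOST_SET_VRING_KICK/CALL/ERR */

so the backend sees a flat ring numbering across all queue pairs. As
far as I can tell the control vq is not affected: virtio-net handles
ctrl in QEMU itself and it is never passed to the vhost-user backend.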
> 
> 
> >  Message types
> >  -------------
> >
> > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > index aefe0bb..83ebcaa 100644
> > --- a/hw/virtio/vhost-user.c
> > +++ b/hw/virtio/vhost-user.c
> > @@ -253,17 +253,20 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
> >      case VHOST_SET_VRING_NUM:
> >      case VHOST_SET_VRING_BASE:
> >          memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
> > +        msg.state.index += dev->vq_index;
> >          msg.size = sizeof(m.state);
> >          break;
> >
> >      case VHOST_GET_VRING_BASE:
> >          memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
> > +        msg.state.index += dev->vq_index;
> >          msg.size = sizeof(m.state);
> >          need_reply = 1;
> >          break;
> >
> >      case VHOST_SET_VRING_ADDR:
> >          memcpy(&msg.addr, arg, sizeof(struct vhost_vring_addr));
> > +        msg.addr.index += dev->vq_index;
> >          msg.size = sizeof(m.addr);
> >          break;
> >
> > @@ -271,7 +274,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
> >      case VHOST_SET_VRING_CALL:
> >      case VHOST_SET_VRING_ERR:
> >          file = arg;
> > -        msg.u64 = file->index & VHOST_USER_VRING_IDX_MASK;
> > -        msg.u64 = file->index & VHOST_USER_VRING_IDX_MASK;
> > +        msg.u64 = (file->index + dev->vq_index) & VHOST_USER_VRING_IDX_MASK;

I've identified one vq_index issue here, in the VHOST_SET_VRING_CALL
case: vq_index is not initialized before it is used, so it can hold a
random value. This leads to an error in vhost, because that random
value is passed down and vhost uses it to set the vring call.
 
I have a quick fix for this; the code changes are as follows:
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 4e3a061..2fbdb93 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -157,6 +157,7 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)

     net->dev.nvqs = 2;
     net->dev.vqs = net->vqs;
+    net->dev.vq_index = net->nc->queue_index;

     r = vhost_dev_init(&net->dev, options->opaque,
                        options->backend_type, options->force);
diff --git a/net/vhost-user.c b/net/vhost-user.c
index a0b4af2..b27190f 100644
--- a/net/vhost-user.c
+++ b/net/vhost-user.c
@@ -152,6 +152,7 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
         s->nc.receive_disabled = 1;
         s->chr = chr;
         s->vhostforce = vhostforce;
+        s->nc.queue_index = i;

         qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
     }
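
(The idea being that each queue's vhost_dev takes its ring offset from
the net client's queue_index, so the index fields that vhost_user_call
adds dev->vq_index to no longer contain garbage.)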

Would you guys have a look at this issue?
I also hit other issues when trying to run with 2 virtio ports, each
with 2 queues, but I have not root-caused them yet and need to
investigate further, so I suspect there are more issues in QEMU's
vhost-user multiqueue support.
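
For reference, this is roughly how I start QEMU for the 2 ports x 2
queues case (a sketch from my setup; socket paths and ids are mine,
and vhost-user needs share=on memory such as a hugepage backend):

    qemu-system-x86_64 -m 1024 -enable-kvm \
        -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem0 \
        -chardev socket,id=chr0,path=/tmp/vhost-user0.sock \
        -netdev type=vhost-user,id=net0,chardev=chr0,queues=2 \
        -device virtio-net-pci,netdev=net0,mq=on,vectors=6 \
        -chardev socket,id=chr1,path=/tmp/vhost-user1.sock \
        -netdev type=vhost-user,id=net1,chardev=chr1,queues=2 \
        -device virtio-net-pci,netdev=net1,mq=on,vectors=6

(vectors is 2 * queues + 2, to cover each ring plus config and ctrl.)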

> >          msg.size = sizeof(m.u64);
> >          if (ioeventfd_enabled() && file->fd > 0) {
> >              fds[fd_num++] = file->fd;
> > @@ -313,6 +316,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
> >                  error_report("Received bad msg size.\n");
> >                  return -1;
> >              }
> > +            msg.state.index -= dev->vq_index;
> >              memcpy(arg, &msg.state, sizeof(struct vhost_vring_state));
> >              break;
> >          default:
> > diff --git a/net/vhost-user.c b/net/vhost-user.c
> > index 24e050c..a0b4af2 100644
> > --- a/net/vhost-user.c
> > +++ b/net/vhost-user.c
> > @@ -122,37 +122,39 @@ static void net_vhost_user_event(void *opaque, int event)
> >      case CHR_EVENT_OPENED:
> >          vhost_user_start(s);
> >          net_vhost_link_down(s, false);
> > -        error_report("chardev \"%s\" went up\n", s->chr->label);
> > +        error_report("chardev \"%s\" went up\n", s->nc.info_str);
> >          break;
> >      case CHR_EVENT_CLOSED:
> >          net_vhost_link_down(s, true);
> >          vhost_user_stop(s);
> > -        error_report("chardev \"%s\" went down\n", s->chr->label);
> > +        error_report("chardev \"%s\" went down\n", s->nc.info_str);
> >          break;
> >      }
> >  }
> >
> >  static int net_vhost_user_init(NetClientState *peer, const char *device,
> >                                 const char *name, CharDriverState *chr,
> > -                               bool vhostforce)
> > +                               bool vhostforce, uint32_t queues)
> >  {
> >      NetClientState *nc;
> >      VhostUserState *s;
> > +    int i;
> >
> > -    nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> > +    for (i = 0; i < queues; i++) {
> > +        nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> >
> > -    snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
> > -             chr->label);
> > +        snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
> > +                 i, chr->label);
> >
> > -    s = DO_UPCAST(VhostUserState, nc, nc);
> > +        s = DO_UPCAST(VhostUserState, nc, nc);
> >
> > -    /* We don't provide a receive callback */
> > -    s->nc.receive_disabled = 1;
> > -    s->chr = chr;
> > -    s->vhostforce = vhostforce;
> > -
> > -    qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> > +        /* We don't provide a receive callback */
> > +        s->nc.receive_disabled = 1;
> > +        s->chr = chr;
> > +        s->vhostforce = vhostforce;
> >
> > +        qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> > +    }
> >      return 0;
> >  }
> >
> > @@ -228,6 +230,7 @@ static int net_vhost_check_net(QemuOpts *opts, void *opaque)
> >  int net_init_vhost_user(const NetClientOptions *opts, const char *name,
> >                          NetClientState *peer)
> >  {
> > +    uint32_t queues;
> >      const NetdevVhostUserOptions *vhost_user_opts;
> >      CharDriverState *chr;
> >      bool vhostforce;
> > @@ -254,5 +257,13 @@ int net_init_vhost_user(const NetClientOptions *opts, const char *name,
> >          vhostforce = false;
> >      }
> >
> > -    return net_vhost_user_init(peer, "vhost_user", name, chr, vhostforce);
> > +    /* number of queues for multiqueue */
> > +    if (vhost_user_opts->has_queues) {
> > +        queues = vhost_user_opts->queues;
> > +    } else {
> > +        queues = 1;
> > +    }
> > +
> > +    return net_vhost_user_init(peer, "vhost_user", name, chr, vhostforce,
> > +                               queues);
> >  }
> > diff --git a/qapi-schema.json b/qapi-schema.json
> > index e16f8eb..c2cead0 100644
> > --- a/qapi-schema.json
> > +++ b/qapi-schema.json
> > @@ -2287,12 +2287,16 @@
> >  #
> >  # @vhostforce: #optional vhost on for non-MSIX virtio guests (default: false).
> >  #
> > +# @queues: #optional number of queues to be created for multiqueue vhost-user
> > +#          (Since 2.3)
> > +#
> >  # Since 2.1
> >  ##
> >  { 'type': 'NetdevVhostUserOptions',
> >    'data': {
> >      'chardev':        'str',
> > -    '*vhostforce':    'bool' } }
> > +    '*vhostforce':    'bool',
> > +    '*queues':        'uint32' } }
> >
> >  ##
> >  # @NetClientOptions
> > diff --git a/qemu-options.hx b/qemu-options.hx
> > index 85ca3ad..b5fa61f 100644
> > --- a/qemu-options.hx
> > +++ b/qemu-options.hx
> > @@ -1894,13 +1894,14 @@ The hubport netdev lets you connect a NIC to a QEMU "vlan" instead of a single
> >  netdev.  @code{-net} and @code{-device} with parameter @option{vlan} create the
> >  required hub automatically.
> >
> > -@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off]
> > +@item -netdev vhost-user,chardev=@var{id}[,vhostforce=on|off][,queues=n]
> >
> >  Establish a vhost-user netdev, backed by a chardev @var{id}. The chardev should
> >  be a unix domain socket backed one. The vhost-user uses a specifically defined
> >  protocol to pass vhost ioctl replacement messages to an application on the other
> >  end of the socket. On non-MSIX guests, the feature can be forced with
> > -@var{vhostforce}.
> > +@var{vhostforce}. Use 'queues=@var{n}' to specify the number of queues to be
> > +created for multiqueue vhost-user.
> >
> >  Example:
> >  @example
