On Fri, Sep 09, 2022 at 10:01:16AM +0200, Eugenio Perez Martin wrote:
> On Fri, Sep 9, 2022 at 8:40 AM Jason Wang <jasow...@redhat.com> wrote:
> >
> > On Fri, Sep 9, 2022 at 2:38 PM Jason Wang <jasow...@redhat.com> wrote:
> > >
> > > On Wed, Sep 7, 2022 at 12:36 AM Eugenio Pérez <epere...@redhat.com> wrote:
> > > >
> > > > VLANs may already be enabled at device startup at the destination
> > > > of a live migration, so this configuration must be restored.
> > > >
> > > > At this moment the code is not reachable, since SVQ refuses to
> > > > start if the vlan feature is exposed by the device.
> > > >
> > > > Signed-off-by: Eugenio Pérez <epere...@redhat.com>
> > > > ---
> > > >  net/vhost-vdpa.c | 46 ++++++++++++++++++++++++++++++++++++++++++++--
> > > >  1 file changed, 44 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > index 4bc3fd01a8..ecbfd08eb9 100644
> > > > --- a/net/vhost-vdpa.c
> > > > +++ b/net/vhost-vdpa.c
> > > > @@ -100,6 +100,8 @@ static const uint64_t vdpa_svq_device_features =
> > > >      BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
> > > >      BIT_ULL(VIRTIO_NET_F_STANDBY);
> > > >
> > > > +#define MAX_VLAN    (1 << 12)  /* Per 802.1Q definition */
> > > > +
> > > >  VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
> > > >  {
> > > >      VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > > @@ -423,6 +425,47 @@ static int vhost_vdpa_net_load_mq(VhostVDPAState *s,
> > > >      return *s->status != VIRTIO_NET_OK;
> > > >  }
> > > >
> > > > +static int vhost_vdpa_net_load_single_vlan(VhostVDPAState *s,
> > > > +                                           const VirtIONet *n,
> > > > +                                           uint16_t vid)
> > > > +{
> > > > +    ssize_t dev_written = vhost_vdpa_net_load_cmd(s, VIRTIO_NET_CTRL_VLAN,
> > > > +                                                  VIRTIO_NET_CTRL_VLAN_ADD,
> > > > +                                                  &vid, sizeof(vid));
> > > > +    if (unlikely(dev_written < 0)) {
> > > > +        return dev_written;
> > > > +    }
> > > > +
> > > > +    if (unlikely(*s->status != VIRTIO_NET_OK)) {
> > > > +        return -EINVAL;
> > > > +    }
> > > > +
> > > > +    return 0;
> > > > +}
> > > > +
> > > > +static int vhost_vdpa_net_load_vlan(VhostVDPAState *s,
> > > > +                                    const VirtIONet *n)
> > > > +{
> > > > +    uint64_t features = n->parent_obj.guest_features;
> > > > +
> > > > +    if (!(features & BIT_ULL(VIRTIO_NET_F_CTRL_VLAN))) {
> > > > +        return 0;
> > > > +    }
> > > > +
> > > > +    for (int i = 0; i < MAX_VLAN >> 5; i++) {
> > > > +        for (int j = 0; n->vlans[i] && j <= 0x1f; j++) {
> > > > +            if (n->vlans[i] & (1U << j)) {
> > > > +                int r = vhost_vdpa_net_load_single_vlan(s, n, (i << 5) + j);
> > >
> > > This seems to cause a lot of latency if the driver has a lot of vlans.
> > >
> > > I wonder if it's simpler to just let all vlan traffic through by
> > > disabling the CTRL_VLAN feature at the vDPA layer.
> >
> > The guest will not be able to recover that vlan configuration.
> >
> > Another idea is to extend the spec to allow us to accept a bitmap of
> > the vlan ids via a single command, then we will be fine.
>
> I'm not sure if adding more ways to configure something is the answer,
> but I'd be ok to implement it.
>
> Another idea is to allow the sending of many CVQ commands in parallel.
> It shouldn't be very hard to achieve using the exposed buffers as ring
> buffers, and it would shorten the startup of devices with many
> features.
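If I'm reading the proposal right, it amounts to something like the
sketch below. To be clear, this is a rough sketch only:
vhost_vdpa_net_cvq_enqueue() and vhost_vdpa_net_cvq_flush() are
hypothetical helpers, nothing in the tree today. The first would format
a command into the next free slot of the exposed buffer and add it to
the shadow vring without kicking or polling; the second would kick once
and then poll until all queued commands have been used, checking each
status:

    static int vhost_vdpa_net_load_vlan_batched(VhostVDPAState *s,
                                                const VirtIONet *n)
    {
        size_t queued = 0;

        for (int i = 0; i < MAX_VLAN >> 5; i++) {
            for (int j = 0; n->vlans[i] && j <= 0x1f; j++) {
                if (n->vlans[i] & (1U << j)) {
                    uint16_t vid = (i << 5) + j;
                    /* hypothetical: queue the command, no kick, no poll */
                    int r = vhost_vdpa_net_cvq_enqueue(s,
                                                   VIRTIO_NET_CTRL_VLAN,
                                                   VIRTIO_NET_CTRL_VLAN_ADD,
                                                   &vid, sizeof(vid));
                    if (unlikely(r < 0)) {
                        return r;
                    }
                    queued++;
                }
            }
        }

        /* hypothetical: one kick, one wait for everything queued */
        return vhost_vdpa_net_cvq_flush(s, queued);
    }

A real implementation would also need to flush whenever the exposed
buffer runs out of slots, so the batch size is bounded by how many
commands fit in the CVQ buffer.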
This would seem like a reasonable second step. A first step would be to
measure the overhead, to make sure there's actually a need to optimize
this.

> Thanks!
>
> > Thanks
> >
> > >
> > > Thanks
> > >
> > > > +                if (unlikely(r != 0)) {
> > > > +                    return r;
> > > > +                }
> > > > +            }
> > > > +        }
> > > > +    }
> > > > +
> > > > +    return 0;
> > > > +}
> > > > +
> > > >  static int vhost_vdpa_net_load(NetClientState *nc)
> > > >  {
> > > >      VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > > @@ -445,8 +488,7 @@ static int vhost_vdpa_net_load(NetClientState *nc)
> > > >      if (unlikely(r)) {
> > > >          return r;
> > > >      }
> > > > -
> > > > -    return 0;
> > > > +    return vhost_vdpa_net_load_vlan(s, n);
> > > >  }
> > > >
> > > >  static NetClientInfo net_vhost_vdpa_cvq_info = {
> > > > --
> > > > 2.31.1
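By the way, measuring here is cheap. An untested fragment along these
lines, wrapped around the vlan call in vhost_vdpa_net_load(), would
already tell us how bad the worst case is (a full bitmap means on the
order of 4k VIRTIO_NET_CTRL_VLAN_ADD commands, each with its own kick
and poll). g_get_monotonic_time() is GLib's monotonic clock in
microseconds:

    gint64 start = g_get_monotonic_time();

    r = vhost_vdpa_net_load_vlan(s, n);

    qemu_log("vlan restore: %" PRId64 " us (r=%d)\n",
             g_get_monotonic_time() - start, r);

If that number stays small even with a full vlan bitmap, none of the
above is worth the complexity.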