> From: Jason Wang <jasow...@redhat.com>
> Sent: Monday, July 5, 2021 10:05 AM
> 
> On 2021/7/2 2:04 PM, Parav Pandit wrote:
> >
> >> From: Jason Wang <jasow...@redhat.com>
> >> Sent: Thursday, July 1, 2021 1:13 PM
> >>
> >>
> >> Actually it depends on what attributes are required for building the config.
> >>
> >> We can simply reuse the existing virtio_net_config, if most of the
> >> fields are required.
> >>
> >> struct virtio_net_config_set {
> >>           __virtio64 features;
> >>           union {
> >>               struct virtio_net_config cfg;
> >>               __virtio64 reserved[64];
> >>           };
> >> };
> >>
> >> If only a few of the fields are required, we can just pick them and use
> >> another structure.
> > The point is that we define the structure based on the current fields. Tomorrow a
> > new RSS or rx scaling scheme appears, and the structure size might need to change.
> > And that forces us back to length-based typecasting code.
> > And to avoid some length checks we pick some arbitrarily sized reserved words.
> > And I do not know what new RSS algorithm a network research group will come up
> > with, or what plumbing it will need.
> 
> 
> Yes, but as discussed, we may suffer a similar issue at the device level. E.g.
> we need a command to let the PF "build" the config for a VF or SF.
I am not sure.
The current scope of vDPA is: once there is a PF, VF, or SF, you configure or
create a vdpa device out of it.

> > Given the device config is not spelled out in the virtio spec, maybe we can
> > wait for it to define the virtio management interface.
> 
> Yes.
Waiting is needed only if we want to cast the user->kernel UAPI into a structure
that is bound to evolve.
Hence I just want to exchange individual fields, e.g. as sketched below.
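A rough sketch of what I mean (the attribute names below are illustrative only,
not a proposed uAPI): each config field becomes its own netlink attribute.

/* Hypothetical per-field netlink attributes for device config.
 * A future RSS scheme only appends new attributes here; no UAPI
 * struct layout or length check needs to change. */
enum {
        VDPA_ATTR_EXAMPLE_NET_CFG_MACADDR,      /* NLA_BINARY, 6 bytes */
        VDPA_ATTR_EXAMPLE_NET_CFG_MTU,          /* NLA_U16 */
        VDPA_ATTR_EXAMPLE_NET_CFG_MAX_VQ_PAIRS, /* NLA_U16 */
        VDPA_ATTR_EXAMPLE_NET_CFG_MAX,
};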

> >> It's not arbitrary, but has a fixed length.
> > It's fixed, but chosen arbitrarily large in anticipation that we will likely
> > need to grow.
> > And sometimes that falls short when the next research comes up with more
> > creative ideas.
> 
> 
> How about something like TLVs in the virtio spec then?
Possibly yes.
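For example, a TLV carrying one config field could look roughly like this
(purely illustrative; the layout and names are my assumption, not from any
spec draft):

/* Illustrative TLV for one config field; 'type' values would be
 * assigned by the spec.  Unknown types can be skipped using 'len',
 * so new fields never change the framing. */
struct virtio_cfg_tlv_example {
        __le16 type;    /* which field: MAC, MTU, RSS key, ... */
        __le16 len;     /* length of value[] in bytes */
        u8 value[];     /* field payload */
};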
> 
> 
> >
> >> It may only work for netlink (with some duplication with the existing
> >> virtio uAPI). If we can solve it at the general virtio layer, it would be
> >> better. Otherwise we need to invent them again in the virtio spec.
> >>
> > The virtio spec will likely define which config fields to program and their
> > layout.
> > The kernel can always fill in the format that the virtio spec demands.
> 
> 
> Yes, I wonder if you are interested in working on the spec to support this.
> 
I am happy to contribute; I need to ask my supervisor about spending some time
on this area.
Let me figure out the logistics.

> 
> >
> >> I think even for the current mlx5e vDPA it would be better, otherwise we
> >> may have:
> >>
> >> vDPA tool -> [netlink specific vDPA attributes(1)] -> vDPA core -> [vDPA
> >> core specific VDPA attributes(2)] -> mlx5e_vDPA -> [mlx5e specific vDPA
> >> attributes(3)] -> mlx5e_core
> >>
> >> We need to use a single and unified virtio structure in all the (1), (2)
> >> and (3).
> > This is where I differ.
> > It's only vdpa tool -> vdpa core -> vendor_driver.
> >
> > vdpa tool -> vdpa core = netlink attributes
> > vdpa core -> vendor driver = struct_foo (internal to the Linux kernel)
> >
> > If tomorrow the virtio spec defines struct_foo to be something else, the kernel can
> > always upgrade to struct_bar without changing the UAPI netlink attributes.
> 
> 
> That's fine. Note that we actually have an extra level if the vendor_driver is
> the virtio-pci vDPA driver (vp_vdpa).
> 
> Then we have
> 
> vdpa tool -> vdpa core -> vp_vdpa -> virtio-pci device
> 
> So we still need to invent commands to configure/build the VF/SF config space
> between vp_vdpa and the virtio-pci device.
Yes. This is needed, but again let's keep the two layers separate.
In the example I provided, we will be able to fill the structure and pass it
internally from vp_vdpa to the virtio-pci driver.
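For illustration, the internal (non-uAPI) structure handed from the vdpa core
to a vendor driver such as vp_vdpa could look like the sketch below. The name
and members are my assumption, not existing kernel code; being kernel-internal,
it can change freely without touching the netlink uAPI.

/* Kernel-internal only: built by the vdpa core from netlink attributes
 * and passed to the vendor driver (vp_vdpa, mlx5_vdpa, ...). */
struct vdpa_net_cfg_example {
        u8  mac[ETH_ALEN];              /* ETH_ALEN from <linux/if_ether.h> */
        u16 mtu;
        u16 max_vq_pairs;
        u32 provided_fields;            /* which members userspace actually set */
};
#define VDPA_NET_CFG_EXAMPLE_MAC        BIT(0)
#define VDPA_NET_CFG_EXAMPLE_MTU        BIT(1)
#define VDPA_NET_CFG_EXAMPLE_MAX_VQP    BIT(2)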


> And I think we may suffer a similar issue to the one we met here (vdpa tool ->
> vdpa core).
> 
> 
> > New netlink attributes will be needed only when struct_foo gains new fields.
> > This remains forward/backward compatible.
> >
> > An exact example of this is vxlan_nl2conf() in drivers/net/vxlan.c.
> > A vxlan device needs a VNI, src ip, dst ip, tos, and more.
> > Instead of putting them all in a single vxlan_config structure as UAPI, those
> > optional fields are netlink attributes.
> > And the vxlan driver internally fills in the config structure.
> >
> > I am very much convinced by the above vxlan approach; it enables all the
> > functionality needed without typecasting code and without defining
> > arbitrary-length structs.
> 
> 
> Right, but we have some small differences here:
> 
> 1) vxlan doesn't have an existing uAPI
> 2) the vxlan configuration is not used for hardware
> 
True, but the vxlan example doesn't prevent doing #2.
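A minimal sketch of the same pattern applied to vdpa, reusing the hypothetical
attribute names and struct from the sketches above (this is not existing kernel
code):

/* In the spirit of vxlan_nl2conf(): copy only the attributes that
 * userspace supplied into the kernel-internal config. */
static void vdpa_nl2cfg_example(struct nlattr **tb,
                                struct vdpa_net_cfg_example *cfg)
{
        if (tb[VDPA_ATTR_EXAMPLE_NET_CFG_MACADDR]) {
                nla_memcpy(cfg->mac, tb[VDPA_ATTR_EXAMPLE_NET_CFG_MACADDR],
                           ETH_ALEN);
                cfg->provided_fields |= VDPA_NET_CFG_EXAMPLE_MAC;
        }
        if (tb[VDPA_ATTR_EXAMPLE_NET_CFG_MTU]) {
                cfg->mtu = nla_get_u16(tb[VDPA_ATTR_EXAMPLE_NET_CFG_MTU]);
                cfg->provided_fields |= VDPA_NET_CFG_EXAMPLE_MTU;
        }
        /* a new field is just one more 'if (tb[...])' block */
}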

> Basically, I'm not against this approach; I just wonder if it's
> better/simpler to solve it at the virtio layer, because the semantics are
> defined by the spec, not by netlink.

The vdpa core will be able to use the virtio-spec-defined config whenever it appears.