Re: [PATCH 2/6] virtio: split: alloc indirect desc with extra

2022-01-09 Thread Xuan Zhuo
On Mon, 10 Jan 2022 15:41:27 +0800, Jason Wang  wrote:
> On Mon, Jan 10, 2022 at 3:24 PM Xuan Zhuo  wrote:
> >
> > On Mon, 10 Jan 2022 14:43:39 +0800, Jason Wang  wrote:
> > >
> > > On 2022/1/7 2:33 PM, Xuan Zhuo wrote:
> > > > In the scenario where indirect is not used, each desc corresponds to an
> > > > extra, which is used to record information such as dma, flags, and
> > > > next.
> > > >
> > > > In the scenario of using indirect, the assigned desc does not have the
> > > > corresponding extra record dma information, and the dma information must
> > > > be obtained from the desc when unmap.
> > > >
> > > > This patch allocates the corresponding extra array when indirect desc is
> > > > allocated. This has these advantages:
> > > > 1. Record the dma information of desc, no need to read desc when unmap
> > > > 2. It will be more convenient and unified in processing
> > > > 3. Some additional information can be recorded in extra, which will be
> > > > used in subsequent patches.
> > >
> > >
> > > Two questions:
> > >
> > > 1) Is there any performance number for this change? I guess it gives
> > > more stress on the cache.
> >
> > I will add performance test data in the next version.
> >
> > > 2) Is there a requirement to mix the pre-mapped sg with unmapped sg? If
> > > not, a per-virtqueue flag looks sufficient
> >
> > There is such a requirement. For example, in the AF_XDP case, a packet
> > contains two parts: one is the virtio_net_hdr, and the other is the actual
> > data packet from AF_XDP. The former is an unmapped sg, and the latter is a
> > pre-mapped sg.
>
> Any chance to map virtio_net_hdr() manually by AF_XDP routine in this case?

Well, it is indeed possible to do so. In the indirect scenario, we can record it
in vring->split.desc_extra[head].flags.

Then we would have to agree that there can be no mixed (pre-mapped and unmapped)
situation.

Thanks.

>
> Thanks
>
> >
> > Thanks.
> >
> > >
> > > Thanks
> > >
> > >
> >
>
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

Re: [PATCH 2/6] virtio: split: alloc indirect desc with extra

2022-01-09 Thread Jason Wang
On Mon, Jan 10, 2022 at 3:24 PM Xuan Zhuo  wrote:
>
> On Mon, 10 Jan 2022 14:43:39 +0800, Jason Wang  wrote:
> >
> > On 2022/1/7 2:33 PM, Xuan Zhuo wrote:
> > > In the scenario where indirect is not used, each desc corresponds to an
> > > extra, which is used to record information such as dma, flags, and
> > > next.
> > >
> > > In the scenario of using indirect, the assigned desc does not have the
> > > corresponding extra record dma information, and the dma information must
> > > be obtained from the desc when unmap.
> > >
> > > This patch allocates the corresponding extra array when indirect desc is
> > > allocated. This has these advantages:
> > > 1. Record the dma information of desc, no need to read desc when unmap
> > > 2. It will be more convenient and unified in processing
> > > 3. Some additional information can be recorded in extra, which will be
> > > used in subsequent patches.
> >
> >
> > Two questions:
> >
> > 1) Is there any performance number for this change? I guess it gives
> > more stress on the cache.
>
> I will add performance test data in the next version.
>
> > 2) Is there a requirement to mix the pre-mapped sg with unmapped sg? If
> > not, a per-virtqueue flag looks sufficient
>
> There is such a requirement. For example, in the AF_XDP case, a packet
> contains two parts: one is the virtio_net_hdr, and the other is the actual
> data packet from AF_XDP. The former is an unmapped sg, and the latter is a
> pre-mapped sg.

Any chance to map virtio_net_hdr() manually by AF_XDP routine in this case?

Thanks

>
> Thanks.
>
> >
> > Thanks
> >
> >
>


Re: [PATCH 6/6] virtio: add api virtio_dma_map() for advance dma

2022-01-09 Thread Xuan Zhuo
On Mon, 10 Jan 2022 02:12:23 -0500, Michael S. Tsirkin  wrote:
> On Fri, Jan 07, 2022 at 02:33:06PM +0800, Xuan Zhuo wrote:
> > Added virtio_dma_map() to map DMA addresses for virtual memory in
> > advance. The purpose of adding this function is to check
> > vring_use_dma_api() for virtio dma operation and get vdev->dev.parent as
> > the parameter of dma_map_page().
> >
> > Added virtio_dma_unmap() to unmap DMA addresses.
> >
> > Signed-off-by: Xuan Zhuo 
>
>
> OK but where are the users for this new API?

My subsequent patch set (virtio net supports AF_XDP) will use this API.

>
>
> > ---
> >  drivers/virtio/virtio_ring.c | 47 
> >  include/linux/virtio.h   |  9 +++
> >  2 files changed, 56 insertions(+)
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index e165bc2e1344..f4a0fb85df27 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -2472,4 +2472,51 @@ const struct vring *virtqueue_get_vring(struct 
> > virtqueue *vq)
> >  }
> >  EXPORT_SYMBOL_GPL(virtqueue_get_vring);
> >
> > +/**
> > + * virtio_dma_map - get the DMA addr of the memory for virtio device
> > + * @vdev: virtio device
> > + * @page: the page of the memory to DMA
> > + * @offset: the offset of the memory inside page
> > + * @length: memory length
> > + * @dir: DMA direction
> > + *
> > + * Returns the DMA addr. Zero means error.
>
> Should not drivers use a variant of dma_mapping_error to check?

The purpose of this is to wrap vdev->dev.parent. Otherwise, the driver will
directly access vdev->dev.parent.

>
> > + */
> > +dma_addr_t virtio_dma_map(struct virtio_device *vdev,
> > + struct page *page, size_t offset,
> > + unsigned int length,
> > + enum dma_data_direction dir)
> > +{
> > +   dma_addr_t addr;
> > +
> > +   if (!vring_use_dma_api(vdev))
> > +   return page_to_phys(page) + offset;
> > +
> > +   addr = dma_map_page(vdev->dev.parent, page, offset, length, dir);
> > +
> > +   if (dma_mapping_error(vdev->dev.parent, addr))
> > +   return 0;
> > +
> > +   return addr;
> > +}
> > +EXPORT_SYMBOL_GPL(virtio_dma_map);
>
>
> Yes it's 0, but you should really use DMA_MAPPING_ERROR.

Yes, I should use DMA_MAPPING_ERROR.

Thanks.


>
> > +
> > +/**
> > + * virtio_dma_unmap - unmap DMA addr
> > + * @vdev: virtio device
> > + * @dma: DMA address
> > + * @length: memory length
> > + * @dir: DMA direction
> > + */
> > +void virtio_dma_unmap(struct virtio_device *vdev,
> > + dma_addr_t dma, unsigned int length,
> > + enum dma_data_direction dir)
> > +{
> > +   if (!vring_use_dma_api(vdev))
> > +   return;
> > +
> > +   dma_unmap_page(vdev->dev.parent, dma, length, dir);
> > +}
> > +EXPORT_SYMBOL_GPL(virtio_dma_unmap);
> > +
> >  MODULE_LICENSE("GPL");
> > diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> > index 41edbc01ffa4..6e6c6e18ecf8 100644
> > --- a/include/linux/virtio.h
> > +++ b/include/linux/virtio.h
> > @@ -9,6 +9,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> >
> >  /**
> >   * virtqueue - a queue to register buffers for sending or receiving.
> > @@ -195,4 +196,12 @@ void unregister_virtio_driver(struct virtio_driver 
> > *drv);
> >  #define module_virtio_driver(__virtio_driver) \
> > module_driver(__virtio_driver, register_virtio_driver, \
> > unregister_virtio_driver)
> > +
> > +dma_addr_t virtio_dma_map(struct virtio_device *vdev,
> > + struct page *page, size_t offset,
> > + unsigned int length,
> > + enum dma_data_direction dir);
> > +void virtio_dma_unmap(struct virtio_device *vdev,
> > + dma_addr_t dma, unsigned int length,
> > + enum dma_data_direction dir);
> >  #endif /* _LINUX_VIRTIO_H */
> > --
> > 2.31.0
>


Re: [PATCH 4/6] virtio: split: virtqueue_add_split() support dma address

2022-01-09 Thread Xuan Zhuo
On Mon, 10 Jan 2022 14:45:42 +0800, Jason Wang  wrote:
>
> On 2022/1/7 2:33 PM, Xuan Zhuo wrote:
> > virtqueue_add_split() only supports virtual addresses, dma is completed
> > in virtqueue_add_split().
> >
> > In some scenarios (such as the AF_XDP scenario), the memory is allocated
> > and DMA is completed in advance, so it is necessary for us to support
> > passing the DMA address to virtqueue_add_split().
> >
> > This patch stipulates that if sg->dma_address is not NULL, use this
> > address as the DMA address. And record this information in extra->flags,
> > which can be skipped when executing dma unmap.
> >
> >  extra->flags |= VRING_DESC_F_PREDMA;
>
>
> I think we need another name other than the VRING_DESC_F prefix since
> it's for the flag defined in the spec. Maybe VIRTIO_DESC_F_PREDMA.
>

OK.

> Thanks
>
>
> >
> > This relies on the previous patch, in the indirect scenario, for each
> > desc allocated, an extra is allocated at the same time.
> >
> > Signed-off-by: Xuan Zhuo 
> > ---
> >   drivers/virtio/virtio_ring.c | 28 
> >   1 file changed, 24 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 7420741cb750..add8430d9678 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -66,6 +66,9 @@
> >   #define LAST_ADD_TIME_INVALID(vq)
> >   #endif
> >
> > +/* This means the buffer dma is pre-alloc. Just used by vring_desc_extra */
> > > +#define VRING_DESC_F_PREDMA (1 << 15)
> > +
> >   struct vring_desc_extra {
> > dma_addr_t addr;/* Descriptor DMA addr. */
> > u32 len;/* Descriptor length. */
> > @@ -336,11 +339,19 @@ static inline struct device *vring_dma_dev(const 
> > struct vring_virtqueue *vq)
> > return vq->vq.vdev->dev.parent;
> >   }
> >
> > +static inline bool sg_is_predma(struct scatterlist *sg)
> > +{
> > +   return !!sg->dma_address;
> > +}
> > +
> >   /* Map one sg entry. */
> >   static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq,
> >struct scatterlist *sg,
> >enum dma_data_direction direction)
> >   {
> > +   if (sg_is_predma(sg))
> > +   return sg_dma_address(sg);
> > +
> > if (!vq->use_dma_api)
> > return (dma_addr_t)sg_phys(sg);
> >
> > @@ -396,6 +407,9 @@ static unsigned int vring_unmap_one_split(const struct 
> > vring_virtqueue *vq,
> >  (flags & VRING_DESC_F_WRITE) ?
> >  DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > } else {
> > +   if (flags & VRING_DESC_F_PREDMA)
> > +   goto out;
> > +
> > dma_unmap_page(vring_dma_dev(vq),
> >extra->addr,
> >extra->len,
> > @@ -441,7 +455,8 @@ static inline unsigned int 
> > virtqueue_add_desc_split(struct virtqueue *vq,
> > unsigned int i,
> > dma_addr_t addr,
> > unsigned int len,
> > -   u16 flags)
> > +   u16 flags,
> > +   bool predma)
> >   {
> > struct vring_virtqueue *vring = to_vvq(vq);
> > struct vring_desc_extra *extra;
> > @@ -468,6 +483,9 @@ static inline unsigned int 
> > virtqueue_add_desc_split(struct virtqueue *vq,
> > extra->len = len;
> > extra->flags = flags;
> >
> > +   if (predma)
> > +   extra->flags |= VRING_DESC_F_PREDMA;
> > +
> > return next;
> >   }
> >
> > @@ -547,7 +565,8 @@ static inline int virtqueue_add_split(struct virtqueue 
> > *_vq,
> >  * table since it use stream DMA mapping.
> >  */
> > i = virtqueue_add_desc_split(_vq, in, i, addr, 
> > sg->length,
> > -VRING_DESC_F_NEXT);
> > +VRING_DESC_F_NEXT,
> > +sg_is_predma(sg));
> > }
> > }
> > for (; n < (out_sgs + in_sgs); n++) {
> > @@ -563,7 +582,8 @@ static inline int virtqueue_add_split(struct virtqueue 
> > *_vq,
> > i = virtqueue_add_desc_split(_vq, in, i, addr,
> >  sg->length,
> >  VRING_DESC_F_NEXT |
> > -VRING_DESC_F_WRITE);
> > +VRING_DESC_F_WRITE,
> > +sg_is_predma(sg));
> > }
> > }
> > /* Last one doesn't continue. */
> > @@ -582,7 +602,7 @@ static inline int virtqueue_add_split(struct virtqueue 
> > *_vq,
> >

Re: [PATCH 2/6] virtio: split: alloc indirect desc with extra

2022-01-09 Thread Xuan Zhuo
On Mon, 10 Jan 2022 14:43:39 +0800, Jason Wang  wrote:
>
> On 2022/1/7 2:33 PM, Xuan Zhuo wrote:
> > In the scenario where indirect is not used, each desc corresponds to an
> > extra, which is used to record information such as dma, flags, and
> > next.
> >
> > In the scenario of using indirect, the assigned desc does not have the
> > corresponding extra record dma information, and the dma information must
> > be obtained from the desc when unmap.
> >
> > This patch allocates the corresponding extra array when indirect desc is
> > allocated. This has these advantages:
> > 1. Record the dma information of desc, no need to read desc when unmap
> > 2. It will be more convenient and unified in processing
> > 3. Some additional information can be recorded in extra, which will be
> > used in subsequent patches.
>
>
> Two questions:
>
> 1) Is there any performance number for this change? I guess it gives
> more stress on the cache.

I will add performance test data in the next version.

> 2) Is there a requirement to mix the pre-mapped sg with unmapped sg? If
> not, a per-virtqueue flag looks sufficient

There is such a requirement. For example, in the AF_XDP case, a packet contains
two parts: one is the virtio_net_hdr, and the other is the actual data packet
from AF_XDP. The former is an unmapped sg, and the latter is a pre-mapped sg.

Thanks.

>
> Thanks
>
>

Re: [PATCH 6/6] virtio: add api virtio_dma_map() for advance dma

2022-01-09 Thread Michael S. Tsirkin
On Fri, Jan 07, 2022 at 02:33:06PM +0800, Xuan Zhuo wrote:
> Added virtio_dma_map() to map DMA addresses for virtual memory in
> advance. The purpose of adding this function is to check
> vring_use_dma_api() for virtio dma operation and get vdev->dev.parent as
> the parameter of dma_map_page().
> 
> Added virtio_dma_unmap() to unmap DMA addresses.
> 
> Signed-off-by: Xuan Zhuo 


OK but where are the users for this new API?


> ---
>  drivers/virtio/virtio_ring.c | 47 
>  include/linux/virtio.h   |  9 +++
>  2 files changed, 56 insertions(+)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index e165bc2e1344..f4a0fb85df27 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -2472,4 +2472,51 @@ const struct vring *virtqueue_get_vring(struct 
> virtqueue *vq)
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_get_vring);
>  
> +/**
> + * virtio_dma_map - get the DMA addr of the memory for virtio device
> + * @vdev: virtio device
> + * @page: the page of the memory to DMA
> + * @offset: the offset of the memory inside page
> + * @length: memory length
> + * @dir: DMA direction
> + *
> + * Returns the DMA addr. Zero means error.

Should not drivers use a variant of dma_mapping_error to check?

> + */
> +dma_addr_t virtio_dma_map(struct virtio_device *vdev,
> +   struct page *page, size_t offset,
> +   unsigned int length,
> +   enum dma_data_direction dir)
> +{
> + dma_addr_t addr;
> +
> + if (!vring_use_dma_api(vdev))
> + return page_to_phys(page) + offset;
> +
> + addr = dma_map_page(vdev->dev.parent, page, offset, length, dir);
> +
> + if (dma_mapping_error(vdev->dev.parent, addr))
> + return 0;
> +
> + return addr;
> +}
> +EXPORT_SYMBOL_GPL(virtio_dma_map);


Yes it's 0, but you should really use DMA_MAPPING_ERROR.

> +
> +/**
> + * virtio_dma_unmap - unmap DMA addr
> + * @vdev: virtio device
> + * @dma: DMA address
> + * @length: memory length
> + * @dir: DMA direction
> + */
> +void virtio_dma_unmap(struct virtio_device *vdev,
> +   dma_addr_t dma, unsigned int length,
> +   enum dma_data_direction dir)
> +{
> + if (!vring_use_dma_api(vdev))
> + return;
> +
> + dma_unmap_page(vdev->dev.parent, dma, length, dir);
> +}
> +EXPORT_SYMBOL_GPL(virtio_dma_unmap);
> +
>  MODULE_LICENSE("GPL");
> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> index 41edbc01ffa4..6e6c6e18ecf8 100644
> --- a/include/linux/virtio.h
> +++ b/include/linux/virtio.h
> @@ -9,6 +9,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  /**
>   * virtqueue - a queue to register buffers for sending or receiving.
> @@ -195,4 +196,12 @@ void unregister_virtio_driver(struct virtio_driver *drv);
>  #define module_virtio_driver(__virtio_driver) \
>   module_driver(__virtio_driver, register_virtio_driver, \
>   unregister_virtio_driver)
> +
> +dma_addr_t virtio_dma_map(struct virtio_device *vdev,
> +   struct page *page, size_t offset,
> +   unsigned int length,
> +   enum dma_data_direction dir);
> +void virtio_dma_unmap(struct virtio_device *vdev,
> +   dma_addr_t dma, unsigned int length,
> +   enum dma_data_direction dir);
>  #endif /* _LINUX_VIRTIO_H */
> -- 
> 2.31.0



Re: [PATCH v7 00/14] Allow for configuring max number of virtqueue pairs

2022-01-09 Thread Jason Wang
On Mon, Jan 10, 2022 at 3:04 PM Michael S. Tsirkin  wrote:
>
> On Wed, Jan 05, 2022 at 01:46:32PM +0200, Eli Cohen wrote:
> > Allow the user to configure the max number of virtqueue pairs for a vdpa
> > instance. The user can then control the actual number of virtqueue pairs
> > using ethtool.
>
> So I put a version of this in linux-next, but I had to squash in
> some bugfixes, and resolve some conflicts. Eli, please take a look
> and let me know whether it looks sane. If not pls post a new
> version.
> Jason, what is your take on merging this now? Si-wei here seems to want
> to defer, but OTOH it's up to v7 already, most patches are acked and
> most comments look like minor improvement suggestions to me.

I think we can merge them and send patches on top to fix issues if needed.

Thanks

>
> > Example, set number of VQPs to 2:
> > $ ethtool -L ens1 combined 2
> >
> > A user can check the max supported virtqueues for a management device by
> > running:
> >
> > $ vdpa mgmtdev show
> > auxiliary/mlx5_core.sf.1:
> >   supported_classes net
> >   max_supported_vqs 257
> >   dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ 
> > MQ \
> >CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
> >
> > and refer to this value when adding a device.
> >
> > To create a device with a max of 5 VQPs:
> > vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1 max_vqp 5
> >
> > Please note that for patches that were changed I removed "Reviewed-by"
> > and "Acked-by".
> >
> > v6 -> v7:
> > 1. Make use of cf_mutex for serializing netlink set/get with other
> > calls.
> > 2. Some fixes (See in each patch)
> > 3. Add patch for vdpa_sim to report supported features
> > 4. "Reviewed-by" and "Acked-by" removed from patch 0007 since it had
> > slightly changed.
> >
> > Eli Cohen (14):
> >   vdpa: Provide interface to read driver features
> >   vdpa/mlx5: Distribute RX virtqueues in RQT object
> >   vdpa: Sync calls set/get config/status with cf_mutex
> >   vdpa: Read device configuration only if FEATURES_OK
> >   vdpa: Allow to configure max data virtqueues
> >   vdpa/mlx5: Fix config_attr_mask assignment
> >   vdpa/mlx5: Support configuring max data virtqueue
> >   vdpa: Add support for returning device configuration information
> >   vdpa/mlx5: Restore cur_num_vqs in case of failure in change_num_qps()
> >   vdpa: Support reporting max device capabilities
> >   vdpa/mlx5: Report max device capabilities
> >   vdpa/vdpa_sim: Configure max supported virtqueues
> >   vdpa: Use BIT_ULL for bit operations
> >   vdpa/vdpa_sim_net: Report max device capabilities
> >
> >  drivers/vdpa/alibaba/eni_vdpa.c  |  16 +++-
> >  drivers/vdpa/ifcvf/ifcvf_main.c  |  16 +++-
> >  drivers/vdpa/mlx5/net/mlx5_vnet.c| 134 ---
> >  drivers/vdpa/vdpa.c  | 100 
> >  drivers/vdpa/vdpa_sim/vdpa_sim.c |  21 +++--
> >  drivers/vdpa/vdpa_sim/vdpa_sim_net.c |   2 +
> >  drivers/vdpa/vdpa_user/vduse_dev.c   |  16 +++-
> >  drivers/vdpa/virtio_pci/vp_vdpa.c|  16 +++-
> >  drivers/vhost/vdpa.c |  11 +--
> >  drivers/virtio/virtio_vdpa.c |   7 +-
> >  include/linux/vdpa.h |  36 +--
> >  include/uapi/linux/vdpa.h|   6 ++
> >  12 files changed, 271 insertions(+), 110 deletions(-)
> >
> > --
> > 2.34.1
>



Re: [PATCH v7 00/14] Allow for configuring max number of virtqueue pairs

2022-01-09 Thread Michael S. Tsirkin
On Wed, Jan 05, 2022 at 01:46:32PM +0200, Eli Cohen wrote:
> Allow the user to configure the max number of virtqueue pairs for a vdpa
> instance. The user can then control the actual number of virtqueue pairs
> using ethtool.

So I put a version of this in linux-next, but I had to squash in
some bugfixes, and resolve some conflicts. Eli, please take a look
and let me know whether it looks sane. If not pls post a new
version.
Jason, what is your take on merging this now? Si-wei here seems to want
to defer, but OTOH it's up to v7 already, most patches are acked and
most comments look like minor improvement suggestions to me.

> Example, set number of VQPs to 2:
> $ ethtool -L ens1 combined 2
> 
> A user can check the max supported virtqueues for a management device by
> running:
> 
> $ vdpa mgmtdev show
> auxiliary/mlx5_core.sf.1:
>   supported_classes net
>   max_supported_vqs 257
>   dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ MQ \
>CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
> 
> and refer to this value when adding a device.
> 
> To create a device with a max of 5 VQPs:
> vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.1 max_vqp 5
> 
> Please note that for patches that were changed I removed "Reviewed-by"
> and "Acked-by".
> 
> v6 -> v7:
> 1. Make use of cf_mutex for serializing netlink set/get with other
> calls.
> 2. Some fixes (See in each patch)
> 3. Add patch for vdpa_sim to report supported features
> 4. "Reviewed-by" and "Acked-by" removed from patch 0007 since it had
> slightly changed.
> 
> Eli Cohen (14):
>   vdpa: Provide interface to read driver features
>   vdpa/mlx5: Distribute RX virtqueues in RQT object
>   vdpa: Sync calls set/get config/status with cf_mutex
>   vdpa: Read device configuration only if FEATURES_OK
>   vdpa: Allow to configure max data virtqueues
>   vdpa/mlx5: Fix config_attr_mask assignment
>   vdpa/mlx5: Support configuring max data virtqueue
>   vdpa: Add support for returning device configuration information
>   vdpa/mlx5: Restore cur_num_vqs in case of failure in change_num_qps()
>   vdpa: Support reporting max device capabilities
>   vdpa/mlx5: Report max device capabilities
>   vdpa/vdpa_sim: Configure max supported virtqueues
>   vdpa: Use BIT_ULL for bit operations
>   vdpa/vdpa_sim_net: Report max device capabilities
> 
>  drivers/vdpa/alibaba/eni_vdpa.c  |  16 +++-
>  drivers/vdpa/ifcvf/ifcvf_main.c  |  16 +++-
>  drivers/vdpa/mlx5/net/mlx5_vnet.c| 134 ---
>  drivers/vdpa/vdpa.c  | 100 
>  drivers/vdpa/vdpa_sim/vdpa_sim.c |  21 +++--
>  drivers/vdpa/vdpa_sim/vdpa_sim_net.c |   2 +
>  drivers/vdpa/vdpa_user/vduse_dev.c   |  16 +++-
>  drivers/vdpa/virtio_pci/vp_vdpa.c|  16 +++-
>  drivers/vhost/vdpa.c |  11 +--
>  drivers/virtio/virtio_vdpa.c |   7 +-
>  include/linux/vdpa.h |  36 +--
>  include/uapi/linux/vdpa.h|   6 ++
>  12 files changed, 271 insertions(+), 110 deletions(-)
> 
> -- 
> 2.34.1



Re: [PATCH v7 07/14] vdpa/mlx5: Support configuring max data virtqueue

2022-01-09 Thread Michael S. Tsirkin
On Fri, Jan 07, 2022 at 05:43:15PM -0800, Si-Wei Liu wrote:
> It's unfortunate. Don't know why this series got pulled into linux-next
> prematurely. The code review is still ongoing and there were outstanding
> comments that hadn't been addressed yet.

Most patches got acked, and the merge window is closing.
The only couple of issues seem to be with this specific patch, and I
think I fixed them up.
Still - I can hold them if necessary. What do others think?

-- 
MST



Re: [PATCH 4/6] virtio: split: virtqueue_add_split() support dma address

2022-01-09 Thread Jason Wang


On 2022/1/7 2:33 PM, Xuan Zhuo wrote:

virtqueue_add_split() only supports virtual addresses, dma is completed
in virtqueue_add_split().

In some scenarios (such as the AF_XDP scenario), the memory is allocated
and DMA is completed in advance, so it is necessary for us to support
passing the DMA address to virtqueue_add_split().

This patch stipulates that if sg->dma_address is not NULL, use this
address as the DMA address. And record this information in extra->flags,
which can be skipped when executing dma unmap.

 extra->flags |= VRING_DESC_F_PREDMA;



I think we need another name other than the VRING_DESC_F prefix since 
it's for the flag defined in the spec. Maybe VIRTIO_DESC_F_PREDMA.


Thanks




This relies on the previous patch, in the indirect scenario, for each
desc allocated, an extra is allocated at the same time.

Signed-off-by: Xuan Zhuo 
---
  drivers/virtio/virtio_ring.c | 28 
  1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 7420741cb750..add8430d9678 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -66,6 +66,9 @@
  #define LAST_ADD_TIME_INVALID(vq)
  #endif
  
+/* This means the buffer dma is pre-alloc. Just used by vring_desc_extra */
+#define VRING_DESC_F_PREDMA (1 << 15)
+
  struct vring_desc_extra {
dma_addr_t addr;/* Descriptor DMA addr. */
u32 len;/* Descriptor length. */
@@ -336,11 +339,19 @@ static inline struct device *vring_dma_dev(const struct 
vring_virtqueue *vq)
return vq->vq.vdev->dev.parent;
  }
  
+static inline bool sg_is_predma(struct scatterlist *sg)

+{
+   return !!sg->dma_address;
+}
+
  /* Map one sg entry. */
  static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq,
   struct scatterlist *sg,
   enum dma_data_direction direction)
  {
+   if (sg_is_predma(sg))
+   return sg_dma_address(sg);
+
if (!vq->use_dma_api)
return (dma_addr_t)sg_phys(sg);
  
@@ -396,6 +407,9 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,

 (flags & VRING_DESC_F_WRITE) ?
 DMA_FROM_DEVICE : DMA_TO_DEVICE);
} else {
+   if (flags & VRING_DESC_F_PREDMA)
+   goto out;
+
dma_unmap_page(vring_dma_dev(vq),
   extra->addr,
   extra->len,
@@ -441,7 +455,8 @@ static inline unsigned int virtqueue_add_desc_split(struct 
virtqueue *vq,
unsigned int i,
dma_addr_t addr,
unsigned int len,
-   u16 flags)
+   u16 flags,
+   bool predma)
  {
struct vring_virtqueue *vring = to_vvq(vq);
struct vring_desc_extra *extra;
@@ -468,6 +483,9 @@ static inline unsigned int virtqueue_add_desc_split(struct 
virtqueue *vq,
extra->len = len;
extra->flags = flags;
  
+	if (predma)

+   extra->flags |= VRING_DESC_F_PREDMA;
+
return next;
  }
  
@@ -547,7 +565,8 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,

 * table since it use stream DMA mapping.
 */
i = virtqueue_add_desc_split(_vq, in, i, addr, 
sg->length,
-VRING_DESC_F_NEXT);
+VRING_DESC_F_NEXT,
+sg_is_predma(sg));
}
}
for (; n < (out_sgs + in_sgs); n++) {
@@ -563,7 +582,8 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
i = virtqueue_add_desc_split(_vq, in, i, addr,
 sg->length,
 VRING_DESC_F_NEXT |
-VRING_DESC_F_WRITE);
+VRING_DESC_F_WRITE,
+sg_is_predma(sg));
}
}
/* Last one doesn't continue. */
@@ -582,7 +602,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
  
  		virtqueue_add_desc_split(_vq, NULL, head, addr,

 total_sg * sizeof(struct vring_desc),
-VRING_DESC_F_INDIRECT);
+VRING_DESC_F_INDIRECT, false);
}
  
	/* We're using some buffers from the free list. */

Re: [PATCH 2/6] virtio: split: alloc indirect desc with extra

2022-01-09 Thread Jason Wang


On 2022/1/7 2:33 PM, Xuan Zhuo wrote:

In the scenario where indirect is not used, each desc corresponds to an
extra, which is used to record information such as dma, flags, and
next.

In the scenario of using indirect, the assigned desc does not have the
corresponding extra record dma information, and the dma information must
be obtained from the desc when unmap.

This patch allocates the corresponding extra array when indirect desc is
allocated. This has these advantages:
1. Record the dma information of desc, no need to read desc when unmap
2. It will be more convenient and unified in processing
3. Some additional information can be recorded in extra, which will be
used in subsequent patches.



Two questions:

1) Is there any performance number for this change? I guess it gives 
more stress on the cache.
2) Is there a requirement to mix the pre-mapped sg with unmapped sg? If
not, a per-virtqueue flag looks sufficient


Thanks



Re: [PATCH 1/6] virtio: rename vring_unmap_state_packed() to vring_unmap_extra_packed()

2022-01-09 Thread Jason Wang


On 2022/1/7 2:33 PM, Xuan Zhuo wrote:

The actual parameter handled by vring_unmap_state_packed() is that
vring_desc_extra, so this function should use "extra" instead of "state".

Signed-off-by: Xuan Zhuo 



Acked-by: Jason Wang 



---
  drivers/virtio/virtio_ring.c | 17 -
  1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 028b05d44546..81531cbb08a7 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -984,24 +984,24 @@ static struct virtqueue *vring_create_virtqueue_split(
   * Packed ring specific functions - *_packed().
   */
  
-static void vring_unmap_state_packed(const struct vring_virtqueue *vq,

-struct vring_desc_extra *state)
+static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
+struct vring_desc_extra *extra)
  {
u16 flags;
  
  	if (!vq->use_dma_api)

return;
  
-	flags = state->flags;

+   flags = extra->flags;
  
  	if (flags & VRING_DESC_F_INDIRECT) {

dma_unmap_single(vring_dma_dev(vq),
-state->addr, state->len,
+extra->addr, extra->len,
 (flags & VRING_DESC_F_WRITE) ?
 DMA_FROM_DEVICE : DMA_TO_DEVICE);
} else {
dma_unmap_page(vring_dma_dev(vq),
-  state->addr, state->len,
+  extra->addr, extra->len,
   (flags & VRING_DESC_F_WRITE) ?
   DMA_FROM_DEVICE : DMA_TO_DEVICE);
}
@@ -1301,8 +1301,7 @@ static inline int virtqueue_add_packed(struct virtqueue 
*_vq,
for (n = 0; n < total_sg; n++) {
if (i == err_idx)
break;
-   vring_unmap_state_packed(vq,
-&vq->packed.desc_extra[curr]);
+   vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]);
curr = vq->packed.desc_extra[curr].next;
i++;
if (i >= vq->packed.vring.num)
@@ -1381,8 +1380,8 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
if (unlikely(vq->use_dma_api)) {
curr = id;
for (i = 0; i < state->num; i++) {
-   vring_unmap_state_packed(vq,
-   &vq->packed.desc_extra[curr]);
+   vring_unmap_extra_packed(vq,
+&vq->packed.desc_extra[curr]);
curr = vq->packed.desc_extra[curr].next;
}
}



Re: [PATCH] virtio: Simplify DMA setting

2022-01-09 Thread Jason Wang


On 2022/1/8 3:08 PM, Christophe JAILLET wrote:

As stated in [1], dma_set_mask() with a 64-bit mask will never fail if
dev->dma_mask is non-NULL.
So, if it fails, the 32 bits case will also fail for the same reason.



I'd expect the commit message to be more verbose here. E.g. I see
dma_supported(), which has a bunch of checks that need to be called if
dma_mask is non-NULL.


Thanks




Simplify code and remove some dead code accordingly.


While at it, include directly  instead of relying on
indirect inclusion.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET 
---
  drivers/virtio/virtio_mmio.c   | 4 +---
  drivers/virtio/virtio_pci_legacy_dev.c | 7 +++
  drivers/virtio/virtio_pci_modern_dev.c | 6 ++
  3 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 56128b9c46eb..aa1efa854de1 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -617,9 +617,7 @@ static int virtio_mmio_probe(struct platform_device *pdev)
rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
}
if (rc)
-   rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
-   if (rc)
-		dev_warn(&pdev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+		dev_warn(&pdev->dev, "Failed to enable DMA.  Trying to continue, but this might not work.\n");
  
  	platform_set_drvdata(pdev, vm_dev);
  
diff --git a/drivers/virtio/virtio_pci_legacy_dev.c b/drivers/virtio/virtio_pci_legacy_dev.c

index 677d1f68bc9b..52b1c4dd43fe 100644
--- a/drivers/virtio/virtio_pci_legacy_dev.c
+++ b/drivers/virtio/virtio_pci_legacy_dev.c
@@ -2,6 +2,7 @@
  
  #include "linux/virtio_pci.h"

  #include 
+#include 
  #include 
  #include 
  
@@ -26,9 +27,7 @@ int vp_legacy_probe(struct virtio_pci_legacy_device *ldev)

return -ENODEV;
  
  	rc = dma_set_mask(&pci_dev->dev, DMA_BIT_MASK(64));

-   if (rc) {
-   rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
-   } else {
+   if (!rc) {
/*
 * The virtio ring base address is expressed as a 32-bit PFN,
 * with a page size of 1 << VIRTIO_PCI_QUEUE_ADDR_SHIFT.
@@ -38,7 +37,7 @@ int vp_legacy_probe(struct virtio_pci_legacy_device *ldev)
}
  
  	if (rc)

-		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+		dev_warn(&pci_dev->dev, "Failed to enable DMA.  Trying to continue, but this might not work.\n");
  
  	rc = pci_request_region(pci_dev, 0, "virtio-pci-legacy");

if (rc)
diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c
index e8b3ff2b9fbc..830dc269d68f 100644
--- a/drivers/virtio/virtio_pci_modern_dev.c
+++ b/drivers/virtio/virtio_pci_modern_dev.c
@@ -1,6 +1,7 @@
  // SPDX-License-Identifier: GPL-2.0-or-later
  
  #include 

+#include 
  #include 
  #include 
  
@@ -256,10 +257,7 @@ int vp_modern_probe(struct virtio_pci_modern_device *mdev)
  
  	err = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));

if (err)
-   err = dma_set_mask_and_coherent(&pci_dev->dev,
-   DMA_BIT_MASK(32));
-   if (err)
-		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+		dev_warn(&pci_dev->dev, "Failed to enable DMA.  Trying to continue, but this might not work.\n");
  
  	/* Device capability is only mandatory for devices that have

 * device-specific configuration.



Re: [PATCH v7 03/14] vdpa: Sync calls set/get config/status with cf_mutex

2022-01-09 Thread Jason Wang


在 2022/1/8 上午9:23, Si-Wei Liu 写道:



On 1/6/2022 9:08 PM, Jason Wang wrote:


在 2022/1/7 上午8:33, Si-Wei Liu 写道:



On 1/5/2022 3:46 AM, Eli Cohen wrote:

Add wrappers to get/set status and protect these operations with
cf_mutex to serialize these operations with respect to get/set config
operations.

Signed-off-by: Eli Cohen 
---
  drivers/vdpa/vdpa.c  | 19 +++
  drivers/vhost/vdpa.c |  7 +++
  drivers/virtio/virtio_vdpa.c |  3 +--
  include/linux/vdpa.h |  3 +++
  4 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 42d71d60d5dc..5134c83c4a22 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -21,6 +21,25 @@ static LIST_HEAD(mdev_head);
  static DEFINE_MUTEX(vdpa_dev_mutex);
  static DEFINE_IDA(vdpa_index_ida);
  +u8 vdpa_get_status(struct vdpa_device *vdev)
+{
+    u8 status;
+
+    mutex_lock(&vdev->cf_mutex);
+    status = vdev->config->get_status(vdev);
+    mutex_unlock(&vdev->cf_mutex);
+    return status;
+}
+EXPORT_SYMBOL(vdpa_get_status);
+
+void vdpa_set_status(struct vdpa_device *vdev, u8 status)
+{
+    mutex_lock(&vdev->cf_mutex);
+    vdev->config->set_status(vdev, status);
+    mutex_unlock(&vdev->cf_mutex);
+}
+EXPORT_SYMBOL(vdpa_set_status);
+
  static struct genl_family vdpa_nl_family;
    static int vdpa_dev_probe(struct device *d)
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index ebaa373e9b82..d9d499465e2e 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -142,10 +142,9 @@ static long vhost_vdpa_get_device_id(struct vhost_vdpa *v, u8 __user *argp)
  static long vhost_vdpa_get_status(struct vhost_vdpa *v, u8 __user *statusp)

  {
  struct vdpa_device *vdpa = v->vdpa;
-    const struct vdpa_config_ops *ops = vdpa->config;
  u8 status;
  -    status = ops->get_status(vdpa);
+    status = vdpa_get_status(vdpa);
Not sure why we need to take cf_mutex here. Appears to me only 
setters (set_status and reset) need to take the lock in this function.



You may be right but it doesn't harm and it is guaranteed to be 
correct if we protect it with mutex here.

Is it more for future proof?



I think so.


Ok, but IMHO it might be better to add a comment here, otherwise 
it's quite confusing why the lock needs to be held. vhost_vdpa had 
done its own serialization in vhost_vdpa_unlocked_ioctl() through 
vhost_dev's mutex.



Right, but they are done at different levels: one is for vhost_dev, 
the other for vdpa_dev.


Thanks




-Siwei



Thanks





    if (copy_to_user(statusp, &status, sizeof(status)))
  return -EFAULT;
@@ -164,7 +163,7 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 __user *statusp)

  if (copy_from_user(&status, statusp, sizeof(status)))
  return -EFAULT;
  -    status_old = ops->get_status(vdpa);
+    status_old = vdpa_get_status(vdpa);

Ditto.


    /*
   * Userspace shouldn't remove status bits unless reset the
@@ -182,7 +181,7 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 __user *statusp)

  if (ret)
  return ret;
  } else
-    ops->set_status(vdpa, status);
+    vdpa_set_status(vdpa, status);
The reset() call in the if branch above needs to take the cf_mutex, 
the same way as that for set_status(). The reset() is effectively 
set_status(vdpa, 0).


Thanks,
-Siwei

    if ((status & VIRTIO_CONFIG_S_DRIVER_OK) && !(status_old & 
VIRTIO_CONFIG_S_DRIVER_OK))

  for (i = 0; i < nvqs; i++)
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c

index a84b04ba3195..76504559bc25 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -91,9 +91,8 @@ static u8 virtio_vdpa_get_status(struct virtio_device *vdev)
  static void virtio_vdpa_set_status(struct virtio_device *vdev, u8 status)

  {
  struct vdpa_device *vdpa = vd_get_vdpa(vdev);
-    const struct vdpa_config_ops *ops = vdpa->config;
  -    return ops->set_status(vdpa, status);
+    return vdpa_set_status(vdpa, status);
  }
    static void virtio_vdpa_reset(struct virtio_device *vdev)
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 9cc4291a79b3..ae047fae2603 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -408,6 +408,9 @@ void vdpa_get_config(struct vdpa_device *vdev, unsigned int offset,

   void *buf, unsigned int len);
  void vdpa_set_config(struct vdpa_device *dev, unsigned int offset,
   const void *buf, unsigned int length);
+u8 vdpa_get_status(struct vdpa_device *vdev);
+void vdpa_set_status(struct vdpa_device *vdev, u8 status);
+
  /**
   * struct vdpa_mgmtdev_ops - vdpa device ops
   * @dev_add: Add a vdpa device using alloc and register









Re: [PATCH] vdpa/mlx5: fix endian-ness for max vqs

2022-01-09 Thread Jason Wang
On Sun, Jan 9, 2022 at 2:00 AM Michael S. Tsirkin  wrote:
>
> sparse warnings: (new ones prefixed by >>)
> >> drivers/vdpa/mlx5/net/mlx5_vnet.c:1247:23: sparse: sparse: cast to 
> >> restricted __le16
> >> drivers/vdpa/mlx5/net/mlx5_vnet.c:1247:23: sparse: sparse: cast from 
> >> restricted __virtio16
>
> > 1247  num = le16_to_cpu(ndev->config.max_virtqueue_pairs);
>
> Address this using the appropriate wrapper.
>
> Fixes: 7620d51af29a ("vdpa/mlx5: Support configuring max data virtqueue")
> Cc: "Eli Cohen" 
> Reported-by: kernel test robot 
> Signed-off-by: Michael S. Tsirkin 

Acked-by: Jason Wang 

> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c 
> b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 84b1919015ce..d1ff65065fb1 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1242,7 +1242,8 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
> if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_MQ)))
> num = 1;
> else
> -   num = le16_to_cpu(ndev->config.max_virtqueue_pairs);
> +   num = mlx5vdpa16_to_cpu(&ndev->mvdev,
> +   ndev->config.max_virtqueue_pairs);
>
> max_rqt = min_t(int, roundup_pow_of_two(num),
> 1 << MLX5_CAP_GEN(ndev->mvdev.mdev, 
> log_max_rqt_size));
> --
> MST
>



[PATCH 4/4] drivers/net/virtio_net: Added RSS hash report control.

2022-01-09 Thread Andrew Melnychenko
Now it's possible to control the supported hash flows.
Also added hash flow set/get callbacks.
Note that disabling RXH_IP_SRC/DST for TCP also disables them for UDP.
TCP and UDP support only:
ethtool -U eth0 rx-flow-hash tcp4 sd
RXH_IP_SRC + RXH_IP_DST
ethtool -U eth0 rx-flow-hash tcp4 sdfn
RXH_IP_SRC + RXH_IP_DST + RXH_L4_B_0_1 + RXH_L4_B_2_3

Signed-off-by: Andrew Melnychenko 
---
 drivers/net/virtio_net.c | 159 +++
 1 file changed, 159 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 6e7461b01f87..1b8dd384483c 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -235,6 +235,7 @@ struct virtnet_info {
u8 rss_key_size;
u16 rss_indir_table_size;
u32 rss_hash_types_supported;
+   u32 rss_hash_types_saved;
 
/* Has control virtqueue */
bool has_cvq;
@@ -2275,6 +2276,7 @@ static void virtnet_init_default_rss(struct virtnet_info *vi)
	int i = 0;
 
	vi->ctrl->rss.table_info.hash_types = vi->rss_hash_types_supported;
+	vi->rss_hash_types_saved = vi->rss_hash_types_supported;
	vi->ctrl->rss.table_info.indirection_table_mask = vi->rss_indir_table_size - 1;
	vi->ctrl->rss.table_info.unclassified_queue = 0;
 
@@ -2289,6 +2291,131 @@ static void virtnet_init_default_rss(struct virtnet_info *vi)
	netdev_rss_key_fill(vi->ctrl->rss.key, vi->rss_key_size);
 }
 
+static void virtnet_get_hashflow(const struct virtnet_info *vi, struct ethtool_rxnfc *info)
+{
+   info->data = 0;
+   switch (info->flow_type) {
+   case TCP_V4_FLOW:
+   if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_TCPv4) {
+   info->data = RXH_IP_SRC | RXH_IP_DST |
+RXH_L4_B_0_1 | RXH_L4_B_2_3;
+		} else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv4) {
+   info->data = RXH_IP_SRC | RXH_IP_DST;
+   }
+   break;
+   case TCP_V6_FLOW:
+   if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_TCPv6) {
+   info->data = RXH_IP_SRC | RXH_IP_DST |
+RXH_L4_B_0_1 | RXH_L4_B_2_3;
+		} else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv6) {
+   info->data = RXH_IP_SRC | RXH_IP_DST;
+   }
+   break;
+   case UDP_V4_FLOW:
+   if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_UDPv4) {
+   info->data = RXH_IP_SRC | RXH_IP_DST |
+RXH_L4_B_0_1 | RXH_L4_B_2_3;
+		} else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv4) {
+   info->data = RXH_IP_SRC | RXH_IP_DST;
+   }
+   break;
+   case UDP_V6_FLOW:
+   if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_UDPv6) {
+   info->data = RXH_IP_SRC | RXH_IP_DST |
+RXH_L4_B_0_1 | RXH_L4_B_2_3;
+		} else if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv6) {
+   info->data = RXH_IP_SRC | RXH_IP_DST;
+   }
+   break;
+   case IPV4_FLOW:
+   if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv4)
+   info->data = RXH_IP_SRC | RXH_IP_DST;
+
+   break;
+	case IPV6_FLOW:
+		if (vi->rss_hash_types_saved & VIRTIO_NET_RSS_HASH_TYPE_IPv6)
+			info->data = RXH_IP_SRC | RXH_IP_DST;
+
+   break;
+   default:
+   info->data = 0;
+   break;
+   }
+}
+
+static bool virtnet_set_hashflow(struct virtnet_info *vi, struct ethtool_rxnfc *info)
+{
+   u64 is_iphash = info->data & (RXH_IP_SRC | RXH_IP_DST);
+   u64 is_porthash = info->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3);
+   u32 new_hashtypes = vi->rss_hash_types_saved;
+
+	if ((is_iphash && (is_iphash != (RXH_IP_SRC | RXH_IP_DST))) ||
+	    (is_porthash && (is_porthash != (RXH_L4_B_0_1 | RXH_L4_B_2_3)))) {
+		return false;
+	}
+
+   if (!is_iphash && is_porthash)
+   return false;
+
+   switch (info->flow_type) {
+   case TCP_V4_FLOW:
+   case UDP_V4_FLOW:
+   case IPV4_FLOW:
+   new_hashtypes &= ~VIRTIO_NET_RSS_HASH_TYPE_IPv4;
+   if (is_iphash)
+   new_hashtypes |= VIRTIO_NET_RSS_HASH_TYPE_IPv4;
+
+   break;
+   case TCP_V6_FLOW:
+   case UDP_V6_FLOW:
+   case IPV6_FLOW:
+   new_hashtypes &= ~VIRTIO_NET_RSS_HASH_TYPE_IPv6;
+   if (is_iphash)
+   new_hashtypes |= VIRTIO_NET_RSS_HASH_TYPE_IPv6;
+
+   break;
+   default:
+   break;
+   }
+
+  

[PATCH 3/4] drivers/net/virtio_net: Added RSS hash report.

2022-01-09 Thread Andrew Melnychenko
Added features for the RSS hash report.
If a hash is provided, it is set on the skb.
Added checks for whether rss and/or hash report are enabled together.

Signed-off-by: Andrew Melnychenko 
---
 drivers/net/virtio_net.c | 56 ++--
 1 file changed, 48 insertions(+), 8 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 21794731fc75..6e7461b01f87 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -231,6 +231,7 @@ struct virtnet_info {
 
/* Host supports rss and/or hash report */
bool has_rss;
+   bool has_rss_hash_report;
u8 rss_key_size;
u16 rss_indir_table_size;
u32 rss_hash_types_supported;
@@ -424,7 +425,9 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
hdr_p = p;
 
hdr_len = vi->hdr_len;
-   if (vi->mergeable_rx_bufs)
+   if (vi->has_rss_hash_report)
+   hdr_padded_len = sizeof(struct virtio_net_hdr_v1_hash);
+   else if (vi->mergeable_rx_bufs)
hdr_padded_len = sizeof(*hdr);
else
hdr_padded_len = sizeof(struct padded_vnet_hdr);
@@ -1160,6 +1163,8 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
struct net_device *dev = vi->dev;
struct sk_buff *skb;
struct virtio_net_hdr_mrg_rxbuf *hdr;
+   struct virtio_net_hdr_v1_hash *hdr_hash;
+   enum pkt_hash_types rss_hash_type;
 
if (unlikely(len < vi->hdr_len + ETH_HLEN)) {
pr_debug("%s: short packet %i\n", dev->name, len);
@@ -1186,6 +1191,29 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
return;
 
hdr = skb_vnet_hdr(skb);
+   if (dev->features & NETIF_F_RXHASH) {
+   hdr_hash = (struct virtio_net_hdr_v1_hash *)(hdr);
+
+   switch (hdr_hash->hash_report) {
+   case VIRTIO_NET_HASH_REPORT_TCPv4:
+   case VIRTIO_NET_HASH_REPORT_UDPv4:
+   case VIRTIO_NET_HASH_REPORT_TCPv6:
+   case VIRTIO_NET_HASH_REPORT_UDPv6:
+   case VIRTIO_NET_HASH_REPORT_TCPv6_EX:
+   case VIRTIO_NET_HASH_REPORT_UDPv6_EX:
+   rss_hash_type = PKT_HASH_TYPE_L4;
+   break;
+   case VIRTIO_NET_HASH_REPORT_IPv4:
+   case VIRTIO_NET_HASH_REPORT_IPv6:
+   case VIRTIO_NET_HASH_REPORT_IPv6_EX:
+   rss_hash_type = PKT_HASH_TYPE_L3;
+   break;
+   case VIRTIO_NET_HASH_REPORT_NONE:
+   default:
+   rss_hash_type = PKT_HASH_TYPE_NONE;
+   }
+   skb_set_hash(skb, hdr_hash->hash_value, rss_hash_type);
+   }
 
if (hdr->hdr.flags & VIRTIO_NET_HDR_F_DATA_VALID)
skb->ip_summed = CHECKSUM_UNNECESSARY;
@@ -2233,7 +2261,8 @@ static bool virtnet_commit_rss_command(struct virtnet_info *vi)
sg_set_buf(&sgs[3], vi->ctrl->rss.key, sg_buf_size);
 
if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ,
- VIRTIO_NET_CTRL_MQ_RSS_CONFIG, sgs)) {
+ vi->has_rss ? VIRTIO_NET_CTRL_MQ_RSS_CONFIG
+ : VIRTIO_NET_CTRL_MQ_HASH_CONFIG, sgs)) {
		dev_warn(&dev->dev, "VIRTIONET issue with committing RSS sgs\n");
return false;
}
@@ -3220,7 +3249,9 @@ static bool virtnet_validate_features(struct virtio_device *vdev)
 VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_MQ, "VIRTIO_NET_F_CTRL_VQ") ||
 VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_MAC_ADDR,
 "VIRTIO_NET_F_CTRL_VQ") ||
-VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_RSS, "VIRTIO_NET_F_RSS"))) {
+VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_RSS, "VIRTIO_NET_F_RSS") ||
+VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_HASH_REPORT,
+"VIRTIO_NET_F_HASH_REPORT"))) {
return false;
}
 
@@ -3355,6 +3386,12 @@ static int virtnet_probe(struct virtio_device *vdev)
if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
vi->mergeable_rx_bufs = true;
 
+   if (virtio_has_feature(vdev, VIRTIO_NET_F_HASH_REPORT)) {
+   vi->has_rss_hash_report = true;
+   vi->rss_indir_table_size = 1;
+   vi->rss_key_size = VIRTIO_NET_RSS_MAX_KEY_SIZE;
+   }
+
if (virtio_has_feature(vdev, VIRTIO_NET_F_RSS)) {
vi->has_rss = true;
vi->rss_indir_table_size =
@@ -3364,7 +3401,7 @@ static int virtnet_probe(struct virtio_device *vdev)
			virtio_cread8(vdev, offsetof(struct virtio_net_config, rss_max_key_size));
}
 
-   if (vi->has_rss) {
+   if (vi->has_rss || vi->has_rss_hash_report) {
vi->rss_hash_types_supported =
virtio_cread32(vdev, offsetof(struct virtio_net_con

[PATCH 2/4] drivers/net/virtio_net: Added basic RSS support.

2022-01-09 Thread Andrew Melnychenko
Added features for RSS.
Added initialization, the RXHASH feature and ethtool ops.
By default, RSS/RXHASH is disabled.
Virtio RSS "IPv6 extensions" hashes are disabled.
Added ethtool ops to set the key and indirection table.

Signed-off-by: Andrew Melnychenko 
---
 drivers/net/virtio_net.c | 194 +--
 1 file changed, 184 insertions(+), 10 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 66439ca488f4..21794731fc75 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -169,6 +169,28 @@ struct receive_queue {
struct xdp_rxq_info xdp_rxq;
 };
 
+/* This structure can contain an rss message with maximum settings for the
+ * indirection table and key size.
+ * Note that the default structure describing the RSS configuration,
+ * virtio_net_rss_config, contains the same info but can't hold the table values.
+ * In any case, the structure is passed to the virtio hw through sg_buf split
+ * into parts, because table sizes may differ according to the device
+ * configuration.
+ */
+#define VIRTIO_NET_RSS_MAX_KEY_SIZE 40
+#define VIRTIO_NET_RSS_MAX_TABLE_LEN 128
+struct virtio_net_ctrl_rss {
+   struct {
+   __le32 hash_types;
+   __le16 indirection_table_mask;
+   __le16 unclassified_queue;
+   } __packed table_info;
+   u16 indirection_table[VIRTIO_NET_RSS_MAX_TABLE_LEN];
+   struct {
+   u16 max_tx_vq; /* queues */
+   u8 hash_key_length;
+   } __packed key_info;
+   u8 key[VIRTIO_NET_RSS_MAX_KEY_SIZE];
+};
+
 /* Control VQ buffers: protected by the rtnl lock */
 struct control_buf {
struct virtio_net_ctrl_hdr hdr;
@@ -178,6 +200,7 @@ struct control_buf {
u8 allmulti;
__virtio16 vid;
__virtio64 offloads;
+   struct virtio_net_ctrl_rss rss;
 };
 
 struct virtnet_info {
@@ -206,6 +229,12 @@ struct virtnet_info {
/* Host will merge rx buffers for big packets (shake it! shake it!) */
bool mergeable_rx_bufs;
 
+   /* Host supports rss and/or hash report */
+   bool has_rss;
+   u8 rss_key_size;
+   u16 rss_indir_table_size;
+   u32 rss_hash_types_supported;
+
/* Has control virtqueue */
bool has_cvq;
 
@@ -395,9 +424,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
hdr_p = p;
 
hdr_len = vi->hdr_len;
-   if (vi->has_rss_hash_report)
-   hdr_padded_len = sizeof(struct virtio_net_hdr_v1_hash);
-   else if (vi->mergeable_rx_bufs)
+   if (vi->mergeable_rx_bufs)
hdr_padded_len = sizeof(*hdr);
else
hdr_padded_len = sizeof(struct padded_vnet_hdr);
@@ -2184,6 +2211,55 @@ static void virtnet_get_ringparam(struct net_device *dev,
ring->tx_pending = ring->tx_max_pending;
 }
 
+static bool virtnet_commit_rss_command(struct virtnet_info *vi)
+{
+   struct net_device *dev = vi->dev;
+   struct scatterlist sgs[4];
+   unsigned int sg_buf_size;
+
+   /* prepare sgs */
+   sg_init_table(sgs, 4);
+
+   sg_buf_size = sizeof(vi->ctrl->rss.table_info);
+   sg_set_buf(&sgs[0], &vi->ctrl->rss.table_info, sg_buf_size);
+
+   sg_buf_size = sizeof(uint16_t) * vi->rss_indir_table_size;
+   sg_set_buf(&sgs[1], vi->ctrl->rss.indirection_table, sg_buf_size);
+
+   sg_buf_size = sizeof(vi->ctrl->rss.key_info);
+   sg_set_buf(&sgs[2], &vi->ctrl->rss.key_info, sg_buf_size);
+
+   sg_buf_size = vi->rss_key_size;
+   sg_set_buf(&sgs[3], vi->ctrl->rss.key, sg_buf_size);
+
+   if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ,
+ VIRTIO_NET_CTRL_MQ_RSS_CONFIG, sgs)) {
+		dev_warn(&dev->dev, "VIRTIONET issue with committing RSS sgs\n");
+   return false;
+   }
+   return true;
+}
+
+static void virtnet_init_default_rss(struct virtnet_info *vi)
+{
+   u32 indir_val = 0;
+   int i = 0;
+
+   vi->ctrl->rss.table_info.hash_types = vi->rss_hash_types_supported;
+	vi->ctrl->rss.table_info.indirection_table_mask = vi->rss_indir_table_size - 1;
+   vi->ctrl->rss.table_info.unclassified_queue = 0;
+
+   for (; i < vi->rss_indir_table_size; ++i) {
+   indir_val = ethtool_rxfh_indir_default(i, vi->max_queue_pairs);
+   vi->ctrl->rss.indirection_table[i] = indir_val;
+   }
+
+   vi->ctrl->rss.key_info.max_tx_vq = vi->curr_queue_pairs;
+   vi->ctrl->rss.key_info.hash_key_length = vi->rss_key_size;
+
+   netdev_rss_key_fill(vi->ctrl->rss.key, vi->rss_key_size);
+}
+
 
 static void virtnet_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
@@ -2412,6 +2488,71 @@ static void virtnet_update_settings(struct virtnet_info *vi)
vi->duplex = duplex;
 }
 
+static u32 virtnet_get_rxfh_key_size(struct net_device *dev)
+{
+   return ((struct virtnet_info *)netdev_priv(dev))->rss_key_size;
+}
+
+sta

[PATCH 1/4] drivers/net/virtio_net: Fixed padded vheader to use v1 with hash.

2022-01-09 Thread Andrew Melnychenko
The v1 header provides additional info about RSS.
Added changes to compute the proper header length.
In the next patches, the header may contain RSS hash info
used to populate the hash.

Signed-off-by: Andrew Melnychenko 
---
 drivers/net/virtio_net.c | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index b107835242ad..66439ca488f4 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -242,13 +242,13 @@ struct virtnet_info {
 };
 
 struct padded_vnet_hdr {
-   struct virtio_net_hdr_mrg_rxbuf hdr;
+   struct virtio_net_hdr_v1_hash hdr;
/*
 * hdr is in a separate sg buffer, and data sg buffer shares same page
 * with this header sg. This padding makes next sg 16 byte aligned
 * after the header.
 */
-   char padding[4];
+   char padding[12];
 };
 
 static bool is_xdp_frame(void *ptr)
@@ -395,7 +395,9 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
hdr_p = p;
 
hdr_len = vi->hdr_len;
-   if (vi->mergeable_rx_bufs)
+   if (vi->has_rss_hash_report)
+   hdr_padded_len = sizeof(struct virtio_net_hdr_v1_hash);
+   else if (vi->mergeable_rx_bufs)
hdr_padded_len = sizeof(*hdr);
else
hdr_padded_len = sizeof(struct padded_vnet_hdr);
@@ -1266,7 +1268,8 @@ static unsigned int get_mergeable_buf_len(struct receive_queue *rq,
  struct ewma_pkt_len *avg_pkt_len,
  unsigned int room)
 {
-   const size_t hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+   struct virtnet_info *vi = rq->vq->vdev->priv;
+   const size_t hdr_len = vi->hdr_len;
unsigned int len;
 
if (room)
@@ -2849,7 +2852,7 @@ static void virtnet_del_vqs(struct virtnet_info *vi)
  */
 static unsigned int mergeable_min_buf_len(struct virtnet_info *vi, struct virtqueue *vq)
 {
-   const unsigned int hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+   const unsigned int hdr_len = vi->hdr_len;
unsigned int rq_size = virtqueue_get_vring_size(vq);
	unsigned int packet_len = vi->big_packets ? IP_MAX_MTU : vi->dev->max_mtu;
unsigned int buf_len = hdr_len + ETH_HLEN + VLAN_HLEN + packet_len;
-- 
2.34.1



[PATCH 0/4] RSS support for VirtioNet.

2022-01-09 Thread Andrew Melnychenko
Virtio-net supports "hardware" RSS with a Toeplitz key.
It also allows receiving the calculated hash in the vheader,
which may be used with RPS.
Added ethtool callbacks to manipulate RSS.

Technically, hash calculation may be enabled only for the
SRC+DST and SRC+DST+PORTSRC+PORTDST hash flows.
Completely disabling hash calculation for TCP or UDP
would also disable hash calculation for IP.

RSS/RXHASH is disabled by default.

Changes since rfc:
* code refactored
* patches reformatted
* added feature validation

Andrew Melnychenko (4):
  drivers/net/virtio_net: Fixed padded vheader to use v1 with hash.
  drivers/net/virtio_net: Added basic RSS support.
  drivers/net/virtio_net: Added RSS hash report.
  drivers/net/virtio_net: Added RSS hash report control.

 drivers/net/virtio_net.c | 404 +--
 1 file changed, 390 insertions(+), 14 deletions(-)

-- 
2.34.1



Re: [mst-vhost:vhost 30/44] drivers/vdpa/mlx5/net/mlx5_vnet.c:1247:23: sparse: sparse: cast to restricted __le16

2022-01-09 Thread Michael S. Tsirkin
On Sat, Jan 08, 2022 at 10:48:34PM +0800, kernel test robot wrote:
> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git vhost
> head:   008842b2060c14544ff452483ffd2241d145c7b2
> commit: 7620d51af29aa1c5d32150db2ac4b6187ef8af3a [30/44] vdpa/mlx5: Support 
> configuring max data virtqueue
> config: powerpc-allmodconfig 
> (https://download.01.org/0day-ci/archive/20220108/202201082258.akrhnajx-...@intel.com/config)
> compiler: powerpc-linux-gcc (GCC) 11.2.0
> reproduce:
> wget 
> https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
> ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # apt-get install sparse
> # sparse version: v0.6.4-dirty
> # 
> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/commit/?id=7620d51af29aa1c5d32150db2ac4b6187ef8af3a
> git remote add mst-vhost 
> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git
> git fetch --no-tags mst-vhost vhost
> git checkout 7620d51af29aa1c5d32150db2ac4b6187ef8af3a
> # save the config file to linux build tree
> mkdir build_dir
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross C=1 
> CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=powerpc 
> SHELL=/bin/bash drivers/vdpa/mlx5/
> 
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot 
> 
> 
> sparse warnings: (new ones prefixed by >>)
> >> drivers/vdpa/mlx5/net/mlx5_vnet.c:1247:23: sparse: sparse: cast to 
> >> restricted __le16
> >> drivers/vdpa/mlx5/net/mlx5_vnet.c:1247:23: sparse: sparse: cast from 
> >> restricted __virtio16
> 
> vim +1247 drivers/vdpa/mlx5/net/mlx5_vnet.c

Eli? I sent a patch for this - ack?

>   1232
>   1233static int create_rqt(struct mlx5_vdpa_net *ndev)
>   1234{
>   1235__be32 *list;
>   1236int max_rqt;
>   1237void *rqtc;
>   1238int inlen;
>   1239void *in;
>   1240int i, j;
>   1241int err;
>   1242int num;
>   1243
>   1244if (!(ndev->mvdev.actual_features & 
> BIT_ULL(VIRTIO_NET_F_MQ)))
>   1245num = 1;
>   1246else
> > 1247num = 
> > le16_to_cpu(ndev->config.max_virtqueue_pairs);
>   1248
>   1249max_rqt = min_t(int, roundup_pow_of_two(num),
>   12501 << MLX5_CAP_GEN(ndev->mvdev.mdev, 
> log_max_rqt_size));
>   1251if (max_rqt < 1)
>   1252return -EOPNOTSUPP;
>   1253
>   1254inlen = MLX5_ST_SZ_BYTES(create_rqt_in) + max_rqt * 
> MLX5_ST_SZ_BYTES(rq_num);
>   1255in = kzalloc(inlen, GFP_KERNEL);
>   1256if (!in)
>   1257return -ENOMEM;
>   1258
>   1259MLX5_SET(create_rqt_in, in, uid, ndev->mvdev.res.uid);
>   1260rqtc = MLX5_ADDR_OF(create_rqt_in, in, rqt_context);
>   1261
>   1262MLX5_SET(rqtc, rqtc, list_q_type, 
> MLX5_RQTC_LIST_Q_TYPE_VIRTIO_NET_Q);
>   1263MLX5_SET(rqtc, rqtc, rqt_max_size, max_rqt);
>   1264list = MLX5_ADDR_OF(rqtc, rqtc, rq_num[0]);
>   1265for (i = 0, j = 0; i < max_rqt; i++, j += 2)
>   1266list[i] = cpu_to_be32(ndev->vqs[j % (2 * 
> num)].virtq_id);
>   1267
>   1268MLX5_SET(rqtc, rqtc, rqt_actual_size, max_rqt);
>   1269err = mlx5_vdpa_create_rqt(&ndev->mvdev, in, inlen, 
> &ndev->res.rqtn);
>   1270kfree(in);
>   1271if (err)
>   1272return err;
>   1273
>   1274return 0;
>   1275}
>   1276
> 
> ---
> 0-DAY CI Kernel Test Service, Intel Corporation
> https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org
