Re: [PATCH net-next] xsk: introduce xsk_dma_ops

2023-04-24 Thread Xuan Zhuo
On Mon, 24 Apr 2023 17:28:01 +0200, Alexander Lobakin 
 wrote:
> From: Jakub Kicinski 
> Date: Fri, 21 Apr 2023 06:50:59 -0700
>
> > On Fri, 21 Apr 2023 15:31:04 +0800 Xuan Zhuo wrote:
> >> I am not particularly familiar with dma-bufs. I want to know whether
> >> this mechanism can solve the problem of virtio-net.
> >>
> >> I saw this framework, which allows the driver to do something inside
> >> the dma-buf ops.
> >>
> >> If so, is it possible to propose a new patch based on dma-bufs?
> >
> > I haven't looked in detail, maybe Olek has? AFAIU you'd need to rework
>
> Oh no, not me. I suck at dma-bufs, tried to understand them several
> times with no progress :D My knowledge is limited to "ok, if it's
> DMA + userspace, then it's likely dma-buf" :smile_with_tear:
>
> > uAPI of XSK to allow user to pass in a dma-buf region rather than just
> > a user VA. So it may be a larger effort but architecturally it may be
> > the right solution.
> >
>
> I'm curious whether this could be done without tons of work. Switching
> Page Pool to dma_alloc_noncoherent() is simpler :D But, as I wrote
> above, we need to extend DMA API first to provide bulk allocations and
> NUMA-aware allocations.
> Can't we provide a shim for back-compat, i.e. if a program passes just a
> user VA, create a dma-buf in the kernel already?


Yes, I think so too. If that is the case, the workload should be much
smaller. Let me try it.

Thanks.


>
> Thanks,
> Olek
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v2 0/3] vhost_vdpa: better PACKED support

2023-04-24 Thread Shannon Nelson via Virtualization
While testing our vDPA driver with QEMU we found that vhost_vdpa was
missing some support for PACKED VQs.  Adding these helped us get further
in our testing.  The first patch makes sure that the vhost_vdpa vqs are
given the feature flags, as done in other vhost variants.  The second
and third patches use the feature flags to properly format and pass
set/get_vq requests.

v1:
 - included the wrap counter in the PACKED answers for VHOST_GET_VRING_BASE
 - added comments to vhost_virtqueue fields last_avail_idx and last_used_idx

v0:
   
https://lists.linuxfoundation.org/pipermail/virtualization/2023-April/066253.html
 - initial version

Shannon Nelson (3):
  vhost_vdpa: tell vqs about the negotiated
  vhost: support PACKED when setting-getting vring_base
  vhost_vdpa: support PACKED when setting-getting vring_base

 drivers/vhost/vdpa.c  | 34 ++
 drivers/vhost/vhost.c | 18 +-
 drivers/vhost/vhost.h |  8 ++--
 3 files changed, 49 insertions(+), 11 deletions(-)

-- 
2.17.1



[PATCH v2 1/3] vhost_vdpa: tell vqs about the negotiated

2023-04-24 Thread Shannon Nelson via Virtualization
As is done in the net, iscsi, and vsock vhost support, let the vdpa vqs
know about the features that have been negotiated.  This allows vhost
to more safely make decisions based on the features, such as when using
PACKED vs split queues.

Signed-off-by: Shannon Nelson 
Acked-by: Jason Wang 
---
 drivers/vhost/vdpa.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 7be9d9d8f01c..599b8cc238c7 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -385,7 +385,10 @@ static long vhost_vdpa_set_features(struct vhost_vdpa *v, u64 __user *featurep)
 {
struct vdpa_device *vdpa = v->vdpa;
const struct vdpa_config_ops *ops = vdpa->config;
+   struct vhost_dev *d = &v->vdev;
+   u64 actual_features;
u64 features;
+   int i;
 
/*
 * It's not allowed to change the features after they have
@@ -400,6 +403,16 @@ static long vhost_vdpa_set_features(struct vhost_vdpa *v, u64 __user *featurep)
if (vdpa_set_features(vdpa, features))
return -EINVAL;
 
+   /* let the vqs know what has been configured */
+   actual_features = ops->get_driver_features(vdpa);
+   for (i = 0; i < d->nvqs; ++i) {
+   struct vhost_virtqueue *vq = d->vqs[i];
+
+   mutex_lock(&vq->mutex);
+   vq->acked_features = actual_features;
+   mutex_unlock(&vq->mutex);
+   }
+
return 0;
 }
 
-- 
2.17.1



[PATCH v2 3/3] vhost_vdpa: support PACKED when setting-getting vring_base

2023-04-24 Thread Shannon Nelson via Virtualization
Use the right structs for PACKED or split vqs when setting and
getting the vring base.

Signed-off-by: Shannon Nelson 
---
 drivers/vhost/vdpa.c | 21 +
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 599b8cc238c7..fe392b67d5be 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -585,7 +585,14 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
if (r)
return r;
 
-   vq->last_avail_idx = vq_state.split.avail_index;
+   if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
+   vq->last_avail_idx = vq_state.packed.last_avail_idx |
+   (vq_state.packed.last_avail_counter << 15);
+   vq->last_used_idx = vq_state.packed.last_used_idx |
+   (vq_state.packed.last_used_counter << 15);
+   } else {
+   vq->last_avail_idx = vq_state.split.avail_index;
+   }
break;
}
 
@@ -603,9 +610,15 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
break;
 
case VHOST_SET_VRING_BASE:
-   vq_state.split.avail_index = vq->last_avail_idx;
-   if (ops->set_vq_state(vdpa, idx, &vq_state))
-   r = -EINVAL;
+   if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
+   vq_state.packed.last_avail_idx = vq->last_avail_idx & 0x7fff;
+   vq_state.packed.last_avail_counter = !!(vq->last_avail_idx & 0x8000);
+   vq_state.packed.last_used_idx = vq->last_used_idx & 0x7fff;
+   vq_state.packed.last_used_counter = !!(vq->last_used_idx & 0x8000);
+   } else {
+   vq_state.split.avail_index = vq->last_avail_idx;
+   }
+   r = ops->set_vq_state(vdpa, idx, &vq_state);
break;
 
case VHOST_SET_VRING_CALL:
-- 
2.17.1



[PATCH v2 2/3] vhost: support PACKED when setting-getting vring_base

2023-04-24 Thread Shannon Nelson via Virtualization
Use the right structs for PACKED or split vqs when setting and
getting the vring base.

Signed-off-by: Shannon Nelson 
---
 drivers/vhost/vhost.c | 18 +-
 drivers/vhost/vhost.h |  8 ++--
 2 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index f11bdbe4c2c5..f64efda48f21 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -1633,17 +1633,25 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
r = -EFAULT;
break;
}
-   if (s.num > 0xffff) {
-   r = -EINVAL;
-   break;
+   if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
+   vq->last_avail_idx = s.num & 0xffff;
+   vq->last_used_idx = (s.num >> 16) & 0xffff;
+   } else {
+   if (s.num > 0xffff) {
+   r = -EINVAL;
+   break;
+   }
+   vq->last_avail_idx = s.num;
}
-   vq->last_avail_idx = s.num;
/* Forget the cached index value. */
vq->avail_idx = vq->last_avail_idx;
break;
case VHOST_GET_VRING_BASE:
s.index = idx;
-   s.num = vq->last_avail_idx;
+   if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED))
+   s.num = (u32)vq->last_avail_idx | ((u32)vq->last_used_idx << 16);
+   else
+   s.num = vq->last_avail_idx;
if (copy_to_user(argp, &s, sizeof s))
r = -EFAULT;
break;
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 1647b750169c..6f73f29d5979 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -85,13 +85,17 @@ struct vhost_virtqueue {
/* The routine to call when the Guest pings us, or timeout. */
vhost_work_fn_t handle_kick;
 
-   /* Last available index we saw. */
+   /* Last available index we saw.
+* Values are limited to 0x7fff, and the high bit is used as
+* a wrap counter when using VIRTIO_F_RING_PACKED. */
u16 last_avail_idx;
 
/* Caches available index value from user. */
u16 avail_idx;
 
-   /* Last index we used. */
+   /* Last index we used.
+* Values are limited to 0x7fff, and the high bit is used as
+* a wrap counter when using VIRTIO_F_RING_PACKED. */
u16 last_used_idx;
 
/* Used flags */
-- 
2.17.1



Re: [PATCH 3/3] vhost_vdpa: support PACKED when setting-getting vring_base

2023-04-24 Thread Shannon Nelson via Virtualization

On 4/22/23 11:40 PM, Jason Wang wrote:

On Sat, Apr 22, 2023 at 3:57 AM Shannon Nelson  wrote:


Use the right structs for PACKED or split vqs when setting and
getting the vring base.

Signed-off-by: Shannon Nelson 
---
  drivers/vhost/vdpa.c | 19 +++
  1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 599b8cc238c7..2543ae9d5939 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -585,7 +585,12 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
 if (r)
 return r;

-   vq->last_avail_idx = vq_state.split.avail_index;
+   if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
+   vq->last_avail_idx = vq_state.packed.last_avail_idx;
+   vq->last_used_idx = vq_state.packed.last_used_idx;


I wonder if the compiler will guarantee a piggyback for the wrap
counter here? If not, should we explicitly pack the wrap counter?


Yes, good catch, I'll add the wrap counter.

sln

[GIT PULL] virtio,vhost,vdpa: features, fixes, cleanups

2023-04-24 Thread Michael S. Tsirkin
Most exciting stuff this time around has to do with performance.

The following changes since commit 6a8f57ae2eb07ab39a6f0ccad60c760743051026:

  Linux 6.3-rc7 (2023-04-16 15:23:53 -0700)

are available in the Git repository at:

  https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus

for you to fetch changes up to c82729e06644f4e087f5ff0f91b8fb15e03b8890:

  vhost_vdpa: fix unmap process in no-batch mode (2023-04-21 03:02:36 -0400)


virtio,vhost,vdpa: features, fixes, cleanups

reduction in interrupt rate in virtio
perf improvement for VDUSE
scalability for vhost-scsi
non power of 2 ring support for packed rings
better management for mlx5 vdpa
suspend for snet
VIRTIO_F_NOTIFICATION_DATA
shared backend with vdpa-sim-blk
user VA support in vdpa-sim
better struct packing for virtio

fixes, cleanups all over the place

Signed-off-by: Michael S. Tsirkin 


Albert Huang (1):
  virtio_ring: don't update event idx on get_buf

Alvaro Karsz (5):
  vdpa/snet: support getting and setting VQ state
  vdpa/snet: support the suspend vDPA callback
  virtio-vdpa: add VIRTIO_F_NOTIFICATION_DATA feature support
  vdpa/snet: implement kick_vq_with_data callback
  vdpa/snet: use likely/unlikely macros in hot functions

Christophe JAILLET (1):
  virtio: Reorder fields in 'struct virtqueue'

Cindy Lu (1):
  vhost_vdpa: fix unmap process in no-batch mode

Eli Cohen (3):
  vdpa/mlx5: Avoid losing link state updates
  vdpa/mlx5: Make VIRTIO_NET_F_MRG_RXBUF off by default
  vdpa/mlx5: Extend driver support for new features

Feng Liu (3):
  virtio_ring: Avoid using inline for small functions
  virtio_ring: Use const to annotate read-only pointer params
  virtio_ring: Allow non power of 2 sizes for packed virtqueue

Jacob Keller (1):
  vhost: use struct_size and size_add to compute flex array sizes

Mike Christie (5):
  vhost-scsi: Delay releasing our refcount on the tpg
  vhost-scsi: Drop device mutex use in vhost_scsi_do_plug
  vhost-scsi: Check for a cleared backend before queueing an event
  vhost-scsi: Drop vhost_scsi_mutex use in port callouts
  vhost-scsi: Reduce vhost_scsi_mutex use

Rong Tao (2):
  tools/virtio: virtio_test: Fix indentation
  tools/virtio: virtio_test -h,--help should return directly

Shunsuke Mie (2):
  virtio_ring: add a struct device forward declaration
  tools/virtio: fix build caused by virtio_ring changes

Simon Horman (3):
  vdpa: address kdoc warnings
  vringh: address kdoc warnings
  MAINTAINERS: add vringh.h to Virtio Core and Net Drivers

Stefano Garzarella (12):
  vringh: fix typos in the vringh_init_* documentation
  vdpa: add bind_mm/unbind_mm callbacks
  vhost-vdpa: use bind_mm/unbind_mm device callbacks
  vringh: replace kmap_atomic() with kmap_local_page()
  vringh: define the stride used for translation
  vringh: support VA with iotlb
  vdpa_sim: make devices agnostic for work management
  vdpa_sim: use kthread worker
  vdpa_sim: replace the spinlock with a mutex to protect the state
  vdpa_sim: add support for user VA
  vdpa_sim: move buffer allocation in the devices
  vdpa_sim_blk: support shared backend

Viktor Prutyanov (1):
  virtio: add VIRTIO_F_NOTIFICATION_DATA feature support

Xie Yongji (11):
  lib/group_cpus: Export group_cpus_evenly()
  vdpa: Add set/get_vq_affinity callbacks in vdpa_config_ops
  virtio-vdpa: Support interrupt affinity spreading mechanism
  vduse: Refactor allocation for vduse virtqueues
  vduse: Support set_vq_affinity callback
  vduse: Support get_vq_affinity callback
  vduse: Add sysfs interface for irq callback affinity
  vdpa: Add eventfd for the vdpa callback
  vduse: Signal vq trigger eventfd directly if possible
  vduse: Delay iova domain creation
  vduse: Support specifying bounce buffer size via sysfs

Xuan Zhuo (1):
  MAINTAINERS: make me a reviewer of VIRTIO CORE AND NET DRIVERS

 MAINTAINERS  |   2 +
 drivers/s390/virtio/virtio_ccw.c |  22 +-
 drivers/vdpa/mlx5/net/mlx5_vnet.c| 261 +-
 drivers/vdpa/solidrun/Makefile   |   1 +
 drivers/vdpa/solidrun/snet_ctrl.c| 330 
 drivers/vdpa/solidrun/snet_hwmon.c   |   2 +-
 drivers/vdpa/solidrun/snet_main.c| 146 ++--
 drivers/vdpa/solidrun/snet_vdpa.h|  20 +-
 drivers/vdpa/vdpa_sim/vdpa_sim.c | 166 +++---
 drivers/vdpa/vdpa_sim/vdpa_sim.h |  14 +-
 drivers/vdpa/vdpa_sim/vdpa_sim_blk.c |  93 ++--
 drivers/vdpa/vdpa_sim/vdpa_sim_net.c |  38 ++--
 drivers/vdpa/vdpa_user/vduse_dev.c   | 414 +--
 drivers/vhost/scsi.c | 102 +
 drivers/vhost/vdpa.c |  44 +++-
 

Re: [PATCH 2/3] vhost: support PACKED when setting-getting vring_base

2023-04-24 Thread Shannon Nelson via Virtualization

On 4/22/23 11:36 PM, Jason Wang wrote:

On Sat, Apr 22, 2023 at 3:57 AM Shannon Nelson  wrote:


Use the right structs for PACKED or split vqs when setting and
getting the vring base.

Signed-off-by: Shannon Nelson 
---
  drivers/vhost/vhost.c | 18 +-
  1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index f11bdbe4c2c5..f64efda48f21 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -1633,17 +1633,25 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
 r = -EFAULT;
 break;
 }
-   if (s.num > 0xffff) {
-   r = -EINVAL;
-   break;
+   if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
+   vq->last_avail_idx = s.num & 0xffff;
+   vq->last_used_idx = (s.num >> 16) & 0xffff;


I think we need to tweak the comment around last_avail_idx and last_used_idx:

 /* Last available index we saw. */
 u16 last_avail_idx;

 /* Last index we used. */
 u16 last_used_idx;

To describe that they contain wrap counters as well in the case of
packed virtqueue,


Sure, I can add to the comments that these counters are limited to 
0x7fff and the high bit is used as a wrap counter/flag.



or maybe it's time to rename them (since they are
invented for split virtqueue).


Should we change them to bitfields as in struct vdpa_vq_state_packed?
Or perhaps just add new fields for bool/u16/u8 last_avail_counter and 
last_used_counter?


That might be a later patch in order to also deal with whatever fallout 
happens from a new name.


sln


Re: [GIT PULL] fork: user workers & vhost

2023-04-24 Thread pr-tracker-bot
The pull request you sent on Fri, 21 Apr 2023 15:37:12 +0200:

> g...@gitolite.kernel.org:pub/scm/linux/kernel/git/brauner/linux 
> tags/v6.4/kernel.user_worker

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/3323ddce085cdb1c2c1bb7a88233023566a9

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


Re: [PATCH] can: virtio-can: cleanups

2023-04-24 Thread Michael S. Tsirkin
On Mon, Apr 24, 2023 at 09:47:58PM +0200, Marc Kleine-Budde wrote:
> Address the topics raised in
> 
> https://lore.kernel.org/20230424-footwear-daily-9339bd0ec428-...@pengutronix.de
> 
> Signed-off-by: Marc Kleine-Budde 

given base patch is rfc this should be too?

> ---
>  drivers/net/can/Makefile|  4 +--
>  drivers/net/can/virtio_can.c| 56 ++---
>  include/uapi/linux/virtio_can.h |  4 +--
>  3 files changed, 28 insertions(+), 36 deletions(-)
> 
> diff --git a/drivers/net/can/Makefile b/drivers/net/can/Makefile
> index e409f61d8e93..19314adaff59 100644
> --- a/drivers/net/can/Makefile
> +++ b/drivers/net/can/Makefile
> @@ -17,8 +17,8 @@ obj-$(CONFIG_CAN_AT91)  += at91_can.o
>  obj-$(CONFIG_CAN_BXCAN)  += bxcan.o
>  obj-$(CONFIG_CAN_CAN327) += can327.o
>  obj-$(CONFIG_CAN_CC770)  += cc770/
> -obj-$(CONFIG_CAN_C_CAN)  += c_can/
>  obj-$(CONFIG_CAN_CTUCANFD)   += ctucanfd/
> +obj-$(CONFIG_CAN_C_CAN)  += c_can/
>  obj-$(CONFIG_CAN_FLEXCAN)+= flexcan/
>  obj-$(CONFIG_CAN_GRCAN)  += grcan.o
>  obj-$(CONFIG_CAN_IFI_CANFD)  += ifi_canfd/
> @@ -30,7 +30,7 @@ obj-$(CONFIG_CAN_PEAK_PCIEFD)   += peak_canfd/
>  obj-$(CONFIG_CAN_SJA1000)+= sja1000/
>  obj-$(CONFIG_CAN_SUN4I)  += sun4i_can.o
>  obj-$(CONFIG_CAN_TI_HECC)+= ti_hecc.o
> -obj-$(CONFIG_CAN_XILINXCAN)  += xilinx_can.o
>  obj-$(CONFIG_CAN_VIRTIO_CAN) += virtio_can.o
> +obj-$(CONFIG_CAN_XILINXCAN)  += xilinx_can.o
>  
>  subdir-ccflags-$(CONFIG_CAN_DEBUG_DEVICES) += -DDEBUG
> diff --git a/drivers/net/can/virtio_can.c b/drivers/net/can/virtio_can.c
> index 23f9c1b6446d..c11a652613d0 100644
> --- a/drivers/net/can/virtio_can.c
> +++ b/drivers/net/can/virtio_can.c
> @@ -312,13 +312,12 @@ static netdev_tx_t virtio_can_start_xmit(struct sk_buff *skb,
>   struct scatterlist sg_in[1];
>   struct scatterlist *sgs[2];
>   unsigned long flags;
> - size_t len;
>   u32 can_flags;
>   int err;
>   netdev_tx_t xmit_ret = NETDEV_TX_OK;
>   const unsigned int hdr_size = offsetof(struct virtio_can_tx_out, sdu);
>  
> - if (can_dropped_invalid_skb(dev, skb))
> + if (can_dev_dropped_skb(dev, skb))
>   goto kick; /* No way to return NET_XMIT_DROP here */
>  
>   /* Virtio CAN does not support error message frames */
> @@ -338,27 +337,25 @@ static netdev_tx_t virtio_can_start_xmit(struct sk_buff *skb,
>  
>   can_tx_msg->tx_out.msg_type = cpu_to_le16(VIRTIO_CAN_TX);
>   can_flags = 0;
> - if (cf->can_id & CAN_EFF_FLAG)
> +
> + if (cf->can_id & CAN_EFF_FLAG) {
>   can_flags |= VIRTIO_CAN_FLAGS_EXTENDED;
> + can_tx_msg->tx_out.can_id = cpu_to_le32(cf->can_id & CAN_EFF_MASK);
> + } else {
> + can_tx_msg->tx_out.can_id = cpu_to_le32(cf->can_id & CAN_SFF_MASK);
> + }
>   if (cf->can_id & CAN_RTR_FLAG)
>   can_flags |= VIRTIO_CAN_FLAGS_RTR;
> + else
> + memcpy(can_tx_msg->tx_out.sdu, cf->data, cf->len);
>   if (can_is_canfd_skb(skb))
>   can_flags |= VIRTIO_CAN_FLAGS_FD;
> +
>   can_tx_msg->tx_out.flags = cpu_to_le32(can_flags);
> - can_tx_msg->tx_out.can_id = cpu_to_le32(cf->can_id & CAN_EFF_MASK);
> - len = cf->len;
> - can_tx_msg->tx_out.length = len;
> - if (len > sizeof(cf->data))
> - len = sizeof(cf->data);
> - if (len > sizeof(can_tx_msg->tx_out.sdu))
> - len = sizeof(can_tx_msg->tx_out.sdu);
> - if (!(can_flags & VIRTIO_CAN_FLAGS_RTR)) {
> - /* Copy if not a RTR frame. RTR frames have a DLC but no payload */
> - memcpy(can_tx_msg->tx_out.sdu, cf->data, len);
> - }
> + can_tx_msg->tx_out.length = cpu_to_le16(cf->len);
>  
>   /* Prepare sending of virtio message */
> - sg_init_one(&sg_out[0], &can_tx_msg->tx_out, hdr_size + len);
> + sg_init_one(&sg_out[0], &can_tx_msg->tx_out, hdr_size + cf->len);
>   sg_init_one(&sg_in[0], &can_tx_msg->tx_in, sizeof(can_tx_msg->tx_in));
>   sgs[0] = sg_out;
>   sgs[1] = sg_in;
> @@ -895,8 +892,8 @@ static int virtio_can_probe(struct virtio_device *vdev)
>   priv->tx_putidx_list =
>   kcalloc(echo_skb_max, sizeof(struct list_head), GFP_KERNEL);
>   if (!priv->tx_putidx_list) {
> - free_candev(dev);
> - return -ENOMEM;
> + err = -ENOMEM;
> + goto on_failure;
>   }
>  
>   INIT_LIST_HEAD(>tx_putidx_free);
> @@ -9

Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver

2023-04-24 Thread Gurchetan Singh
On Wed, Apr 19, 2023 at 2:22 PM Dmitry Osipenko
 wrote:
>
> Hello Gurchetan,
>
> On 4/18/23 02:17, Gurchetan Singh wrote:
> > On Sun, Apr 16, 2023 at 4:53 AM Dmitry Osipenko <
> > dmitry.osipe...@collabora.com> wrote:
> >
> >> We have multiple Vulkan context types that are awaiting for the addition
> >> of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
> >>
> >>  1. Venus context
> >>  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
> >>
> >> Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
> >> generic fencing implementation that we want to utilize.
> >>
> >> This patch adds initial sync object support. It creates the foundation
> >> for further fencing improvements. Later on we will want to extend the
> >> VirtIO-GPU
> >> fencing API with passing fence IDs to host for waiting, it will be a new
> >> additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU
> >> context
> >> drivers in works that require VirtIO-GPU to support sync objects UAPI.
> >>
> >> The patch is heavily inspired by the sync object UAPI implementation of the
> >> MSM driver.
> >>
> >
> > The changes seem good, but I would recommend getting a full end-to-end
> > solution (i.e, you've proxied the host fence with these changes and shared
> > with the host compositor) working first.  You'll never know what you'll
> > find after completing this exercise.  Or is that the plan already?
> >
> > Typically, you want to land the uAPI and virtio spec changes last.
> > Mesa/gfxstream/virglrenderer/crosvm all have the ability to test out
> > unstable uAPIs ...
>
> The proxied host fence isn't directly related to sync objects, though I
> prepared code such that it could be extended with a proxied fence later
> on, based on a prototype that was made some time ago.

Proxying the host fence is the novel bit.  If you have code that does
this, you should rebase/send that out (even as an RFC) so it's easier
to see how the pieces fit.

Right now, if you've only tested synchronization objects between the
same virtio-gpu context that skips the guest side wait, I think you
can already do that with the current uAPI (since ideally you'd wait on
the host side and can encode the sync resource in the command stream).

Also, try to come with a simple test (so we can meet requirements here
[a]) that showcases the new feature/capability.  An example would be
the virtio-intel native context sharing a fence with KMS or even
Wayland.

[a] 
https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements

>
> The proxied host fence shouldn't require UAPI changes, but only
> virtio-gpu proto extension. Normally, all in-fences belong to a job's
> context, and thus, waits are skipped by the guest kernel. Hence, fence
> proxying is a separate feature from sync objects, it can be added
> without sync objects.
>
> Sync objects are primarily wanted by native context drivers because Mesa
> relies on the presence of the sync object UAPI. It's one of the direct
> blockers for the Intel and AMDGPU drivers, both of which have been using
> this sync object UAPI for a few months and now want it to land upstream.
>
> --
> Best regards,
> Dmitry
>

Re: [EXT] [RFC PATCH 0/3] Introduce a PCIe endpoint virtio console

2023-04-24 Thread Shunsuke Mie
2023年4月19日(水) 1:42 Frank Li :
>
>
>
> > -Original Message-
> > From: Shunsuke Mie 
> > Sent: Tuesday, April 18, 2023 5:31 AM
> > To: Frank Li ; Lorenzo Pieralisi 
> > Cc: Krzysztof Wilczyński ; Manivannan Sadhasivam
> > ; Kishon Vijay Abraham I ; Bjorn
> > Helgaas ; Michael S. Tsirkin ;
> > Jason Wang ; Jon Mason ;
> > Randy Dunlap ; Ren Zhijie
> > ; linux-ker...@vger.kernel.org; linux-
> > p...@vger.kernel.org; virtualization@lists.linux-foundation.org
> > Subject: Re: [EXT] [RFC PATCH 0/3] Introduce a PCIe endpoint virtio console
> >
> > Caution: EXT Email
> >
> > On 2023/04/18 0:19, Frank Li wrote:
> > >
> > >> -Original Message-
> > >> From: Shunsuke Mie 
> > >> Sent: Sunday, April 16, 2023 9:12 PM
> > >> To: Frank Li ; Lorenzo Pieralisi 
> > >> 
> > >> Cc: Krzysztof Wilczyński ; Manivannan Sadhasivam
> > >> ; Kishon Vijay Abraham I ; Bjorn
> > >> Helgaas ; Michael S. Tsirkin ;
> > >> Jason Wang ; Jon Mason ;
> > >> Randy Dunlap ; Ren Zhijie
> > >> ; linux-ker...@vger.kernel.org; linux-
> > >> p...@vger.kernel.org; virtualization@lists.linux-foundation.org
> > >> Subject: Re: [EXT] [RFC PATCH 0/3] Introduce a PCIe endpoint virtio
> > console
> > >>
> > >> Caution: EXT Email
> > >>
> > >> On 2023/04/14 23:39, Frank Li wrote:
> >  -Original Message-
> >  From: Shunsuke Mie 
> >  Sent: Friday, April 14, 2023 7:39 AM
> >  To: Lorenzo Pieralisi 
> >  Cc: Krzysztof Wilczyński ; Manivannan Sadhasivam
> >  ; Kishon Vijay Abraham I ;
> > Bjorn
> >  Helgaas ; Michael S. Tsirkin
> > ;
> >  Jason Wang ; Shunsuke Mie
> > ;
> >  Frank Li ; Jon Mason ; Randy
> >  Dunlap ; Ren Zhijie
> > ;
> >  linux-ker...@vger.kernel.org; linux-...@vger.kernel.org;
> >  virtualization@lists.linux-foundation.org
> >  Subject: [EXT] [RFC PATCH 0/3] Introduce a PCIe endpoint virtio console
> > 
> >  Caution: EXT Email
> > 
> >  PCIe endpoint framework provides APIs to implement PCIe endpoint
> >  function.
> >  This framework allows defining various PCIe endpoint function
> >  behaviors in software. This patch extends the framework for virtio
> >  pci devices. Virtio is defined for communication between a guest on
> >  a virtual machine and the host side. An advantage of virtio is the
> >  efficiency of data transfer and the conciseness of implementing a
> >  device in software. It can also be applied to a PCIe endpoint
> >  function.
> > 
> >  We designed and implemented a PCIe EP virtio console function driver
> >  using the extended PCIe endpoint framework for virtio. It lets the
> >  host and endpoint communicate over virtio as a console.
> > 
> >  The architecture of the function driver is as follows:
> > 
> > 
> >     [ASCII architecture diagram, garbled in the archive: on the PCIe
> >      Root side, a virtio console driver sits on the virtio bus over
> >      the PCI bus; on the PCIe Endpoint side, the pci ep virtio console
> >      driver exposes a virtio console device over the pci ep bus, with
> >      the two sides communicating across the PCIe link.]
> > 
> >       PCIe Root  PCIe Endpoint
> > 
> > >>> [Frank Li] Some basic question,
> > >>> I see you call register_virtio_device at epf_vcon_setup_vdev,
> > >>> Why call it as virtio console?  I suppose it should be virtiobus 
> > >>> directly?
> > >> I'm sorry, I didn't understand your question. What do you mean by
> > >> "the virtiobus directly"?
> > > I went through your code again.  I think I understand why you need
> > > pci-epf-vcon.c.
> > > Actually, what I mean is something like virtio_mmio_probe.
> > >
> > > vm_dev->vdev.id.device = readl(vm_dev->base +
> > VIRTIO_MMIO_DEVICE_ID);
> > > vm_dev->vdev.id.vendor = readl(vm_dev->base +
> > VIRTIO_MMIO_VENDOR_ID);
> > >
> > > I am not sure whether VIRTIO_MMIO_VENDOR_ID and VIRTIO_MMIO_DEVICE_ID
> > > reuse PCI's vendor ID and device ID.  If yes, you can get such
> > > information directly from the epf.  If not, a custom field can be
> > > added in the epf driver.
> > >
> > > So you needn't write pci-epf-vcon and pci-epf-vnet.
> > >
> > > Of course it would be wonderful to use virtio_mmio_probe directly by
> > > dynamically creating platform devices.  It may have some difficulty
> > > because of the PCI memory map requirement.
> > I think that some fields are shared between the vdev and epf device.
> > However, we need to implement device emulation 

Re: [PATCH 0/5] vDPA/ifcvf: implement immediate initialization mechanism

2023-04-24 Thread Michael S. Tsirkin
On Mon, Apr 24, 2023 at 01:20:12PM +0800, Jason Wang wrote:
> On Mon, Apr 24, 2023 at 12:53 PM Michael S. Tsirkin  wrote:
> >
> > On Mon, Apr 24, 2023 at 11:50:20AM +0800, Jason Wang wrote:
> > > On Thu, Apr 20, 2023 at 5:17 PM Zhu, Lingshan  
> > > wrote:
> > > >
> > > >
> > > >
> > > > On 4/3/2023 6:10 PM, Zhu, Lingshan wrote:
> > > > >
> > > > >
> > > > > On 4/3/2023 1:28 PM, Jason Wang wrote:
> > > > >> On Fri, Mar 31, 2023 at 8:49 PM Zhu Lingshan 
> > > > >> wrote:
> > > > >>> Formerly, the ifcvf driver implemented a lazy-initialization
> > > > >>> mechanism for the virtqueues and other config space contents:
> > > > >>> it would store all configurations passed down from userspace,
> > > > >>> then load them into the device config space upon DRIVER_OK.
> > > > >>>
> > > > >>> This cannot serve live migration, so this series implements an
> > > > >>> immediate initialization mechanism: rather than the former
> > > > >>> store-load process, the virtio operations like vq ops take
> > > > >>> immediate action by accessing the virtio registers.
> > > > >> Is there any chance that ifcvf can use virtio_pci_modern_dev library?
> > > > >>
> > > > >> Then we don't need to duplicate the codes.
> > > > >>
> > > > >> Note that pds_vdpa will be the second user for virtio_pci_modern_dev
> > > > >> library (and the first vDPA parent to use that library).
> > > > > Yes I agree this library can help a lot for a standard virtio pci 
> > > > > device.
> > > > > But this change would be huge; it would require changing nearly
> > > > > every line of the driver. For example, the current driver functions
> > > > > work on the adapter and ifcvf_hw; if we want to reuse the lib, we
> > > > > need the driver to work on struct virtio_pci_modern_device.
> > > > > We would almost need to rewrite the driver.
> > > > >
> > > > > Can we plan this huge change in following series?
> > > > ping
> > >
> > > Will go through this this week.
> > >
> > > Thanks
> >
> > why do you expect it to go through, you didn't ack?
> 
> I meant I will have a look at it this week,
> 
> (Google told me "go through" meant "to look at or examine something 
> carefully")
> 
> Thanks

Oh, I misread what you wrote. Indeed. For some reason I read "this will
go through this week". That would have been different.


> >
> > > > >
> > > > > Thanks,
> > > > > Zhu Lingshan
> > > > >>
> > > > >> Thanks
> > > > >>
> > > > >>> This series also implements IRQ synchronization in the reset
> > > > >>> routine
> > > > >>>
> > > > >>> Zhu Lingshan (5):
> > > > >>>virt queue ops take immediate actions
> > > > >>>get_driver_features from virito registers
> > > > >>>retire ifcvf_start_datapath and ifcvf_add_status
> > > > >>>synchronize irqs in the reset routine
> > > > >>>a vendor driver should not set _CONFIG_S_FAILED
> > > > >>>
> > > > >>>   drivers/vdpa/ifcvf/ifcvf_base.c | 162
> > > > >>> +++-
> > > > >>>   drivers/vdpa/ifcvf/ifcvf_base.h |  16 ++--
> > > > >>>   drivers/vdpa/ifcvf/ifcvf_main.c |  97 ---
> > > > >>>   3 files changed, 122 insertions(+), 153 deletions(-)
> > > > >>>
> > > > >>> --
> > > > >>> 2.39.1
> > > > >>>
> > > > >
> > > >
> >

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

Re: [PATCH 0/5] vDPA/ifcvf: implement immediate initialization mechanism

2023-04-24 Thread Zhu, Lingshan




On 4/24/2023 12:51 PM, Michael S. Tsirkin wrote:

On Sat, Apr 01, 2023 at 04:48:49AM +0800, Zhu Lingshan wrote:

Formerly, the ifcvf driver implemented a lazy-initialization mechanism
for the virtqueues and other config space contents: it would store all
configurations passed down from userspace, then load them into the
device config space upon DRIVER_OK.

This cannot serve live migration, so this series implements an
immediate initialization mechanism: rather than the former store-load
process, virtio operations like the vq ops take immediate actions by
accessing the virtio registers.

This series also implements irq synchronization in the reset routine.
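
The contrast between the two mechanisms can be sketched in plain C. This is a simplified, hypothetical register model for illustration only, not the actual ifcvf code; all names here are made up:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified per-vq configuration for illustration. */
struct vq_cfg {
	uint64_t desc_addr;
	uint16_t num;
};

/* Lazy mechanism: ops only cache the config; it reaches the device
 * registers in one batch when DRIVER_OK is set. */
struct lazy_dev {
	struct vq_cfg cached;	/* stored configuration */
	struct vq_cfg hw;	/* what the device actually sees */
};

static void lazy_set_vq_num(struct lazy_dev *d, uint16_t num)
{
	d->cached.num = num;	/* no register access here */
}

static void lazy_driver_ok(struct lazy_dev *d)
{
	d->hw = d->cached;	/* load everything upon DRIVER_OK */
}

/* Immediate mechanism: every op writes the registers right away, so
 * the device state always matches what userspace has configured. */
struct imm_dev {
	struct vq_cfg hw;
};

static void imm_set_vq_num(struct imm_dev *d, uint16_t num)
{
	d->hw.num = num;	/* immediate register write */
}
```

Under the lazy scheme, anything that inspects device state before DRIVER_OK (as a live-migration flow may need to) observes stale registers; the immediate scheme closes that window.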


Please, prefix each patch subject with vDPA/ifcvf:

I will fix this in V2. Thanks




Zhu Lingshan (5):
   virt queue ops take immediate actions
   get_driver_features from virito registers
   retire ifcvf_start_datapath and ifcvf_add_status
   synchronize irqs in the reset routine
   a vendor driver should not set _CONFIG_S_FAILED

  drivers/vdpa/ifcvf/ifcvf_base.c | 162 +++-
  drivers/vdpa/ifcvf/ifcvf_base.h |  16 ++--
  drivers/vdpa/ifcvf/ifcvf_main.c |  97 ---
  3 files changed, 122 insertions(+), 153 deletions(-)

--
2.39.1




Re: [PATCH 2/5] get_driver_features from virito registers

2023-04-24 Thread Zhu, Lingshan




On 4/24/2023 12:50 PM, Michael S. Tsirkin wrote:

subj typo: virtio

will fix in V2, thanks!


On Sat, Apr 01, 2023 at 04:48:51AM +0800, Zhu Lingshan wrote:

This commit implements a new function, ifcvf_get_driver_features(),
which reads driver_features from the virtio registers.

To be less ambiguous, ifcvf_set_features() is renamed to
ifcvf_set_driver_features(), and ifcvf_get_features() is renamed to
ifcvf_get_dev_features(), which returns the provisioned vDPA device
features.
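
The two-step read the new helper performs can be illustrated with a small user-space model of the 32-bit feature window (hypothetical names; the real code uses vp_ioread32()/vp_iowrite32() on the mapped common config):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the common-cfg feature window: a 32-bit
 * select register picks which half of the 64-bit feature word the
 * 32-bit data register exposes (0 = low half, 1 = high half). */
struct feature_window {
	uint64_t features;	/* full 64-bit value held by the device */
	uint32_t select;	/* feature_select register */
};

static void write_select(struct feature_window *w, uint32_t sel)
{
	w->select = sel;
}

static uint32_t read_window(const struct feature_window *w)
{
	return (uint32_t)(w->features >> (w->select ? 32 : 0));
}

/* Same two-step pattern as ifcvf_get_driver_features(): select
 * half 0 and read the low 32 bits, select half 1 and read the
 * high 32 bits, then recombine into one 64-bit value. */
static uint64_t read_features64(struct feature_window *w)
{
	uint32_t lo, hi;

	write_select(w, 0);
	lo = read_window(w);
	write_select(w, 1);
	hi = read_window(w);
	return ((uint64_t)hi << 32) | lo;
}
```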

Signed-off-by: Zhu Lingshan 
---
  drivers/vdpa/ifcvf/ifcvf_base.c | 38 +
  drivers/vdpa/ifcvf/ifcvf_base.h |  5 +++--
  drivers/vdpa/ifcvf/ifcvf_main.c |  9 +---
  3 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c
index 6c5650f73007..546e923bcd16 100644
--- a/drivers/vdpa/ifcvf/ifcvf_base.c
+++ b/drivers/vdpa/ifcvf/ifcvf_base.c
@@ -204,11 +204,29 @@ u64 ifcvf_get_hw_features(struct ifcvf_hw *hw)
return features;
  }
  
-u64 ifcvf_get_features(struct ifcvf_hw *hw)
+/* return provisioned vDPA dev features */
+u64 ifcvf_get_dev_features(struct ifcvf_hw *hw)
  {
return hw->dev_features;
  }
  
+u64 ifcvf_get_driver_features(struct ifcvf_hw *hw)
+{
+   struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
+   u32 features_lo, features_hi;
+   u64 features;
+
+   vp_iowrite32(0, &cfg->device_feature_select);
+   features_lo = vp_ioread32(&cfg->guest_feature);
+
+   vp_iowrite32(1, &cfg->device_feature_select);
+   features_hi = vp_ioread32(&cfg->guest_feature);
+
+   features = ((u64)features_hi << 32) | features_lo;
+
+   return features;
+}
+
  int ifcvf_verify_min_features(struct ifcvf_hw *hw, u64 features)
  {
if (!(features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM)) && features) {
@@ -275,7 +293,7 @@ void ifcvf_write_dev_config(struct ifcvf_hw *hw, u64 offset,
vp_iowrite8(*p++, hw->dev_cfg + offset + i);
  }
  
-static void ifcvf_set_features(struct ifcvf_hw *hw, u64 features)
+void ifcvf_set_driver_features(struct ifcvf_hw *hw, u64 features)
  {
struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
  
@@ -286,19 +304,6 @@ static void ifcvf_set_features(struct ifcvf_hw *hw, u64 features)

 vp_iowrite32(features >> 32, &cfg->guest_feature);
  }
  
-static int ifcvf_config_features(struct ifcvf_hw *hw)
-{
-   ifcvf_set_features(hw, hw->req_features);
-   ifcvf_add_status(hw, VIRTIO_CONFIG_S_FEATURES_OK);
-
-   if (!(ifcvf_get_status(hw) & VIRTIO_CONFIG_S_FEATURES_OK)) {
-   IFCVF_ERR(hw->pdev, "Failed to set FEATURES_OK status\n");
-   return -EIO;
-   }
-
-   return 0;
-}
-
  u16 ifcvf_get_vq_state(struct ifcvf_hw *hw, u16 qid)
  {
struct ifcvf_lm_cfg __iomem *ifcvf_lm;
@@ -387,9 +392,6 @@ int ifcvf_start_hw(struct ifcvf_hw *hw)
ifcvf_add_status(hw, VIRTIO_CONFIG_S_ACKNOWLEDGE);
ifcvf_add_status(hw, VIRTIO_CONFIG_S_DRIVER);
  
-	if (ifcvf_config_features(hw) < 0)
-   return -EINVAL;
-
ifcvf_add_status(hw, VIRTIO_CONFIG_S_DRIVER_OK);
  
  	return 0;

diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h
index d545a9411143..cb19196c3ece 100644
--- a/drivers/vdpa/ifcvf/ifcvf_base.h
+++ b/drivers/vdpa/ifcvf/ifcvf_base.h
@@ -69,7 +69,6 @@ struct ifcvf_hw {
phys_addr_t notify_base_pa;
u32 notify_off_multiplier;
u32 dev_type;
-   u64 req_features;
u64 hw_features;
/* provisioned device features */
u64 dev_features;
@@ -122,7 +121,7 @@ u8 ifcvf_get_status(struct ifcvf_hw *hw);
  void ifcvf_set_status(struct ifcvf_hw *hw, u8 status);
  void io_write64_twopart(u64 val, u32 *lo, u32 *hi);
  void ifcvf_reset(struct ifcvf_hw *hw);
-u64 ifcvf_get_features(struct ifcvf_hw *hw);
+u64 ifcvf_get_dev_features(struct ifcvf_hw *hw);
  u64 ifcvf_get_hw_features(struct ifcvf_hw *hw);
  int ifcvf_verify_min_features(struct ifcvf_hw *hw, u64 features);
  u16 ifcvf_get_vq_state(struct ifcvf_hw *hw, u16 qid);
@@ -137,4 +136,6 @@ int ifcvf_set_vq_address(struct ifcvf_hw *hw, u16 qid, u64 desc_area,
 u64 driver_area, u64 device_area);
  bool ifcvf_get_vq_ready(struct ifcvf_hw *hw, u16 qid);
  void ifcvf_set_vq_ready(struct ifcvf_hw *hw, u16 qid, bool ready);
+void ifcvf_set_driver_features(struct ifcvf_hw *hw, u64 features);
+u64 ifcvf_get_driver_features(struct ifcvf_hw *hw);
  #endif /* _IFCVF_H_ */
diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index 1357c67014ab..4588484bd53d 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -410,7 +410,7 @@ static u64 ifcvf_vdpa_get_device_features(struct vdpa_device *vdpa_dev)
u64 features;
  
  	if (type == VIRTIO_ID_NET || type == VIRTIO_ID_BLOCK)
-   features = ifcvf_get_features(vf);
+   features = ifcvf_get_dev_features(vf);
else {