Re: [PATCH v3] vringh: IOMEM support

2023-04-27 Thread Shunsuke Mie
Hi Simon-san,

On Thu, Apr 27, 2023 at 4:21, Simon Horman wrote:
>
> On Tue, Apr 25, 2023 at 07:22:50PM +0900, Shunsuke Mie wrote:
> > Introduce a new memory accessor for vringh. It is able to use vringh to
> > virtio rings located on io-memory region.
> >
> > Signed-off-by: Shunsuke Mie 
>
> ...
>
> Hi Mie-san,
>
> thanks for your patch.
> One small nit from me below.
>
> > diff --git a/include/linux/vringh.h b/include/linux/vringh.h
> > index c3a8117dabe8..c03d045f7f3f 100644
> > --- a/include/linux/vringh.h
> > +++ b/include/linux/vringh.h
> > @@ -330,4 +330,37 @@ int vringh_need_notify_iotlb(struct vringh *vrh);
> >
> >  #endif /* CONFIG_VHOST_IOTLB */
> >
> > +#if IS_REACHABLE(CONFIG_VHOST_RING_IOMEM)
> > +
> > +int vringh_init_iomem(struct vringh *vrh, u64 features,
> > +   unsigned int num, bool weak_barriers,
> > +   struct vring_desc *desc,
> > +   struct vring_avail *avail,
> > +   struct vring_used *used);
> > +
> > +
>
> nit: one blank line is enough.
This should not have been included in the patch. Thanks.
> > +int vringh_getdesc_iomem(struct vringh *vrh,
> > +  struct vringh_kiov *riov,
> > +  struct vringh_kiov *wiov,
> > +  u16 *head,
> > +  gfp_t gfp);
>
> ...

Re: [PATCH v3] vringh: IOMEM support

2023-04-27 Thread Shunsuke Mie
On Wed, Apr 26, 2023 at 15:10, Jason Wang wrote:
>
> On Tue, Apr 25, 2023 at 6:23 PM Shunsuke Mie  wrote:
> >
> > Introduce a new memory accessor for vringh. It is able to use vringh to
> > virtio rings located on io-memory region.
>
> Is there a user for this? It would be better if you can describe the
> use cases for this. Maybe you can post the user or at least a link to
> the git as a reference.
This is for the following patch.
https://lore.kernel.org/virtualization/20230427175244.GA261197@bhelgaas/T/#m30a258509caca2413a21f9a9ce0f5fd31d3bd006
I'd like to add a description of how this feature will be used in the
next version.
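
For anyone following along, a minimal usage sketch of the two accessors quoted in this patch, as they might be called from an endpoint function driver once the host's ring has been mapped; everything outside the two vringh_*_iomem() calls (names, ring size, error handling) is illustrative, not from the patch:

#include <linux/vringh.h>

/* Hedged sketch: drive a host-resident (io-memory) vring with the new
 * accessors. desc/avail/used are assumed to point into a region that has
 * already been mapped from the host (e.g. via pci_epc_map_addr()), and the
 * return-value convention of vringh_getdesc_iomem() is assumed to mirror
 * vringh_getdesc_kern(): 0 if the ring is empty, 1 if a descriptor was
 * fetched, negative on error.
 */
static int example_poll_ring(struct vringh *vrh, struct vringh_kiov *riov,
                             struct vringh_kiov *wiov,
                             struct vring_desc *desc,
                             struct vring_avail *avail,
                             struct vring_used *used)
{
        u16 head;
        int err;

        err = vringh_init_iomem(vrh, 0 /* features */, 256, false,
                                desc, avail, used);
        if (err)
                return err;

        err = vringh_getdesc_iomem(vrh, riov, wiov, &head, GFP_KERNEL);
        if (err <= 0)
                return err;

        /* ... copy the data described by riov/wiov, then complete ... */
        return 0;
}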
> >
> > Signed-off-by: Shunsuke Mie 
> > ---
> >
> > Changes from v2: 
> > https://lore.kernel.org/virtualization/20230202090934.549556-1-...@igel.co.jp/
> > - Focus on an adding io memory APIs
> > Remove vringh API unification and some fixes.
> > - Rebase on next-20230414
> >
> >  drivers/vhost/Kconfig  |   6 ++
> >  drivers/vhost/vringh.c | 129 +
> >  include/linux/vringh.h |  33 +++
> >  3 files changed, 168 insertions(+)
> >
> > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > index b455d9ab6f3d..4b0dbb4a8ab3 100644
> > --- a/drivers/vhost/Kconfig
> > +++ b/drivers/vhost/Kconfig
> > @@ -6,6 +6,12 @@ config VHOST_IOTLB
> >   This option is selected by any driver which needs to support
> >   an IOMMU in software.
> >
> > +config VHOST_RING_IOMEM
> > +   tristate
> > +   select VHOST_IOMEM
> > +   help
> > + This option enables vringh APIs to supports io memory space.
>
> There's no specific Kconfig for all the existing accessors. Any reason
> I/O memory is special or do you care about the size of the module?
I followed the IOTLB accessor that is used for vhost and vringh. However, this
code has little effect on the code size and dependencies if included, so I
would like to remove the Kconfig.
> > +
> >  config VHOST_RING
> > tristate
> > select VHOST_IOTLB
> > diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
> > index 955d938eb663..ce5a88eecc05 100644
> > --- a/drivers/vhost/vringh.c
> > +++ b/drivers/vhost/vringh.c
> > @@ -1604,4 +1604,133 @@ EXPORT_SYMBOL(vringh_need_notify_iotlb);
> >
> >  #endif
> >
>
> [...]
>
> >
> > base-commit: d3f2cd24819158bb70701c3549e586f9df9cee67
> > prerequisite-patch-id: 760abbe8c981c52ccc421b8139e8999ab71619aa
> > prerequisite-patch-id: 99d8679ab4569545c8af401e84142c66312e953e
> > prerequisite-patch-id: aca81516aba75b58c8422d37c2dc7db2f61ffe92
> > prerequisite-patch-id: 3d76136200c4e55ba2c41681325f242859dd6dbd
> > prerequisite-patch-id: 47a994feb68d95412d81b0fd1fa27bc7ba05ae18
> > prerequisite-patch-id: a2f7fc3f35358f70b6dad4c919ce293b10295c4f
> > prerequisite-patch-id: 70e2ee32b945be96a0388f0ff564651ac9335220
> > prerequisite-patch-id: 2023690f9c47017b56d7f036332a5ca3ece6bde8
> > prerequisite-patch-id: 211e113fec6c450d13fbdb437ecfad67dec0a157
> > prerequisite-patch-id: f2bcd3168933886e4cd4c39e47446d1bd7cb2691
> > prerequisite-patch-id: 37b131560808733a0b8878e85a3d2a46d6ab02ca
> > prerequisite-patch-id: 79b0219a715cb5ace227d55666d62fdb2dcc6ffe
> > prerequisite-patch-id: 30f1740cd48a19aa1c3c93e625c740cae2845478
> > prerequisite-patch-id: 31989e4a521f2fc6f68c4ccdb6960035e87666a7
> > prerequisite-patch-id: 3948bb3e0c045e206a714d17bab16c94775d
> > prerequisite-patch-id: cf28e0115b9111bcb77aa9c710d98b2be93c7e89
> > prerequisite-patch-id: ebf2349c0ae1296663854eee2da0b43fe8972f9b
> > prerequisite-patch-id: fc570921d885a2a6000800b4022321e63f1650a5
> > prerequisite-patch-id: 1fd5219fef17c2bf2d76000207b25aae58c368f3
> > prerequisite-patch-id: 34e5f078202762fe69df471e97b51b1341cbdfa9
> > prerequisite-patch-id: 7fa5151b9e0488b48c2b9d1219152cfb047d6586
> > prerequisite-patch-id: 33cca272767af04ae9abe7af2f6cbb9972cc0b77
> > prerequisite-patch-id: bb1a6befc899dd97bcd946c2d76ce73675a1fa45
> > prerequisite-patch-id: 10be04dd92fa451d13676e91d9094b63cd7fbcf8
> > prerequisite-patch-id: 87b86eb4ce9501bba9c04ec81094ac9202392431
> > prerequisite-patch-id: a5ced28762bf6bd6419dae0e4413d02ccafd72c2
> > prerequisite-patch-id: 2db4c9603e00d69bb0184dabcc319e7f74f30305
> > prerequisite-patch-id: 41933f9d53e5e9e02efd6157b68ee7d92b10cfa2
> > prerequisite-patch-id: df3295b4cdde3a45eaf4c40047179698a4224d05
> > prerequisite-patch-id: 9e2fca9ab0ba2b935daa96f1745ff4c909792231
> > prerequisite-patch-id: 8948378099ba4d61e10a87e617d69ed2fc4104ae
> > prerequisite-patch-id: 5e7466f3f0d74880d1a574a1bd91b12091dcf3f5
> > prerequisite-patch-id: 902899e1cd53b7fcc7971f630aed103830fc3e3d
> > prerequisite-patch-id: 42126b180500f9ff123db78748972c6ece18ac57
> > prerequisite-patch-id: 5236a03ef574074f3c1009a52612051862b31eff
> > prerequisite-patch-id: adae1aa80df65bd02a9e3f4db490cf801c1c6119
> > prerequisite-patch-id: 22806fcabb973ee5f04ee6212db6161aab5bcbfc
> > prerequisite-patch-id: 6eb14cfdc2cf31e90556f6afe7361427a332e8dc
>
> These seem meaningless?
I'm sorry, that didn't make sense. I'll remove them.

Re: [PATCH net-next v4 12/15] virtio_net: small: avoid double counting in xdp scenarios

2023-04-27 Thread Xuan Zhuo
On Thu, 27 Apr 2023 21:08:25 -0400, "Michael S. Tsirkin"  
wrote:
> On Thu, Apr 27, 2023 at 11:05:31AM +0800, Xuan Zhuo wrote:
> > Avoid the problem that some variables(headroom and so on) will repeat
> > the calculation when process xdp.
> >
> > Signed-off-by: Xuan Zhuo 
> > Acked-by: Jason Wang 
>
>
> this is "code duplication" not "double counting".

Will fix.

Thanks.


>
>
> > ---
> >  drivers/net/virtio_net.c | 12 
> >  1 file changed, 8 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index b8ec642899c9..f832ab8a3e6e 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -1027,11 +1027,10 @@ static struct sk_buff *receive_small(struct 
> > net_device *dev,
> > struct sk_buff *skb;
> > struct bpf_prog *xdp_prog;
> > unsigned int xdp_headroom = (unsigned long)ctx;
> > -   unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom;
> > -   unsigned int headroom = vi->hdr_len + header_offset;
> > -   unsigned int buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
> > - SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> > struct page *page = virt_to_head_page(buf);
> > +   unsigned int header_offset;
> > +   unsigned int headroom;
> > +   unsigned int buflen;
> >
> > len -= vi->hdr_len;
> > stats->bytes += len;
> > @@ -1059,6 +1058,11 @@ static struct sk_buff *receive_small(struct 
> > net_device *dev,
> > rcu_read_unlock();
> >
> >  skip_xdp:
> > +   header_offset = VIRTNET_RX_PAD + xdp_headroom;
> > +   headroom = vi->hdr_len + header_offset;
> > +   buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
> > +   SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> > +
> > skb = build_skb(buf, buflen);
> > if (!skb)
> > goto err;
> > --
> > 2.32.0.3.g01195cf9f
>


Re: [PATCH v2] virtio_net: suppress cpu stall when free_unused_bufs

2023-04-27 Thread Michael S. Tsirkin
On Thu, Apr 27, 2023 at 06:46:18PM +0800, Wenliang Wang wrote:
> For multi-queue and large ring-size use case, the following error
> occurred when free_unused_bufs:
> rcu: INFO: rcu_sched self-detected stall on CPU.
> 
> Signed-off-by: Wenliang Wang 

pls send vN+1 as a new thread not as a reply in existing thread of vN.

> ---
> v2:
> -add need_resched check.
> -apply same logic to sq.
> ---
>  drivers/net/virtio_net.c | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index ea1bd4bb326d..573558b69a60 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -3559,12 +3559,16 @@ static void free_unused_bufs(struct virtnet_info *vi)
>   struct virtqueue *vq = vi->sq[i].vq;
>   while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
>   virtnet_sq_free_unused_buf(vq, buf);
> + if (need_resched())
> + schedule();
>   }
>  
>   for (i = 0; i < vi->max_queue_pairs; i++) {
>   struct virtqueue *vq = vi->rq[i].vq;
>   while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
>   virtnet_rq_free_unused_buf(vq, buf);
> + if (need_resched())
> + schedule();
>   }
>  }
>  
> -- 
> 2.20.1



Re: [PATCH net-next v4 12/15] virtio_net: small: avoid double counting in xdp scenarios

2023-04-27 Thread Michael S. Tsirkin
On Thu, Apr 27, 2023 at 11:05:31AM +0800, Xuan Zhuo wrote:
> Avoid the problem that some variables(headroom and so on) will repeat
> the calculation when process xdp.
> 
> Signed-off-by: Xuan Zhuo 
> Acked-by: Jason Wang 


this is "code duplication" not "double counting".


> ---
>  drivers/net/virtio_net.c | 12 
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index b8ec642899c9..f832ab8a3e6e 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1027,11 +1027,10 @@ static struct sk_buff *receive_small(struct 
> net_device *dev,
>   struct sk_buff *skb;
>   struct bpf_prog *xdp_prog;
>   unsigned int xdp_headroom = (unsigned long)ctx;
> - unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom;
> - unsigned int headroom = vi->hdr_len + header_offset;
> - unsigned int buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
> -   SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
>   struct page *page = virt_to_head_page(buf);
> + unsigned int header_offset;
> + unsigned int headroom;
> + unsigned int buflen;
>  
>   len -= vi->hdr_len;
>   stats->bytes += len;
> @@ -1059,6 +1058,11 @@ static struct sk_buff *receive_small(struct net_device 
> *dev,
>   rcu_read_unlock();
>  
>  skip_xdp:
> + header_offset = VIRTNET_RX_PAD + xdp_headroom;
> + headroom = vi->hdr_len + header_offset;
> + buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
> + SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> +
>   skb = build_skb(buf, buflen);
>   if (!skb)
>   goto err;
> -- 
> 2.32.0.3.g01195cf9f



Re: [GIT PULL] virtio,vhost,vdpa: features, fixes, cleanups

2023-04-27 Thread pr-tracker-bot
The pull request you sent on Mon, 24 Apr 2023 17:48:42 -0400:

> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/8ccd54fe45713cd458015b5b08d6098545e70543

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


Re: [RFC PATCH v2 1/3] PCI: endpoint: introduce a helper to implement pci ep virtio function

2023-04-27 Thread Bjorn Helgaas
Simple typos; don't repost until there's more substantive feedback.

On Thu, Apr 27, 2023 at 07:44:26PM +0900, Shunsuke Mie wrote:
> The Linux PCIe Endpoint framework supports to implement PCIe endpoint
> functions using a PCIe controller operating in endpoint mode.
> It is possble to realize the behavior of PCIe device, such as virtio PCI
> device. This patch introduces a setof helper functions and data structures
> to implement a PCIe endpoint function that behaves as a virtio device.

s/possble/possible/
s/setof/set of/

> Those functions enable the implementation PCIe endpoint function that
> comply with the virtio lecacy specification. Because modern virtio
> specifications require devices to implement custom PCIe capabilities, which
> are not currently supported by either PCIe controllers/drivers or the PCIe
> endpoint framework.

s/implementation PCIe endpoint function/implementation of PCIe endpoint 
functions/
s/lecacy/legacy/ (What does "legacy" mean?  Is there a spec for this?)

I guess "legacy virtio" devices need not implement custom PCIe
capabilities, but "modern virtio" devices must implement them?

Capitalize "Endpoint framework" consistently; sometimes it's
"Endpoint", other times it's "endpoint".

> While this patch provides functions for negotiating with host drivers and
> copying data, each PCIe function driver must impl ement operations that
> depend on each specific device, such as network, block, etc.

s/impl ement/implement/

> +#include 
> +#include 
> +#include 

Typically the header includes would be alphabetized if possible.

> + vq_virt = pci_epc_map_addr(epf->epc, epf->func_no, epf->vfunc_no,
> +vq_pci_addr, vq_phys, vq_size);
> + if (IS_ERR(vq_virt)) {
> + pr_err("Failed to map virtuqueue to local");

s/virtuqueue/virtqueue/

I know you probably don't have any way to use dev_err(), but this
message really needs some context, like a driver name and instance or
something.

> +#define VIRTIO_PCI_LEGACY_CFG_BAR 0

What makes this a "legacy cfg BAR"?  Is there some spec that covers
virtio BAR usage?

> + * epf_virtio_init - initialize struct epf_virtio and setup BAR for virtio
> + * @evio: struct epf_virtio to initialize.
> + * @hdr: pci configuration space to show remote host.
> + * @bar_size: pci BAR size it depends on the virtio device type.

s/pci/PCI/ (there were also a few more instances above in messages or
comments)

> + * epf_virtio_final - finalize struct epf_virtio. it frees bar and memories
> + * @evio: struct epf_virtio to finalize.

s/bar/BAR/

> +static void epf_virtio_monitor_qnotify(struct epf_virtio *evio)
> +{
> + const u16 qn_default = evio->nvq;
> + u16 tmp;
> +
> + /* Since there is no way to synchronize between the host and EP 
> functions,
> +  * it is possible to miss multiple notifications.

Multi-line comment style.

> + err = epf_virtio_negotiate_vq(evio);
> + if (err < 0) {
> + pr_err("failed to negoticate configs with driver\n");

s/negoticate/negotiate/

> + * epf_virtio_reset - reset virtio status

Some of the function descriptions end with a period (".") and others
don't.  Please figure out what the most common style is and use that
consistently.

> + dst = pci_epc_map_addr(epf->epc, epf->func_no,
> +epf->vfunc_no, dbase, &phys,
> +slen);
> + if (IS_ERR(dst)) {
> + pr_err("failed to map pci mmoery spact to 
> local\n");

s/pci/PCI/
s/mmoery/memory/
s/spact/space/ ?

Also below.

IIRC some previous messages started with a capital letter.  Please
make them all consistent.

> + if (dir == DMA_MEM_TO_DEV) {
> + pci_epc_unmap_addr(epf->epc, epf->func_no,
> +epf->vfunc_no, phys, dst, slen);
> + } else {
> + pci_epc_unmap_addr(epf->epc, epf->func_no,
> +epf->vfunc_no, phys, src, slen);
> + }
> + }
> +
> + return 1;

I guess this function returns either a negative error code or the
value 1?  That seems sort of weird (I think "negative error code or
*zero* is more typical), but maybe you're following some convention?

> +#include 
> +#include 
> +#include 
> +#include 

Alpha order if possible

> + /* Virtual address of pci configuration space */

s/pci/PCI/

> + /* Callback function and parameter for queue notifcation
> +  * Note: PCI EP function cannot detect qnotify accurately, therefore 
> this
> +  * callback function should check all of virtqueue's changes.
> +  */

Multi-line comment style.

Bjorn


Re: [RFC PATCH v2 3/3] PCI: endpoint: Add EP function driver to provide virtio-console functionality

2023-04-27 Thread Bjorn Helgaas
Random typos and trivial things.  No need to repost until somebody
does a more substantive review.

On Thu, Apr 27, 2023 at 07:44:28PM +0900, Shunsuke Mie wrote:
> Add a new PCIe endpoint function driver that works as a pci virtio-console
> device. The console connect to endpoint side console. It enables to
> communicate PCIe host and endpoint.

s/pci/PCI/

> Architecture is following:
> 
>  ┌┐ ┌──┬┐
>  │virtioe │ │  │virtio  │
>  │console drv │ ├───┐  │console drv │
>  ├┤ │(virtio console│  ├┤
>  │ virtio bus │ │ device)   │◄►│ virtio bus │
>  ├┤ ├---┤  └┤
>  ││ │ pci ep virtio │   │
>  │  pci bus   │ │  console drv  │   │
>  ││  pcie   ├───┤   │
>  ││ ◄─► │  pci ep Bus   │   │
>  └┘ └───┴───┘
>PCIe Root  PCIe Endpoint

s/virtioe/virtio/
s/pci/PCI/
s/pcie/PCIe/
s/ep/EP/

> +config PCI_EPF_VCON
> + tristate "PCI Endpoint virito-console driver"

s/virito/virtio/

> + depends on PCI_ENDPOINT
> + select VHOST_RING
> + select PCI_EPF_VIRTIO
> + help
> +   PCIe Endpoint virtio-console function implementatino. This module
> +   enables to show the virtio-console as pci device to PCIe host side, 
> and
> +   another virtual virtio-console device registers to endpoint system.
> +   Those devices are connected virtually and can communicate each other.

s/implementatino/implementation/
s/pci/PCI/
s/can communicate/can communicate with/

> + * PCI Endpoint function driver to impliment virtio-console device
> + * functionality.

s/impliment/implement/

> +static int virtio_queue_size = 0x100;
> +module_param(virtio_queue_size, int, 0444);
> +MODULE_PARM_DESC(virtio_queue_size, "A length of virtqueue");

When and why would users need this parameter?  Where is it documented?

> + /* To access virtqueus of local host driver */

s/virtqueus/virtqueues/

> + /* To show a status whether this driver is ready and the remote is 
> connected */

Make this fit in 80 columns.

> + /* This is a minimum implementation. Doesn't support any features of
> +  * virtio console. It means driver and device use just 2 virtuque for tx
> +  * and rx.
> +  */

Use common multi-line comment style:

  /*
   * This is ...
   */

s/virtuque/virtqueues/

> +static void epf_vcon_raise_irq_handler(struct work_struct *work)
> +{
> + struct epf_vcon *vcon =
> + container_of(work, struct epf_vcon, raise_irq_work);

Rewrap.

> +static int epf_vcon_setup_common(struct epf_vcon *vcon)
> +{
> + vcon->features = 0;
> + vcon->connected = false;
> +
> + vcon->task_wq =
> + alloc_workqueue("pci-epf-vcon/task-wq",

Looks like this would fit on the previous line?

> + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0);

> +static void epf_vcon_initialize_complete(void *param)
> +{
> + struct epf_vcon *vcon = param;
> +
> + pr_debug("Remote host has connected\n");

Is there any device info you could include here, e.g., with dev_dbg()?
It's nice if users have a little context.

> +static int epf_vcon_setup_ep_func(struct epf_vcon *vcon, struct pci_epf *epf)
> +{
> + int err;
> + struct epf_virtio *evio = &vcon->evio;
> + unsigned int nvq = epf_vcon_get_nvq(vcon);
> +
> + vcon->rdev_iovs =
> + kmalloc_array(nvq, sizeof(vcon->rdev_iovs[0]), GFP_KERNEL);

Move the function name and as many parameters as fit in 80 columns to
the previous line to match prevailing style.

> + /* There is no config for virtio console because this console device
> +  * doesn't any support features
> +  */

Multi-line comment style.

s/doesn't any support/doesn't support any/?  Is that what you mean?

> + /* Do nothing because this console device doesn't any support features 
> */

Same.

> +static void epf_vcon_vdev_set_status(struct virtio_device *vdev, u8 status)
> +{
> + if (status & VIRTIO_CONFIG_S_FAILED)
> + pr_debug("driver failed to setup this device\n");

dev_dbg() if possible.

> + err = vringh_init_kern(&vcon->vdev_vrhs[i], vcon->features,
> +virtio_queue_size, false, vring->desc,
> +vring->avail, vring->used);
> + if (err) {
> + pr_err("failed to init vringh for vring %d\n", i);

dev_err() if possible.

> +static int epf_vcon_setup_vdev(struct epf_vcon *vcon, struct device *parent)
> +{
> + int err;
> + struct virtio_device *vdev = &vcon->vdev;
> + const unsigned int nvq = epf_vcon_get_nvq(vcon);
> +
> + vcon->vdev_vrhs =
> + kmalloc_array(nvq, size

Re: [RFC PATCH v2 0/3] Introduce a PCIe endpoint virtio console

2023-04-27 Thread Bjorn Helgaas
On Thu, Apr 27, 2023 at 07:44:25PM +0900, Shunsuke Mie wrote:
> ...
>   PCI: endpoint: introduce a helper to implement pci ep virtio function
>   virtio_pci: add a definition of queue flag in ISR
>   PCI: endpoint: Add EP function driver to provide virtio-console
> functionality

Capitalize the first word consistently to match history ("Introduce",
"Add").

Capitalize "PCI" in English text.

Capitalize "EP" since it's sort of an acronym (you did once, but do it
both places :))

Bjorn


[RFC PATCH v2 3/3] PCI: endpoint: Add EP function driver to provide virtio-console functionality

2023-04-27 Thread Shunsuke Mie
Add a new PCIe endpoint function driver that works as a pci virtio-console
device. The console connect to endpoint side console. It enables to
communicate PCIe host and endpoint.

Architecture is following:

 ┌┐ ┌──┬┐
 │virtioe │ │  │virtio  │
 │console drv │ ├───┐  │console drv │
 ├┤ │(virtio console│  ├┤
 │ virtio bus │ │ device)   │◄►│ virtio bus │
 ├┤ ├---┤  └┤
 ││ │ pci ep virtio │   │
 │  pci bus   │ │  console drv  │   │
 ││  pcie   ├───┤   │
 ││ ◄─► │  pci ep Bus   │   │
 └┘ └───┴───┘
   PCIe Root  PCIe Endpoint

This driver has two roles. The first is as a PCIe endpoint virtio console
function, which is implemented using the PCIe endpoint framework and PCIe
EP virtio helpers. The second is as a virtual virtio console device
connected to the virtio bus on PCIe endpoint Linux.

Communication between the two is achieved by copying the virtqueue data
between PCIe root and endpoint, respectively.

This is a simple implementation and does not include features of
virtio-console such as MULTIPORT, EMERG_WRITE, etc. As a result, each
virtio console driver only displays /dev/hvc0.

As an example of usage, by setting getty to /dev/hvc0, it is possible to
login to another host.

Signed-off-by: Shunsuke Mie 
---
Changes from v2:
- Change to use copy functions between kiovs of pci-epf-virtio.

 drivers/pci/endpoint/functions/Kconfig|  12 +
 drivers/pci/endpoint/functions/Makefile   |   1 +
 drivers/pci/endpoint/functions/pci-epf-vcon.c | 596 ++
 3 files changed, 609 insertions(+)
 create mode 100644 drivers/pci/endpoint/functions/pci-epf-vcon.c

diff --git a/drivers/pci/endpoint/functions/Kconfig 
b/drivers/pci/endpoint/functions/Kconfig
index fa1a6a569a8f..9ce2698b67e1 100644
--- a/drivers/pci/endpoint/functions/Kconfig
+++ b/drivers/pci/endpoint/functions/Kconfig
@@ -44,3 +44,15 @@ config PCI_EPF_VIRTIO
select VHOST_RING_IOMEM
help
  Helpers to implement PCI virtio Endpoint function
+
+config PCI_EPF_VCON
+   tristate "PCI Endpoint virito-console driver"
+   depends on PCI_ENDPOINT
+   select VHOST_RING
+   select PCI_EPF_VIRTIO
+   help
+ PCIe Endpoint virtio-console function implementatino. This module
+ enables to show the virtio-console as pci device to PCIe host side, 
and
+ another virtual virtio-console device registers to endpoint system.
+ Those devices are connected virtually and can communicate each other.
+
diff --git a/drivers/pci/endpoint/functions/Makefile 
b/drivers/pci/endpoint/functions/Makefile
index a96f127ce900..b4056689ce33 100644
--- a/drivers/pci/endpoint/functions/Makefile
+++ b/drivers/pci/endpoint/functions/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_PCI_EPF_TEST)  += pci-epf-test.o
 obj-$(CONFIG_PCI_EPF_NTB)  += pci-epf-ntb.o
 obj-$(CONFIG_PCI_EPF_VNTB) += pci-epf-vntb.o
 obj-$(CONFIG_PCI_EPF_VIRTIO)   += pci-epf-virtio.o
+obj-$(CONFIG_PCI_EPF_VCON) += pci-epf-vcon.o
diff --git a/drivers/pci/endpoint/functions/pci-epf-vcon.c 
b/drivers/pci/endpoint/functions/pci-epf-vcon.c
new file mode 100644
index ..31f4247cd10f
--- /dev/null
+++ b/drivers/pci/endpoint/functions/pci-epf-vcon.c
@@ -0,0 +1,596 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCI Endpoint function driver to impliment virtio-console device
+ * functionality.
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "pci-epf-virtio.h"
+
+static int virtio_queue_size = 0x100;
+module_param(virtio_queue_size, int, 0444);
+MODULE_PARM_DESC(virtio_queue_size, "A length of virtqueue");
+
+struct epf_vcon {
+   /* To access virtqueues on remote host */
+   struct epf_virtio evio;
+   struct vringh_kiov *rdev_iovs;
+
+   /* To register a local virtio bus */
+   struct virtio_device vdev;
+
+   /* To access virtqueus of local host driver */
+   struct vringh *vdev_vrhs;
+   struct vringh_kiov *vdev_iovs;
+   struct virtqueue **vdev_vqs;
+
+   /* For transportation and notification */
+   struct workqueue_struct *task_wq;
+   struct work_struct raise_irq_work, rx_work, tx_work;
+
+   /* To retain virtio features. It is commonly used local and remote. */
+   u64 features;
+
+   /* To show a status whether this driver is ready and the remote is 
connected */
+   bool connected;
+};
+
+enum {
+   VCON_VIRTQUEUE_RX,
+   VCON_VIRTQUEUE_TX,
+   // Should be end of enum
+   VCON_VIRTQUEUE_NUM
+};
+
+static struct epf_vcon *vdev_to_vcon(struct virtio_device *vdev)
+{
+   return co

[RFC PATCH v2 0/3] Introduce a PCIe endpoint virtio console

2023-04-27 Thread Shunsuke Mie
The PCIe endpoint framework provides APIs to implement PCIe endpoint functions.
This framework allows defining various PCIe endpoint function behaviors in
software. This patchset extends the framework for virtio PCI devices. Virtio is
defined to communicate between a guest on a virtual machine and the host side.
The advantages of virtio are the efficiency of data transfer and the
conciseness of implementing devices in software. It can also be applied to PCIe
endpoint functions.

We designed and implemented a PCIe EP virtio console function driver using
the extended PCIe endpoint framework for virtio. It allows the host and the
endpoint to communicate with each other over virtio as a console.

An architecture of the function driver is following:

 ┌┐ ┌──┬┐
 │virtio  │ │  │virtio  │
 │console drv │ ├───┐  │console drv │
 ├┤ │(virtio console│  ├┤
 │ virtio bus │ │ device)   │◄►│ virtio bus │
 ├┤ ├---┤  └┤
 ││ │ pci ep virtio │   │
 │  pci bus   │ │  console drv  │   │
 ││  pcie   ├───┤   │
 ││ ◄─► │  pci ep Bus   │   │
 └┘ └───┴───┘
   PCIe Root  PCIe Endpoint

The introduced driver is `pci ep virtio console drv` in the figure. It works
as an EP function for the PCIe root and as a virtual virtio console device for
the PCIe endpoint. Each side's virtio console driver has virtqueues, and the
introduced driver transfers data between them: data on the root tx queue is
transferred to the endpoint rx queue and vice versa.

This patchset depends on the following patches, which are under discussion.

- [RFC PATCH 0/3] Deal with alignment restriction on EP side
link: https://lore.kernel.org/linux-pci/20230113090350.1103494-1-...@igel.co.jp/
- [PATCH v3] vringh: IOMEM support
link: 
https://lore.kernel.org/virtualization/CACGkMEumt4p7jU+H+T-b9My0buhdS8a-1GCSnWjnCwMAM=w...@mail.gmail.com/T/#t

The first patch of this series introduces helper functions to realize a PCI
virtio function using the PCIe endpoint framework. The second one adds a
missing definition to the virtio PCI header. The last one adds the PCIe
endpoint virtio console driver.

This is tested on next-20230416 with an RCar S4 board as the PCIe endpoint.

Shunsuke Mie (3):
  PCI: endpoint: introduce a helper to implement pci ep virtio function
  virtio_pci: add a definition of queue flag in ISR
  PCI: endpoint: Add EP function driver to provide virtio-console
functionality

 drivers/pci/endpoint/functions/Kconfig|  19 +
 drivers/pci/endpoint/functions/Makefile   |   2 +
 drivers/pci/endpoint/functions/pci-epf-vcon.c | 596 ++
 .../pci/endpoint/functions/pci-epf-virtio.c   | 458 ++
 .../pci/endpoint/functions/pci-epf-virtio.h   | 126 
 include/uapi/linux/virtio_pci.h   |   3 +
 6 files changed, 1204 insertions(+)
 create mode 100644 drivers/pci/endpoint/functions/pci-epf-vcon.c
 create mode 100644 drivers/pci/endpoint/functions/pci-epf-virtio.c
 create mode 100644 drivers/pci/endpoint/functions/pci-epf-virtio.h

-- 
2.25.1


[RFC PATCH v2 2/3] virtio_pci: add a definition of queue flag in ISR

2023-04-27 Thread Shunsuke Mie
A config changed flag of the ISR has already been defined, but not the queue
flag. Add a macro for it.

Signed-off-by: Shunsuke Mie 
---
 include/uapi/linux/virtio_pci.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/uapi/linux/virtio_pci.h b/include/uapi/linux/virtio_pci.h
index f703afc7ad31..d405bea27240 100644
--- a/include/uapi/linux/virtio_pci.h
+++ b/include/uapi/linux/virtio_pci.h
@@ -94,6 +94,9 @@
 
 #endif /* VIRTIO_PCI_NO_LEGACY */
 
+/* Bits for ISR status field:only when if MSI-X is disabled */
+/* The bit of the ISR which indicates a queue entry update. */
+#define VIRTIO_PCI_ISR_QUEUE   0x1
 /* The bit of the ISR which indicates a device configuration change. */
 #define VIRTIO_PCI_ISR_CONFIG  0x2
 /* Vector value used to disable MSI for queue */
-- 
2.25.1
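
For context, a minimal sketch of how a consumer might dispatch on the two ISR bits once this macro exists; the helper name is illustrative, and the isr value is assumed to have been read (and thereby acknowledged) from the device's ISR register by the caller:

#include <linux/printk.h>
#include <linux/virtio_pci.h>

static void example_dispatch_isr(u8 isr)
{
        if (isr & VIRTIO_PCI_ISR_QUEUE)         /* a used ring was updated */
                pr_debug("virtqueue notification\n");

        if (isr & VIRTIO_PCI_ISR_CONFIG)        /* device config changed */
                pr_debug("configuration change notification\n");
}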



[RFC PATCH v2 1/3] PCI: endpoint: introduce a helper to implement pci ep virtio function

2023-04-27 Thread Shunsuke Mie
The Linux PCIe Endpoint framework supports to implement PCIe endpoint
functions using a PCIe controller operating in endpoint mode.
It is possble to realize the behavior of PCIe device, such as virtio PCI
device. This patch introduces a setof helper functions and data structures
to implement a PCIe endpoint function that behaves as a virtio device.

Those functions enable the implementation PCIe endpoint function that
comply with the virtio lecacy specification. Because modern virtio
specifications require devices to implement custom PCIe capabilities, which
are not currently supported by either PCIe controllers/drivers or the PCIe
endpoint framework.

The helper functions work with the new struct epf_virtio, which is
initialized and finalized using the following functions:

- int epf_virtio_init();
- void epf_virtio_final()

Once initialized, the PCIe configuration space can be read and written
using the following functions:

- epf_virtio_cfg_{read,write}#size()
- epf_virtio_cfg_{set,clear}#size()
- epf_virtio_cfg_memcpy_toio()

The size is supported 8, 16 and 32bits. The content of configuration space
varies depending on the type of virtio device.

After the setup, launch the kernel thread for negotiation with host virtio
drivers and detection virtqueue notifications with the function:

- epf_virtio_launch_bgtask()

Also there are functions to shutdown and reset the kthread.

- epf_virtio_terminate_bgtask()
- epf_virtio_terminate_reset()

Data transfer function is also provide.

- epf_virtio_memcpy_kiov2kiov()

While this patch provides functions for negotiating with host drivers and
copying data, each PCIe function driver must impl ement operations that
depend on each specific device, such as network, block, etc.
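
To make the intended call flow concrete, a heavily hedged lifecycle sketch of a function driver built on these helpers; the argument lists beyond what is stated above are guesses, and my_pci_header / MY_CFG_BAR_SIZE / the device IDs are placeholders rather than symbols from this series:

#include <linux/pci-epf.h>

#include "pci-epf-virtio.h"

#define MY_CFG_BAR_SIZE 0x1000  /* assumption: one page for the config BAR */

static struct pci_epf_header my_pci_header = {
        .vendorid = 0x1af4,     /* virtio vendor ID */
        .deviceid = 0x1003,     /* assumption: transitional virtio-console */
};

static int my_epf_virtio_bind(struct pci_epf *epf)
{
        struct epf_virtio *evio = epf_get_drvdata(epf);
        int err;

        err = epf_virtio_init(evio, &my_pci_header, MY_CFG_BAR_SIZE);
        if (err)
                return err;

        /* Negotiate with the host driver and watch for vq notifications. */
        err = epf_virtio_launch_bgtask(evio);
        if (err)
                epf_virtio_final(evio);

        return err;
}

static void my_epf_virtio_unbind(struct pci_epf *epf)
{
        struct epf_virtio *evio = epf_get_drvdata(epf);

        epf_virtio_terminate_bgtask(evio);
        epf_virtio_final(evio);
}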

Signed-off-by: Shunsuke Mie 
---

Changes from v2:
- Improve the memcpy function between kiov and kiov on local ram and
remote ram via pcie bus.

 drivers/pci/endpoint/functions/Kconfig|   7 +
 drivers/pci/endpoint/functions/Makefile   |   1 +
 .../pci/endpoint/functions/pci-epf-virtio.c   | 458 ++
 .../pci/endpoint/functions/pci-epf-virtio.h   | 126 +
 4 files changed, 592 insertions(+)
 create mode 100644 drivers/pci/endpoint/functions/pci-epf-virtio.c
 create mode 100644 drivers/pci/endpoint/functions/pci-epf-virtio.h

diff --git a/drivers/pci/endpoint/functions/Kconfig 
b/drivers/pci/endpoint/functions/Kconfig
index 9fd560886871..fa1a6a569a8f 100644
--- a/drivers/pci/endpoint/functions/Kconfig
+++ b/drivers/pci/endpoint/functions/Kconfig
@@ -37,3 +37,10 @@ config PCI_EPF_VNTB
  between PCI Root Port and PCIe Endpoint.
 
  If in doubt, say "N" to disable Endpoint NTB driver.
+
+config PCI_EPF_VIRTIO
+   tristate
+   depends on PCI_ENDPOINT
+   select VHOST_RING_IOMEM
+   help
+ Helpers to implement PCI virtio Endpoint function
diff --git a/drivers/pci/endpoint/functions/Makefile 
b/drivers/pci/endpoint/functions/Makefile
index 5c13001deaba..a96f127ce900 100644
--- a/drivers/pci/endpoint/functions/Makefile
+++ b/drivers/pci/endpoint/functions/Makefile
@@ -6,3 +6,4 @@
 obj-$(CONFIG_PCI_EPF_TEST) += pci-epf-test.o
 obj-$(CONFIG_PCI_EPF_NTB)  += pci-epf-ntb.o
 obj-$(CONFIG_PCI_EPF_VNTB) += pci-epf-vntb.o
+obj-$(CONFIG_PCI_EPF_VIRTIO)   += pci-epf-virtio.o
diff --git a/drivers/pci/endpoint/functions/pci-epf-virtio.c 
b/drivers/pci/endpoint/functions/pci-epf-virtio.c
new file mode 100644
index ..f67610dd247d
--- /dev/null
+++ b/drivers/pci/endpoint/functions/pci-epf-virtio.c
@@ -0,0 +1,458 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Helpers to implement PCIe virtio EP function.
+ */
+#include 
+#include 
+#include 
+
+#include "pci-epf-virtio.h"
+
+static void epf_virtio_unmap_vq(struct pci_epf *epf, void __iomem *vq_virt,
+   phys_addr_t vq_phys, unsigned int num)
+{
+   size_t vq_size = vring_size(num, VIRTIO_PCI_VRING_ALIGN);
+
+   pci_epc_unmap_addr(epf->epc, epf->func_no, epf->vfunc_no, vq_phys,
+  vq_virt, vq_size);
+   pci_epc_mem_free_addr(epf->epc, vq_phys, vq_virt, vq_size);
+}
+
+static void __iomem *epf_virtio_map_vq(struct pci_epf *epf,
+  phys_addr_t vq_pci_addr,
+  unsigned int num, phys_addr_t *vq_phys)
+{
+   int err;
+   size_t vq_size;
+   void __iomem *vq_virt;
+
+   vq_size = vring_size(num, VIRTIO_PCI_VRING_ALIGN);
+
+   vq_virt = pci_epc_map_addr(epf->epc, epf->func_no, epf->vfunc_no,
+  vq_pci_addr, vq_phys, vq_size);
+   if (IS_ERR(vq_virt)) {
+   pr_err("Failed to map virtuqueue to local");
+   goto err_free;
+   }
+
+   return vq_virt;
+
+err_free:
+   pci_epc_mem_free_addr(epf->epc, *vq_phys, vq_virt, vq_size);
+
+   return ERR_PTR(err);
+}
+
+static void epf_virtio_free_vringh(struct pci_epf *epf, struct epf_vr

Re: [PATCH 4/5] synchronize irqs in the reset routine

2023-04-27 Thread Zhu, Lingshan



On 4/26/2023 1:06 PM, Jason Wang wrote:


On 2023/4/1 04:48, Zhu Lingshan wrote:

This commit synchronize irqs of the virtqueues
and config space in the reset routine.
Thus ifcvf_stop_hw() and reset() are refactored as well.

Signed-off-by: Zhu Lingshan 
---
  drivers/vdpa/ifcvf/ifcvf_base.c | 61 ++---
  drivers/vdpa/ifcvf/ifcvf_main.c | 45 +++-
  2 files changed, 54 insertions(+), 52 deletions(-)

diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c 
b/drivers/vdpa/ifcvf/ifcvf_base.c

index 79e313c5e10e..49949aec20ef 100644
--- a/drivers/vdpa/ifcvf/ifcvf_base.c
+++ b/drivers/vdpa/ifcvf/ifcvf_base.c
@@ -170,12 +170,7 @@ void ifcvf_set_status(struct ifcvf_hw *hw, u8 
status)

    void ifcvf_reset(struct ifcvf_hw *hw)
  {
-    hw->config_cb.callback = NULL;
-    hw->config_cb.private = NULL;
-
  ifcvf_set_status(hw, 0);
-    /* flush set_status, make sure VF is stopped, reset */
-    ifcvf_get_status(hw);



If we don't flush or poll how can we know the reset is done?

E.g. modern virtio-pci did:

    /* 0 status means a reset. */
    vp_modern_set_status(mdev, 0);
    /* After writing 0 to device_status, the driver MUST wait for a read of
     * device_status to return 0 before reinitializing the device.
     * This will flush out the status write, and flush in device writes,
     * including MSI-X interrupts, if any.
     */
    while (vp_modern_get_status(mdev))
        msleep(1);
    /* Flush pending VQ/configuration callbacks. */
    vp_synchronize_vectors(vdev);

Thanks, I can implement a similar get_status() here.
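
A minimal sketch of that poll-until-zero reset, mirroring the quoted virtio-pci logic (the msleep(1) interval is carried over as an assumption, not something this series specifies):

void ifcvf_reset(struct ifcvf_hw *hw)
{
        ifcvf_set_status(hw, 0);
        /* The device must return 0 from a status read before reinitializing;
         * polling here flushes the status write and pending device writes.
         */
        while (ifcvf_get_status(hw))
                msleep(1);
}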



  }
    u64 ifcvf_get_hw_features(struct ifcvf_hw *hw)
@@ -368,20 +363,62 @@ void ifcvf_set_vq_ready(struct ifcvf_hw *hw, 
u16 qid, bool ready)

  vp_iowrite16(ready, &cfg->queue_enable);
  }
  -static void ifcvf_hw_disable(struct ifcvf_hw *hw)
+static void synchronize_per_vq_irq(struct ifcvf_hw *hw)
  {
-    u32 i;
+    u16 qid;
  -    ifcvf_set_config_vector(hw, VIRTIO_MSI_NO_VECTOR);
-    for (i = 0; i < hw->nr_vring; i++) {
-    ifcvf_set_vq_vector(hw, i, VIRTIO_MSI_NO_VECTOR);
+    for (qid = 0; qid < hw->nr_vring; qid++) {
+    if (hw->vring[qid].irq != -EINVAL)
+    synchronize_irq(hw->vring[qid].irq);
  }
  }
  +static void synchronize_vqs_reused_irq(struct ifcvf_hw *hw)
+{
+    if (hw->vqs_reused_irq != -EINVAL)
+    synchronize_irq(hw->vqs_reused_irq);
+}
+
+static void synchronize_vq_irq(struct ifcvf_hw *hw)
+{
+    u8 status = hw->msix_vector_status;
+
+    if (status == MSIX_VECTOR_PER_VQ_AND_CONFIG)
+    synchronize_per_vq_irq(hw);
+    else
+    synchronize_vqs_reused_irq(hw);
+}



I wonder if we need to go with such complicated ways; can we
synchronize through the vectors like virtio-pci did?


    for (i = 0; i < vp_dev->msix_vectors; ++i)
        synchronize_irq(pci_irq_vector(vp_dev->pci_dev, i));
?

I can record the number of msix_vectors and sync irqs based on it in V2.
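
A sketch of that V2 direction, flushing every allocated vector the way virtio-pci does; num_msix_vectors is not a field of the current ifcvf_hw and is named here purely for illustration:

static void ifcvf_synchronize_irqs(struct ifcvf_hw *hw)
{
        u16 i;

        /* Assumes the driver records how many MSI-X vectors it allocated. */
        for (i = 0; i < hw->num_msix_vectors; i++)
                synchronize_irq(pci_irq_vector(hw->pdev, i));
}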




+
+static void synchronize_config_irq(struct ifcvf_hw *hw)
+{
+    if (hw->config_irq != -EINVAL)
+    synchronize_irq(hw->config_irq);
+}
+
+static void ifcvf_reset_vring(struct ifcvf_hw *hw)
+{
+    u16 qid;
+
+    for (qid = 0; qid < hw->nr_vring; qid++) {
+    synchronize_vq_irq(hw);


Since IRQ could be shared, this will result in extra complexity, like an
irq could be flushed multiple times?
No, not for this code path. E.g., if all vqs share one irq, it will only
be flushed once in synchronize_vqs_reused_irq().


Thanks


Thanks



+ hw->vring[qid].cb.callback = NULL;
+    hw->vring[qid].cb.private = NULL;
+    ifcvf_set_vq_vector(hw, qid, VIRTIO_MSI_NO_VECTOR);
+    }
+}
+
+static void ifcvf_reset_config_handler(struct ifcvf_hw *hw)
+{
+    synchronize_config_irq(hw);
+    hw->config_cb.callback = NULL;
+    hw->config_cb.private = NULL;
+    ifcvf_set_config_vector(hw, VIRTIO_MSI_NO_VECTOR);
+}
+
  void ifcvf_stop_hw(struct ifcvf_hw *hw)
  {
-    ifcvf_hw_disable(hw);
-    ifcvf_reset(hw);
+    ifcvf_reset_vring(hw);
+    ifcvf_reset_config_handler(hw);
  }
    void ifcvf_notify_queue(struct ifcvf_hw *hw, u16 qid)
diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c 
b/drivers/vdpa/ifcvf/ifcvf_main.c

index 968687159e44..15c6157ee841 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -346,33 +346,6 @@ static int ifcvf_request_irq(struct ifcvf_hw *vf)
  return 0;
  }
  -static int ifcvf_stop_datapath(struct ifcvf_adapter *adapter)
-{
-    struct ifcvf_hw *vf = adapter->vf;
-    int i;
-
-    for (i = 0; i < vf->nr_vring; i++)
-    vf->vring[i].cb.callback = NULL;
-
-    ifcvf_stop_hw(vf);
-
-    return 0;
-}
-
-static void ifcvf_reset_vring(struct ifcvf_adapter *adapter)
-{
-    struct ifcvf_hw *vf = adapter->vf;
-    int i;
-
-    for (i = 0; i < vf->nr_vring; i++) {
-    vf->vring[i].last_avail_idx = 0;
-    vf->vring[i].cb.callback = NULL;
-    vf->vring[i].cb.private = NULL;
-    }
-
-    ifcvf_reset(vf);
-

Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs

2023-04-27 Thread Xuan Zhuo
On Thu, 27 Apr 2023 16:49:58 +0800, Wenliang Wang 
 wrote:
> On 4/27/23 4:23 PM, Michael S. Tsirkin wrote:
> > On Thu, Apr 27, 2023 at 04:13:45PM +0800, Xuan Zhuo wrote:
> >> On Thu, 27 Apr 2023 04:12:44 -0400, "Michael S. Tsirkin"  
> >> wrote:
> >>> On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote:
>  On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang 
>   wrote:
> >
> >
> > On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> >> On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang 
> >>  wrote:
> >>> For multi-queue and large rx-ring-size use case, the following error
> >>
> >> Cound you give we one number for example?
> >
> > 128 queues and 16K queue_size is typical.
> >
> >>
> >>> occurred when free_unused_bufs:
> >>> rcu: INFO: rcu_sched self-detected stall on CPU.
> >>>
> >>> Signed-off-by: Wenliang Wang 
> >>> ---
> >>>drivers/net/virtio_net.c | 1 +
> >>>1 file changed, 1 insertion(+)
> >>>
> >>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >>> index ea1bd4bb326d..21d8382fd2c7 100644
> >>> --- a/drivers/net/virtio_net.c
> >>> +++ b/drivers/net/virtio_net.c
> >>> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct 
> >>> virtnet_info *vi)
> >>>   struct virtqueue *vq = vi->rq[i].vq;
> >>>   while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> >>>   virtnet_rq_free_unused_buf(vq, buf);
> >>> + schedule();
> >>
> >> Just for rq?
> >>
> >> Do we need to do the same thing for sq?
> > Rq buffers are pre-allocated, take seconds to free rq unused buffers.
> >
> > Sq unused buffers are much less, so do the same for sq is optional.
> 
>  I got.
> 
>  I think we should look for a way, compatible with the less queues or the 
>  smaller
>  rings. Calling schedule() directly may be not a good way.
> 
>  Thanks.
> >>>
> >>> Why isn't it a good way?
> >>
> >> For the small ring, I don't think it is a good way, maybe we only deal 
> >> with one
> >> buf, then call schedule().
> >>
> >> We can call the schedule() after processing a certain number of buffers,
> >> or check need_resched () first.
> >>
> >> Thanks.
> >
> >
> > Wenliang, does
> >  if (need_resched())
> >  schedule();
> > fix the issue for you?
> >
> Yeah, it works better.

I prefer to use it in combination with a fixed number (such as 256).
Every time 256 buffers are processed, check need_resched().
This can accommodate large rings and small rings.

Also, it is necessary to add similar logic to sq.  Although the possibility is
low, it is possible that the same problem will occur.

Thanks.
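
A sketch of the batched variant suggested above; the 256 threshold and the helper name are illustrative, while virtqueue_detach_unused_buf() and the per-buffer free helpers are the existing driver functions:

static void virtnet_free_unused_bufs_vq(struct virtqueue *vq,
                                        void (*free_buf)(struct virtqueue *vq,
                                                         void *buf))
{
        unsigned int freed = 0;
        void *buf;

        while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
                free_buf(vq, buf);
                /* Yield at most once per 256 buffers: cheap for small rings,
                 * avoids RCU stalls for large ones.
                 */
                if (++freed % 256 == 0 && need_resched())
                        schedule();
        }
}

/* Usage in free_unused_bufs(), for both sq and rq:
 *      virtnet_free_unused_bufs_vq(vi->sq[i].vq, virtnet_sq_free_unused_buf);
 *      virtnet_free_unused_bufs_vq(vi->rq[i].vq, virtnet_rq_free_unused_buf);
 */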

> >
> >>
> >>
> >>>
> 
> >
> >>
> >> Thanks.
> >>
> >>
> >>>   }
> >>>}
> >>>
> >>> --
> >>> 2.20.1
> >>>
> >>>
> >


Re: [PATCH 2/5] get_driver_features from virtio registers

2023-04-27 Thread Zhu, Lingshan



On 4/26/2023 12:02 PM, Jason Wang wrote:


On 2023/4/1 04:48, Zhu Lingshan wrote:

This commit implements a new function ifcvf_get_driver_feature()
which read driver_features from virtio registers.

To be less ambiguous, ifcvf_set_features() is renamed to
ifcvf_set_driver_features(), and ifcvf_get_features()
is renamed to ifcvf_get_dev_features() which returns
the provisioned vDPA device features.

Signed-off-by: Zhu Lingshan 
---
  drivers/vdpa/ifcvf/ifcvf_base.c | 38 +
  drivers/vdpa/ifcvf/ifcvf_base.h |  5 +++--
  drivers/vdpa/ifcvf/ifcvf_main.c |  9 +---
  3 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c 
b/drivers/vdpa/ifcvf/ifcvf_base.c

index 6c5650f73007..546e923bcd16 100644
--- a/drivers/vdpa/ifcvf/ifcvf_base.c
+++ b/drivers/vdpa/ifcvf/ifcvf_base.c
@@ -204,11 +204,29 @@ u64 ifcvf_get_hw_features(struct ifcvf_hw *hw)
  return features;
  }
  -u64 ifcvf_get_features(struct ifcvf_hw *hw)
+/* return provisioned vDPA dev features */
+u64 ifcvf_get_dev_features(struct ifcvf_hw *hw)
  {
  return hw->dev_features;
  }
  +u64 ifcvf_get_driver_features(struct ifcvf_hw *hw)
+{
+    struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
+    u32 features_lo, features_hi;
+    u64 features;
+
+    vp_iowrite32(0, &cfg->device_feature_select);
+    features_lo = vp_ioread32(&cfg->guest_feature);
+
+    vp_iowrite32(1, &cfg->device_feature_select);
+    features_hi = vp_ioread32(&cfg->guest_feature);
+
+    features = ((u64)features_hi << 32) | features_lo;
+
+    return features;
+}



This duplicates the logic of ifcvf_get_hw_features(); it would be
simpler if we just do a rename.

Yes, they look very similar. ifcvf_get_hw_features() reads
virtio_pci_common_cfg.device_feature and ifcvf_get_driver_features reads
virtio_pci_common_cfg.driver_feature.

Do you suggest we merge these two functions? Something like this may
look chaotic:


u64 ifcvf_get_features(struct ifcvf_hw *hw, bool device_feature)
{
    struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
    u32 features_lo, features_hi;
    u64 features;

    if (device_feature) {
        vp_iowrite32(0, &cfg->device_feature_select);
        features_lo = vp_ioread32(&cfg->guest_feature);
        vp_iowrite32(1, &cfg->device_feature_select);
        features_hi = vp_ioread32(&cfg->guest_feature);
    } else {
        vp_iowrite32(0, &cfg->device_feature_select);
        features_lo = vp_ioread32(&cfg->guest_feature);
        vp_iowrite32(1, &cfg->device_feature_select);
        features_hi = vp_ioread32(&cfg->guest_feature);
    }

    features = ((u64)features_hi << 32) | features_lo;

    return features;
}

Maybe separate functions look better.
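
For what it's worth, a hedged middle ground: keep the two exported functions but share one reader parameterized by the select/value register pair. Field names follow struct virtio_pci_common_cfg; this is only a sketch, not the driver's actual code:

static u64 ifcvf_read_features(__le32 __iomem *select, __le32 __iomem *feature)
{
        u32 lo, hi;

        vp_iowrite32(0, select);
        lo = vp_ioread32(feature);
        vp_iowrite32(1, select);
        hi = vp_ioread32(feature);

        return ((u64)hi << 32) | lo;
}

u64 ifcvf_get_dev_features(struct ifcvf_hw *hw)
{
        struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;

        return ifcvf_read_features(&cfg->device_feature_select,
                                   &cfg->device_feature);
}

u64 ifcvf_get_driver_features(struct ifcvf_hw *hw)
{
        struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;

        return ifcvf_read_features(&cfg->guest_feature_select,
                                   &cfg->guest_feature);
}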


Thanks



+
  int ifcvf_verify_min_features(struct ifcvf_hw *hw, u64 features)
  {
  if (!(features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM)) && features) {
@@ -275,7 +293,7 @@ void ifcvf_write_dev_config(struct ifcvf_hw *hw, 
u64 offset,

  vp_iowrite8(*p++, hw->dev_cfg + offset + i);
  }
  -static void ifcvf_set_features(struct ifcvf_hw *hw, u64 features)
+void ifcvf_set_driver_features(struct ifcvf_hw *hw, u64 features)
  {
  struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
  @@ -286,19 +304,6 @@ static void ifcvf_set_features(struct ifcvf_hw 
*hw, u64 features)

  vp_iowrite32(features >> 32, &cfg->guest_feature);
  }
  -static int ifcvf_config_features(struct ifcvf_hw *hw)
-{
-    ifcvf_set_features(hw, hw->req_features);
-    ifcvf_add_status(hw, VIRTIO_CONFIG_S_FEATURES_OK);
-
-    if (!(ifcvf_get_status(hw) & VIRTIO_CONFIG_S_FEATURES_OK)) {
-    IFCVF_ERR(hw->pdev, "Failed to set FEATURES_OK status\n");
-    return -EIO;
-    }
-
-    return 0;
-}
-
  u16 ifcvf_get_vq_state(struct ifcvf_hw *hw, u16 qid)
  {
  struct ifcvf_lm_cfg __iomem *ifcvf_lm;
@@ -387,9 +392,6 @@ int ifcvf_start_hw(struct ifcvf_hw *hw)
  ifcvf_add_status(hw, VIRTIO_CONFIG_S_ACKNOWLEDGE);
  ifcvf_add_status(hw, VIRTIO_CONFIG_S_DRIVER);
  -    if (ifcvf_config_features(hw) < 0)
-    return -EINVAL;
-
  ifcvf_add_status(hw, VIRTIO_CONFIG_S_DRIVER_OK);
    return 0;
diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h 
b/drivers/vdpa/ifcvf/ifcvf_base.h

index d545a9411143..cb19196c3ece 100644
--- a/drivers/vdpa/ifcvf/ifcvf_base.h
+++ b/drivers/vdpa/ifcvf/ifcvf_base.h
@@ -69,7 +69,6 @@ struct ifcvf_hw {
  phys_addr_t notify_base_pa;
  u32 notify_off_multiplier;
  u32 dev_type;
-    u64 req_features;
  u64 hw_features;
  /* provisioned device features */
  u64 dev_features;
@@ -122,7 +121,7 @@ u8 ifcvf_get_status(struct ifcvf_hw *hw);
  void ifcvf_set_status(struct ifcvf_hw *hw, u8 status);
  void io_write64_twopart(u64 val, u32 *lo, u32 *hi);
  void ifcvf_reset(struct ifcvf_hw *hw);
-u64 ifcvf_get_features(struct ifcvf_hw *hw);
+u64 ifcvf_get_dev_features(struct ifcvf_hw *hw);
  u64 ifcvf_get_hw_features(struct ifcvf_hw *hw);
  int ifcvf_verify_min_features(struct ifcvf_hw *hw, u6

Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs

2023-04-27 Thread Michael S. Tsirkin
On Thu, Apr 27, 2023 at 04:13:45PM +0800, Xuan Zhuo wrote:
> On Thu, 27 Apr 2023 04:12:44 -0400, "Michael S. Tsirkin"  
> wrote:
> > On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote:
> > > On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang 
> > >  wrote:
> > > >
> > > >
> > > > On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> > > > > On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang 
> > > > >  wrote:
> > > > >> For multi-queue and large rx-ring-size use case, the following error
> > > > >
> > > > > Cound you give we one number for example?
> > > >
> > > > 128 queues and 16K queue_size is typical.
> > > >
> > > > >
> > > > >> occurred when free_unused_bufs:
> > > > >> rcu: INFO: rcu_sched self-detected stall on CPU.
> > > > >>
> > > > >> Signed-off-by: Wenliang Wang 
> > > > >> ---
> > > > >>   drivers/net/virtio_net.c | 1 +
> > > > >>   1 file changed, 1 insertion(+)
> > > > >>
> > > > >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > >> index ea1bd4bb326d..21d8382fd2c7 100644
> > > > >> --- a/drivers/net/virtio_net.c
> > > > >> +++ b/drivers/net/virtio_net.c
> > > > >> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct 
> > > > >> virtnet_info *vi)
> > > > >>  struct virtqueue *vq = vi->rq[i].vq;
> > > > >>  while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > > > >>  virtnet_rq_free_unused_buf(vq, buf);
> > > > >> +schedule();
> > > > >
> > > > > Just for rq?
> > > > >
> > > > > Do we need to do the same thing for sq?
> > > > Rq buffers are pre-allocated, take seconds to free rq unused buffers.
> > > >
> > > > Sq unused buffers are much less, so do the same for sq is optional.
> > >
> > > I got.
> > >
> > > I think we should look for a way, compatible with the less queues or the 
> > > smaller
> > > rings. Calling schedule() directly may be not a good way.
> > >
> > > Thanks.
> >
> > Why isn't it a good way?
> 
> For the small ring, I don't think it is a good way, maybe we only deal with 
> one
> buf, then call schedule().
> 
> We can call the schedule() after processing a certain number of buffers,
> or check need_resched () first.
> 
> Thanks.


Wenliang, does
if (need_resched())
schedule();
fix the issue for you?


> 
> 
> >
> > >
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > >
> > > > >>  }
> > > > >>   }
> > > > >>
> > > > >> --
> > > > >> 2.20.1
> > > > >>
> >



Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs

2023-04-27 Thread Xuan Zhuo
On Thu, 27 Apr 2023 04:12:44 -0400, "Michael S. Tsirkin"  
wrote:
> On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote:
> > On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang 
> >  wrote:
> > >
> > >
> > > On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> > > > On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang 
> > > >  wrote:
> > > >> For multi-queue and large rx-ring-size use case, the following error
> > > >
> > > > Cound you give we one number for example?
> > >
> > > 128 queues and 16K queue_size is typical.
> > >
> > > >
> > > >> occurred when free_unused_bufs:
> > > >> rcu: INFO: rcu_sched self-detected stall on CPU.
> > > >>
> > > >> Signed-off-by: Wenliang Wang 
> > > >> ---
> > > >>   drivers/net/virtio_net.c | 1 +
> > > >>   1 file changed, 1 insertion(+)
> > > >>
> > > >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > >> index ea1bd4bb326d..21d8382fd2c7 100644
> > > >> --- a/drivers/net/virtio_net.c
> > > >> +++ b/drivers/net/virtio_net.c
> > > >> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info 
> > > >> *vi)
> > > >>struct virtqueue *vq = vi->rq[i].vq;
> > > >>while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > > >>virtnet_rq_free_unused_buf(vq, buf);
> > > >> +  schedule();
> > > >
> > > > Just for rq?
> > > >
> > > > Do we need to do the same thing for sq?
> > > Rq buffers are pre-allocated, take seconds to free rq unused buffers.
> > >
> > > Sq unused buffers are much less, so do the same for sq is optional.
> >
> > I got.
> >
> > I think we should look for a way, compatible with the less queues or the 
> > smaller
> > rings. Calling schedule() directly may be not a good way.
> >
> > Thanks.
>
> Why isn't it a good way?

For the small ring, I don't think it is a good way, maybe we only deal with one
buf, then call schedule().

We can call the schedule() after processing a certain number of buffers,
or check need_resched () first.

Thanks.



>
> >
> > >
> > > >
> > > > Thanks.
> > > >
> > > >
> > > >>}
> > > >>   }
> > > >>
> > > >> --
> > > >> 2.20.1
> > > >>
>


Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs

2023-04-27 Thread Michael S. Tsirkin
On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote:
> On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang 
>  wrote:
> >
> >
> > On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> > > On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang 
> > >  wrote:
> > >> For multi-queue and large rx-ring-size use case, the following error
> > >
> > > Cound you give we one number for example?
> >
> > 128 queues and 16K queue_size is typical.
> >
> > >
> > >> occurred when free_unused_bufs:
> > >> rcu: INFO: rcu_sched self-detected stall on CPU.
> > >>
> > >> Signed-off-by: Wenliang Wang 
> > >> ---
> > >>   drivers/net/virtio_net.c | 1 +
> > >>   1 file changed, 1 insertion(+)
> > >>
> > >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > >> index ea1bd4bb326d..21d8382fd2c7 100644
> > >> --- a/drivers/net/virtio_net.c
> > >> +++ b/drivers/net/virtio_net.c
> > >> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info 
> > >> *vi)
> > >>  struct virtqueue *vq = vi->rq[i].vq;
> > >>  while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > >>  virtnet_rq_free_unused_buf(vq, buf);
> > >> +schedule();
> > >
> > > Just for rq?
> > >
> > > Do we need to do the same thing for sq?
> > Rq buffers are pre-allocated, take seconds to free rq unused buffers.
> >
> > Sq unused buffers are much less, so do the same for sq is optional.
> 
> I got.
> 
> I think we should look for a way, compatible with the less queues or the 
> smaller
> rings. Calling schedule() directly may be not a good way.
> 
> Thanks.

Why isn't it a good way?

> 
> >
> > >
> > > Thanks.
> > >
> > >
> > >>  }
> > >>   }
> > >>
> > >> --
> > >> 2.20.1
> > >>



Re: [PATCH 1/5] virt queue ops take immediate actions

2023-04-27 Thread Zhu, Lingshan



On 4/26/2023 11:39 AM, Jason Wang wrote:


On 2023/4/1 04:48, Zhu Lingshan wrote:

In this commit, virtqueue operations including:
set_vq_num(), set_vq_address(), set_vq_ready()
and get_vq_ready() access PCI registers directly
to take immediate actions.

Signed-off-by: Zhu Lingshan 
---
  drivers/vdpa/ifcvf/ifcvf_base.c | 58 -
  drivers/vdpa/ifcvf/ifcvf_base.h | 10 +++---
  drivers/vdpa/ifcvf/ifcvf_main.c | 16 +++--
  3 files changed, 45 insertions(+), 39 deletions(-)

diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c 
b/drivers/vdpa/ifcvf/ifcvf_base.c

index 5563b3a773c7..6c5650f73007 100644
--- a/drivers/vdpa/ifcvf/ifcvf_base.c
+++ b/drivers/vdpa/ifcvf/ifcvf_base.c
@@ -329,31 +329,49 @@ int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 
qid, u16 num)

  return 0;
  }
  -static int ifcvf_hw_enable(struct ifcvf_hw *hw)
+void ifcvf_set_vq_num(struct ifcvf_hw *hw, u16 qid, u32 num)
  {
-    struct virtio_pci_common_cfg __iomem *cfg;
-    u32 i;
+    struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
  -    cfg = hw->common_cfg;
-    for (i = 0; i < hw->nr_vring; i++) {
-    if (!hw->vring[i].ready)
-    break;
+    vp_iowrite16(qid, &cfg->queue_select);
+    vp_iowrite16(num, &cfg->queue_size);
+}
  -    vp_iowrite16(i, &cfg->queue_select);
-    vp_iowrite64_twopart(hw->vring[i].desc, &cfg->queue_desc_lo,
- &cfg->queue_desc_hi);
-    vp_iowrite64_twopart(hw->vring[i].avail, &cfg->queue_avail_lo,
-  &cfg->queue_avail_hi);
-    vp_iowrite64_twopart(hw->vring[i].used, &cfg->queue_used_lo,
- &cfg->queue_used_hi);
-    vp_iowrite16(hw->vring[i].size, &cfg->queue_size);
-    ifcvf_set_vq_state(hw, i, hw->vring[i].last_avail_idx);
-    vp_iowrite16(1, &cfg->queue_enable);
-    }
+int ifcvf_set_vq_address(struct ifcvf_hw *hw, u16 qid, u64 desc_area,
+ u64 driver_area, u64 device_area)
+{
+    struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
+
+    vp_iowrite16(qid, &cfg->queue_select);
+    vp_iowrite64_twopart(desc_area, &cfg->queue_desc_lo,
+ &cfg->queue_desc_hi);
+    vp_iowrite64_twopart(driver_area, &cfg->queue_avail_lo,
+ &cfg->queue_avail_hi);
+    vp_iowrite64_twopart(device_area, &cfg->queue_used_lo,
+ &cfg->queue_used_hi);
    return 0;
  }
  +bool ifcvf_get_vq_ready(struct ifcvf_hw *hw, u16 qid)
+{
+    struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
+    u16 queue_enable;
+
+    vp_iowrite16(qid, &cfg->queue_select);
+    queue_enable = vp_ioread16(&cfg->queue_enable);
+
+    return (bool)queue_enable;
+}
+
+void ifcvf_set_vq_ready(struct ifcvf_hw *hw, u16 qid, bool ready)
+{
+    struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
+
+    vp_iowrite16(qid, &cfg->queue_select);
+    vp_iowrite16(ready, &cfg->queue_enable);
+}
+
  static void ifcvf_hw_disable(struct ifcvf_hw *hw)
  {
  u32 i;
@@ -366,16 +384,12 @@ static void ifcvf_hw_disable(struct ifcvf_hw *hw)
    int ifcvf_start_hw(struct ifcvf_hw *hw)
  {
-    ifcvf_reset(hw);



This seems unrelated to the immediate actions?

This is because we must not reset the device after the vq settings have
already taken effect, e.g., we must not reset the hw after set_vq_address or
set_vq_ready.

Thanks


The rest looks good.

Thanks



ifcvf_add_status(hw, VIRTIO_CONFIG_S_ACKNOWLEDGE);
  ifcvf_add_status(hw, VIRTIO_CONFIG_S_DRIVER);
    if (ifcvf_config_features(hw) < 0)
  return -EINVAL;
  -    if (ifcvf_hw_enable(hw) < 0)
-    return -EINVAL;
-
  ifcvf_add_status(hw, VIRTIO_CONFIG_S_DRIVER_OK);
    return 0;
diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h 
b/drivers/vdpa/ifcvf/ifcvf_base.h

index c20d1c40214e..d545a9411143 100644
--- a/drivers/vdpa/ifcvf/ifcvf_base.h
+++ b/drivers/vdpa/ifcvf/ifcvf_base.h
@@ -47,12 +47,7 @@
  #define MSIX_VECTOR_DEV_SHARED    3
    struct vring_info {
-    u64 desc;
-    u64 avail;
-    u64 used;
-    u16 size;
  u16 last_avail_idx;
-    bool ready;
  void __iomem *notify_addr;
  phys_addr_t notify_pa;
  u32 irq;
@@ -137,4 +132,9 @@ int ifcvf_probed_virtio_net(struct ifcvf_hw *hw);
  u32 ifcvf_get_config_size(struct ifcvf_hw *hw);
  u16 ifcvf_set_vq_vector(struct ifcvf_hw *hw, u16 qid, int vector);
  u16 ifcvf_set_config_vector(struct ifcvf_hw *hw, int vector);
+void ifcvf_set_vq_num(struct ifcvf_hw *hw, u16 qid, u32 num);
+int ifcvf_set_vq_address(struct ifcvf_hw *hw, u16 qid, u64 desc_area,
+ u64 driver_area, u64 device_area);
+bool ifcvf_get_vq_ready(struct ifcvf_hw *hw, u16 qid);
+void ifcvf_set_vq_ready(struct ifcvf_hw *hw, u16 qid, bool ready);
  #endif /* _IFCVF_H_ */
diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c 
b/drivers/vdpa/ifcvf/ifcvf_main.c

index 7f78c47e40d6..1357c67014ab 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -382,10 +382,6 @@ static void ifcvf_reset_vring(struct 
ifcvf_ada

Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs

2023-04-27 Thread Xuan Zhuo
On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang 
 wrote:
>
>
> On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> > On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang 
> >  wrote:
> >> For multi-queue and large rx-ring-size use case, the following error
> >
> > Cound you give we one number for example?
>
> 128 queues and 16K queue_size is typical.
>
> >
> >> occurred when free_unused_bufs:
> >> rcu: INFO: rcu_sched self-detected stall on CPU.
> >>
> >> Signed-off-by: Wenliang Wang 
> >> ---
> >>   drivers/net/virtio_net.c | 1 +
> >>   1 file changed, 1 insertion(+)
> >>
> >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >> index ea1bd4bb326d..21d8382fd2c7 100644
> >> --- a/drivers/net/virtio_net.c
> >> +++ b/drivers/net/virtio_net.c
> >> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
> >>struct virtqueue *vq = vi->rq[i].vq;
> >>while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> >>virtnet_rq_free_unused_buf(vq, buf);
> >> +  schedule();
> >
> > Just for rq?
> >
> > Do we need to do the same thing for sq?
> Rq buffers are pre-allocated, take seconds to free rq unused buffers.
>
> Sq unused buffers are much less, so do the same for sq is optional.

I got.

I think we should look for a way, compatible with the less queues or the smaller
rings. Calling schedule() directly may be not a good way.

Thanks.


>
> >
> > Thanks.
> >
> >
> >>}
> >>   }
> >>
> >> --
> >> 2.20.1
> >>