On Tue, Feb 15, 2022 at 1:34 PM Gautam Dawar wrote:
>
> This patch adds the definition of the VIRTIO_F_IN_ORDER feature bit
> in the relevant header file to make it available in QEMU's
> standard Linux header file virtio_config.h, which is updated using
> scripts/update-linux-headers.sh.
>
>
On Tue, Feb 15, 2022 at 11:29 AM Zhu, Lingshan wrote:
>
>
>
> On 2/15/2022 11:03 AM, Jason Wang wrote:
> > On Tue, Feb 15, 2022 at 10:18 AM Zhu, Lingshan
> > wrote:
> >>
> >>
> >> On 2/14/2022 10:27 PM, Michael S. Tsirkin wrote:
> >>> On Mon, Feb 14, 2022 at 06:01:56PM +0800, Zhu Lingshan
On Mon, Feb 14, 2022 at 10:25 PM Michael S. Tsirkin wrote:
>
> On Mon, Feb 14, 2022 at 03:19:25PM +0800, Jason Wang wrote:
> >
> > On 2022/2/3 at 3:27 PM, Zhu Lingshan wrote:
> > > On some platforms/devices, there may not be enough MSI vector
> > > slots allocated for virtqueues and config changes. In
On Tue, Feb 15, 2022 at 10:18 AM Zhu, Lingshan wrote:
>
>
>
> On 2/14/2022 10:27 PM, Michael S. Tsirkin wrote:
> > On Mon, Feb 14, 2022 at 06:01:56PM +0800, Zhu Lingshan wrote:
> >>
> >> On 2/14/2022 3:19 PM, Jason Wang wrote:
> >>> On 2022/2/3 at 3:27 PM, Zhu Lingshan wrote:
> On some
Having pageblock_order >= MAX_ORDER can apparently happen in corner
cases, and some parts of the kernel are not prepared for it.
For example, Aneesh has shown [1] that such kernels can be compiled on
ppc64 with 64k base pages by setting FORCE_MAX_ZONEORDER=8, which will run
into a
Let's factor out determining the minimum alignment requirement for CMA
and add a helpful comment.
No functional change intended.
Signed-off-by: David Hildenbrand
---
arch/powerpc/include/asm/fadump-internal.h | 5 -
arch/powerpc/kernel/fadump.c | 2 +-
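The factored-out minimum alignment the patch describes can be sketched in plain user-space C. Note this is only an illustration under stated assumptions: MAX_ORDER and pageblock_order are compile-time/runtime stand-ins with made-up values here, and the helper names are invented, not the kernel's.

```c
#include <assert.h>

/* Stand-ins for the kernel's MAX_ORDER and pageblock_order; the real
 * values depend on the architecture and kernel configuration. */
#define SIM_MAX_ORDER 11
static unsigned int sim_pageblock_order = 9;

/* CMA regions must satisfy both the buddy allocator's largest order and
 * the pageblock granularity, so the minimum alignment is the larger of
 * the two -- which also covers the pageblock_order >= MAX_ORDER corner
 * case mentioned above. */
static unsigned int cma_min_alignment_order(void)
{
    unsigned int buddy_max = SIM_MAX_ORDER - 1;
    return buddy_max > sim_pageblock_order ? buddy_max
                                           : sim_pageblock_order;
}

static unsigned long cma_min_alignment_pages(void)
{
    return 1UL << cma_min_alignment_order();
}
```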
Some places in the kernel don't really expect pageblock_order >=
MAX_ORDER, and it looks like this is only possible in corner cases:
1) With CONFIG_DEFERRED_STRUCT_PAGE_INIT, we'll end up freeing
pageblock_order pages via __free_pages_core(), which cannot possibly work.
2)
On Thu, Feb 03, 2022 at 05:59:20PM +0800, John Garry wrote:
> Currently the rcache structures are allocated for all IOVA domains, even if
> they do not use "fast" alloc+free interface. This is wasteful of memory.
>
> In addition, failures in init_iova_rcaches() are not handled safely, which is
>
On 2022-02-03 09:59, John Garry wrote:
Currently the rcache structures are allocated for all IOVA domains, even if
they do not use "fast" alloc+free interface. This is wasteful of memory.
In addition, failures in init_iova_rcaches() are not handled safely, which is
less than ideal.
Make "fast"
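The idea in the message above, allocating the rcache only for domains that actually use the "fast" interface and handling allocation failure cleanly, can be sketched as a toy model. All names here are invented; this is not the IOVA code itself.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of the change described above: instead of allocating
 * rcaches for every IOVA domain up front, allocate them only when a
 * user of the "fast" interface asks for them, and report failure
 * safely instead of leaving the domain half-initialised. */
struct toy_iova_domain {
    void *rcaches;   /* NULL until the fast interface is enabled */
};

static int toy_enable_fast_alloc(struct toy_iova_domain *d)
{
    if (d->rcaches)
        return 0;                    /* already enabled; idempotent */
    d->rcaches = calloc(16, sizeof(long));
    return d->rcaches ? 0 : -1;      /* would be -ENOMEM in the kernel */
}

static void toy_free_domain(struct toy_iova_domain *d)
{
    free(d->rcaches);                /* free(NULL) is a safe no-op */
    d->rcaches = NULL;
}
```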
On Mon, Feb 14, 2022 at 06:01:56PM +0800, Zhu Lingshan wrote:
>
>
> On 2/14/2022 3:19 PM, Jason Wang wrote:
> >
> > On 2022/2/3 at 3:27 PM, Zhu Lingshan wrote:
> > > On some platforms/devices, there may not be enough MSI vector
> > > slots allocated for virtqueues and config changes. In such a case,
>
On Mon, Feb 14, 2022 at 03:19:25PM +0800, Jason Wang wrote:
>
> On 2022/2/3 at 3:27 PM, Zhu Lingshan wrote:
> > On some platforms/devices, there may not be enough MSI vector
> > slots allocated for virtqueues and config changes. In such a case,
> > the interrupt sources (virtqueues, config changes) must
On 2/14/2022 3:19 PM, Jason Wang wrote:
On 2022/2/3 at 3:27 PM, Zhu Lingshan wrote:
On some platforms/devices, there may not be enough MSI vector
slots allocated for virtqueues and config changes. In such a case,
the interrupt sources (virtqueues, config changes) must share
an IRQ/vector, to avoid
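The sharing scheme this thread discusses can be sketched in plain user-space C: when one MSI vector serves the config-change source and every virtqueue, the single handler has to poll all sources on each interrupt. All struct and function names below are invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct fake_vq {
    bool pending;  /* device marked this queue as having work */
    int  handled;  /* how many times we serviced it */
};

struct fake_dev {
    struct fake_vq *vqs;
    size_t nvqs;
    bool config_pending;
    int  config_handled;
};

/* One IRQ shared by the config-change source and all virtqueues: on
 * each interrupt, check every source, since any of them may have
 * raised the line. */
static int shared_irq_handler(struct fake_dev *d)
{
    int serviced = 0;

    if (d->config_pending) {
        d->config_pending = false;
        d->config_handled++;
        serviced++;
    }
    for (size_t i = 0; i < d->nvqs; i++) {
        if (d->vqs[i].pending) {
            d->vqs[i].pending = false;
            d->vqs[i].handled++;
            serviced++;
        }
    }
    return serviced;  /* 0 would mean a spurious interrupt */
}
```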
Based on the spec below, the queue size of a PCI device is no longer
required to be a power of 2. In the case of a packed ring, it is
reasonable for this value not to be a power of 2.
https://github.com/oasis-tcs/virtio-spec/issues/28
And there is a check for this in vring_create_virtqueue_split(). So
this
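The distinction above can be shown with a minimal sketch of the kind of size check the message attributes to vring_create_virtqueue_split(): split rings require a power-of-2 size, packed rings do not. The function names here are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stdbool.h>

/* Classic power-of-2 test: a power of 2 has exactly one bit set. */
static bool is_pow2(unsigned int n)
{
    return n != 0 && (n & (n - 1)) == 0;
}

/* Sketch of the ring-size validity rule discussed above. */
static bool vring_size_ok(unsigned int num, bool packed)
{
    if (packed)
        return num != 0;   /* packed ring: any non-zero size */
    return is_pow2(num);   /* split ring: must be a power of 2 */
}
```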
On Thu, Feb 3, 2022 at 1:56 PM Eli Cohen wrote:
>
> Implement the get_vq_stats callback of vdpa_config_ops to return the
> statistics for a virtqueue.
>
> The statistics are provided as vendor-specific statistics, where the
> driver provides attribute name / attribute value pairs.
>
>
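A toy illustration of the name/value pair format described in this message may help; the stat name comes from the example later in the thread (received_desc), while the struct layout, the second stat, and all values are invented for illustration.

```c
#include <assert.h>
#include <string.h>

struct vq_stat {
    const char *name;
    unsigned long long value;
};

/* A hypothetical driver fills an array of name/value pairs, truncating
 * to the caller's capacity, and returns how many pairs it wrote. */
static int toy_get_vq_stats(struct vq_stat *out, int max)
{
    const struct vq_stat stats[] = {
        { "received_desc",  512 },   /* made-up counter values */
        { "completed_desc", 508 },
    };
    int n = (int)(sizeof(stats) / sizeof(stats[0]));

    if (n > max)
        n = max;
    memcpy(out, stats, (size_t)n * sizeof(*out));
    return n;
}
```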
Use virtqueue_get_vring_max_size() in virtnet_get_ringparam() to set
tx_max_pending and rx_max_pending.
Signed-off-by: Xuan Zhuo
---
drivers/net/virtio_net.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index
Set the default maximum ring num via virtio_set_max_ring_num().
The default maximum ring num is 1024.
Signed-off-by: Xuan Zhuo
---
drivers/net/virtio_net.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index
This patch implements the reset function for the rx and tx queues.
Based on this function, it is possible to modify the ring num of a
queue and to quickly recycle the buffers in the queue.
In the process of the queue disable, in theory, as long as virtio
supports queue reset, there will be no
The modern setup_vq() replaces vring_create_virtqueue() with
vring_setup_virtqueue().
vp_setup_vq() can pass the original vq (from info->vq) to re-enable a vq.
Allow direct calls to vp_setup_vq() in virtio_pci_modern.c.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_pci_common.c | 31
Record the maximum queue num supported by the device.
virtio-net can display the maximum (supported by hardware) ring size in
ethtool -g eth0.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_mmio.c | 2 ++
drivers/virtio/virtio_pci_legacy.c | 2 ++
drivers/virtio/virtio_pci_modern.c
Preserve vq->priv during reset to prevent vp_modern_map_vq_notify()
from being called repeatedly.
Only set vq->priv = NULL during normal virtqueue setup, and keep
vq->priv when re-enabling a queue.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_pci_modern.c | 8 +---
Add queue_reset to struct virtio_pci_common_cfg, and add the related
operation functions.
To avoid breaking the uABI, add a new struct virtio_pci_common_cfg_reset.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_pci_modern_dev.c | 36 ++
include/linux/virtio_pci_modern.h | 2 ++
This patch separates out two functions, for freeing the sq bufs and
the rq bufs, from free_unused_bufs().
When enabling/disabling a tx/rx queue is supported in the future, it
will be necessary to recover the bufs of a single sq or rq separately.
Signed-off-by: Xuan Zhuo
---
drivers/net/virtio_net.c | 53
Performing reset on a queue is divided into four steps:
1. reset_vq: reset one vq
2. recycle the buffer from vq by virtqueue_detach_unused_buf()
3. release the ring of the vq by vring_release_virtqueue()
4. enable_reset_vq: re-enable the reset queue
So add two callbacks reset_vq, enable_reset_vq
In the process of queue reset, the vq leaves vdev->vqs, so the original
processing logic may miss some vqs. So change how vqs are released:
release them by iterating over the list of vqs.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_pci_common.c | 22 ++
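The four-step sequence the message lays out can be sketched as a small user-space simulation. Everything here is a stand-in: the state machine, the buffer counters, and the function bodies are invented, only the step order mirrors the message.

```c
#include <assert.h>
#include <stdbool.h>

enum vq_state { VQ_ENABLED, VQ_RESET, VQ_RELEASED };

struct sim_vq {
    enum vq_state state;
    int unused_bufs;   /* buffers still sitting in the ring */
    int recycled;      /* buffers handed back to the driver */
};

/* Step 1: reset_vq */
static void sim_reset_vq(struct sim_vq *vq) { vq->state = VQ_RESET; }

/* Step 2: detach one unused buffer per call, mirroring
 * virtqueue_detach_unused_buf() returning NULL when empty. */
static bool sim_detach_unused_buf(struct sim_vq *vq)
{
    if (vq->unused_bufs == 0)
        return false;
    vq->unused_bufs--;
    vq->recycled++;
    return true;
}

/* Step 3: release the ring of the vq */
static void sim_release_ring(struct sim_vq *vq) { vq->state = VQ_RELEASED; }

/* Step 4: enable_reset_vq, possibly with a new ring size */
static void sim_enable_reset_vq(struct sim_vq *vq) { vq->state = VQ_ENABLED; }

static void sim_reset_and_reenable(struct sim_vq *vq)
{
    sim_reset_vq(vq);
    while (sim_detach_unused_buf(vq))
        ;
    sim_release_ring(vq);
    sim_enable_reset_vq(vq);
}
```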
The purpose of this patch is to make the packed vring support
re-enabling a reset vq.
Based on whether the vq passed in by vring_setup_virtqueue() is
NULL or not, distinguish between normal virtqueue creation and
re-enabling a reset queue.
When re-enabling a reset queue, reuse the original
Add the function vring_setup_virtqueue() to allow passing an existing
vq without reallocating the vq.
The purpose of adding this function is to avoid changing the form of
vring_create_virtqueue().
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_ring.c | 7 ---
include/linux/virtio_ring.h | 37
Support set_ringparam based on virtio queue reset.
The rx_pending and tx_pending values passed in must be powers of 2.
Signed-off-by: Xuan Zhuo
---
drivers/net/virtio_net.c | 50
1 file changed, 50 insertions(+)
diff --git a/drivers/net/virtio_net.c
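A hypothetical validation routine in the spirit of the constraint stated above: a requested ring size must be a non-zero power of 2 and no larger than what the device supports. The function name and the plain -1 error code are illustrative (the kernel would return -EINVAL).

```c
#include <assert.h>

/* Sketch of set_ringparam input validation as described above.
 * Returns 0 on success, -1 on an invalid request. */
static int check_ring_pending(unsigned int requested, unsigned int vq_max)
{
    /* not a non-zero power of 2 */
    if (requested == 0 || (requested & (requested - 1)) != 0)
        return -1;
    /* larger than the device's maximum ring size */
    if (requested > vq_max)
        return -1;
    return 0;
}
```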
The purpose of this patch is to make the split vring support
re-enabling a reset vq.
Based on whether the vq passed in by vring_setup_virtqueue() is
NULL or not, distinguish between normal virtqueue creation and
re-enabling a reset queue.
When re-enabling a reset queue, reuse the original
Add the helper virtio_set_max_ring_num() to set the upper limit of the
ring num when creating a virtqueue.
It can be used to limit the ring num before the find_vqs() call, or to
change the ring num when re-enabling a reset queue.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_ring.c | 6 ++
Add helpers for virtio queue reset:
* virtio_reset_vq: reset a queue individually
* virtio_enable_resetq: enable a reset queue
Signed-off-by: Xuan Zhuo
---
include/linux/virtio_config.h | 38 +++
1 file changed, 38 insertions(+)
diff --git
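Helpers like the two named in the message above are plausibly thin wrappers over per-device config ops; the sketch below models that shape. Only the helper names come from the message; every struct, field, and the demo transport are invented.

```c
#include <assert.h>
#include <stddef.h>

struct toy_vdev;

struct toy_config_ops {
    int (*reset_vq)(struct toy_vdev *vdev, int index);
    int (*enable_reset_vq)(struct toy_vdev *vdev, int index);
};

struct toy_vdev {
    const struct toy_config_ops *config;
    int resets;
    int enables;
};

/* Wrapper in the spirit of virtio_reset_vq: fail cleanly when the
 * transport does not provide the op. */
static int toy_virtio_reset_vq(struct toy_vdev *vdev, int index)
{
    if (!vdev->config->reset_vq)
        return -1;   /* op not provided by the transport */
    return vdev->config->reset_vq(vdev, index);
}

/* Wrapper in the spirit of virtio_enable_resetq. */
static int toy_virtio_enable_resetq(struct toy_vdev *vdev, int index)
{
    if (!vdev->config->enable_reset_vq)
        return -1;
    return vdev->config->enable_reset_vq(vdev, index);
}

/* Demo transport that just counts calls. */
static int demo_reset(struct toy_vdev *v, int index)
{
    (void)index;
    v->resets++;
    return 0;
}

static int demo_enable(struct toy_vdev *v, int index)
{
    (void)index;
    v->enables++;
    return 0;
}

static const struct toy_config_ops demo_ops = { demo_reset, demo_enable };
static struct toy_vdev demo_dev = { &demo_ops, 0, 0 };
```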
Add vring_release_virtqueue() to release the ring of a vq.
In this process, the vq is removed from the vdev->vqs list, and the
memory of the ring is released.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_ring.c | 18 +-
include/linux/virtio.h | 12
2
Extract a function __vring_del_virtqueue() from vring_del_virtqueue()
to handle releasing a vq's ring.
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_ring.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c
This patch implements virtio pci support for QUEUE RESET.
Performing reset on a queue is divided into these steps:
1. reset_vq: reset one vq
2. recycle the buffer from vq by virtqueue_detach_unused_buf()
3. release the ring of the vq by vring_release_virtqueue()
4. enable_reset_vq: re-enable the reset queue
The virtio spec already supports the virtio queue reset function. This
patch set adds this function to the kernel. The relevant virtio spec
information is here:
https://github.com/oasis-tcs/virtio-spec/issues/124
Also regarding MMIO support for queue reset, I plan to support it after
Add queue_notify_data to struct virtio_pci_common_cfg; it comes from
here: https://github.com/oasis-tcs/virtio-spec/issues/89
To avoid breaking the uABI, add a new struct virtio_pci_common_cfg_notify.
Since I want to add queue_reset after queue_notify_data, I submitted
this patch first.
Add VIRTIO_F_RING_RESET; it comes from here:
https://github.com/oasis-tcs/virtio-spec/issues/124
Signed-off-by: Xuan Zhuo
---
include/uapi/linux/virtio_config.h | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/include/uapi/linux/virtio_config.h
Extract the vq initialization function __vring_init_virtqueue() from
__vring_new_virtqueue().
Signed-off-by: Xuan Zhuo
---
drivers/virtio/virtio_ring.c | 61 +---
1 file changed, 42 insertions(+), 19 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c
On Thu, Feb 3, 2022 at 1:56 PM Eli Cohen wrote:
>
> Allows reading the vendor statistics of a vdpa device. The specific
> statistics data is received by the upstream driver in the form of
> attribute name/attribute value pairs.
>
> An example of statistics for the mlx5_vdpa device is:
>
> received_desc