Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size

2017-06-16 Thread Michael S. Tsirkin
On Fri, Jun 16, 2017 at 07:04:27PM +0200, Maxime Coquelin wrote:
> 
> 
> On 06/16/2017 05:19 PM, Michael S. Tsirkin wrote:
> > On Fri, Jun 16, 2017 at 04:57:01PM +0800, Jason Wang wrote:
> > > 
> > > 
> > > On 2017年06月16日 11:22, Michael S. Tsirkin wrote:
> > > > > I think the issues can be solved by VIRTIO_F_MAX_CHAIN_SIZE.
> > > > > 
> > > > > For now, how about splitting it into two series of patches:
> > > > > 1) enable 1024 tx queue size for vhost-user, to let the users of 
> > > > > vhost-user
> > > > > to easily use 1024 queue size.
> > > > Fine with me. 1) will get property from user but override it on
> > > > !vhost-user. Do we need a protocol flag? It seems prudent but we get
> > > > back to cross-version migration issues that are still pending a
> > > > solution.
> > > > Marc Andre, what's the status of that work?
> > > > 
> > > > > 2) enable VIRTIO_F_MAX_CHAIN_SIZE,  to enhance robustness.
> > > > Rather, to support it for more backends.
> > > 
> > > Ok. If we want to support different values of max chain size in the
> > > future, it would be problematic for cross-backend migration; consider
> > > the case of migrating from 2048 (vhost-user) to 1024 (qemu/vhost-kernel).
> > > 
> > > Thanks
> > 
> > That's already a problem, and it's growing with each new feature.
> > Maxime looked at supporting cross-version migration of vhost-user
> > backends; I think we must merge some solution sooner rather than later,
> > preferably by the next release.
> > 
> > Maxime, any update here? Do we need a meeting to reach consensus?
> 
> No update, I haven't found time to progress on the topic yet.
> 
> For those who aren't aware of my initial proposal, you may find it here:
> https://www.spinics.net/linux/fedora/libvir/msg142668.html
> 
> If my understanding is correct, you were concerned about the complexity of
> my proposal, which involved too many layers. Your suggestion was to have a
> tool provided with QEMU that would connect to the vhost-user socket and
> query the backend capabilities.
> I'm not 100% clear on how it would work, as the trend is to start the
> backend in client mode, meaning QEMU creates the socket. In this case,
> should the tool create the socket and the management tool request the
> backend to connect to it?
> 
> I think it could make sense to have a meeting, but maybe we should first
> discuss the solutions on the list for efficiency.
> 
> Regarding the delivery, what is the planned release date for QEMU v2.10?
> 
> Note that my solution doesn't involve QEMU, so it would not be tied to the
> QEMU release date. But that doesn't mean it would be delivered sooner than
> your solution.
> 
> Maxime

I'd say let's go with your proposal (Solution 3 above).

-- 
MST




Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size

2017-06-16 Thread Maxime Coquelin



On 06/16/2017 05:19 PM, Michael S. Tsirkin wrote:

On Fri, Jun 16, 2017 at 04:57:01PM +0800, Jason Wang wrote:



On 2017年06月16日 11:22, Michael S. Tsirkin wrote:

I think the issues can be solved by VIRTIO_F_MAX_CHAIN_SIZE.

For now, how about splitting it into two series of patches:
1) enable 1024 tx queue size for vhost-user, to let the users of vhost-user
to easily use 1024 queue size.

Fine with me. 1) will get property from user but override it on
!vhost-user. Do we need a protocol flag? It seems prudent but we get
back to cross-version migration issues that are still pending a solution.
Marc Andre, what's the status of that work?


2) enable VIRTIO_F_MAX_CHAIN_SIZE,  to enhance robustness.

Rather, to support it for more backends.


Ok. If we want to support different values of max chain size in the future,
it would be problematic for cross-backend migration; consider the case of
migrating from 2048 (vhost-user) to 1024 (qemu/vhost-kernel).

Thanks


That's already a problem, and it's growing with each new feature.
Maxime looked at supporting cross-version migration of vhost-user
backends; I think we must merge some solution sooner rather than later,
preferably by the next release.

Maxime, any update here? Do we need a meeting to reach consensus?


No update, I haven't found time to progress on the topic yet.

For those who aren't aware of my initial proposal, you may find it here:
https://www.spinics.net/linux/fedora/libvir/msg142668.html

If my understanding is correct, you were concerned about the complexity of
my proposal, which involved too many layers. Your suggestion was to have a
tool provided with QEMU that would connect to the vhost-user socket and
query the backend capabilities.
I'm not 100% clear on how it would work, as the trend is to start the
backend in client mode, meaning QEMU creates the socket. In this case,
should the tool create the socket and the management tool request the
backend to connect to it?

I think it could make sense to have a meeting, but maybe we should first
discuss the solutions on the list for efficiency.

Regarding the delivery, what is the planned release date for QEMU v2.10?

Note that my solution doesn't involve QEMU, so it would not be tied to the
QEMU release date. But that doesn't mean it would be delivered sooner than
your solution.

Maxime




Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size

2017-06-16 Thread Michael S. Tsirkin
On Fri, Jun 16, 2017 at 04:57:01PM +0800, Jason Wang wrote:
> 
> 
> On 2017年06月16日 11:22, Michael S. Tsirkin wrote:
> > > I think the issues can be solved by VIRTIO_F_MAX_CHAIN_SIZE.
> > > 
> > > For now, how about splitting it into two series of patches:
> > > 1) enable 1024 tx queue size for vhost-user, to let the users of 
> > > vhost-user
> > > to easily use 1024 queue size.
> > Fine with me. 1) will get property from user but override it on
> > !vhost-user. Do we need a protocol flag? It seems prudent but we get
> > back to cross-version migration issues that are still pending a solution.
> > Marc Andre, what's the status of that work?
> > 
> > > 2) enable VIRTIO_F_MAX_CHAIN_SIZE,  to enhance robustness.
> > Rather, to support it for more backends.
> 
> Ok. If we want to support different values of max chain size in the
> future, it would be problematic for cross-backend migration; consider
> the case of migrating from 2048 (vhost-user) to 1024 (qemu/vhost-kernel).
> 
> Thanks

That's already a problem, and it's growing with each new feature.
Maxime looked at supporting cross-version migration of vhost-user
backends; I think we must merge some solution sooner rather than later,
preferably by the next release.

Maxime, any update here? Do we need a meeting to reach consensus?

> > 
> > > Best,
> > > Wei
> > -- 




[virtio-dev] Re: [PATCH v2 1/2] virtio-net: enable configurable tx queue size

2017-06-16 Thread Michael S. Tsirkin
On Fri, Jun 16, 2017 at 06:48:38PM +0800, Wei Wang wrote:
> This patch enables the virtio-net tx queue size to be configurable
> between 256 (the default queue size) and 1024 by the user when the
> vhost-user backend is used.
> 
> Currently, the maximum tx queue size for other backends is 512 due
> to the following limitations:
> - QEMU backend: the QEMU backend implementation in some cases may
> send 1024+1 iovs to writev.
> - Vhost_net backend: there is a possibility that the guest sends
> a vring_desc of memory which crosses a MemoryRegion, thereby
> generating more than 1024 iovs in total after translation from
> guest-physical addresses in the backend.
> 
> Signed-off-by: Wei Wang 
> ---
>  hw/net/virtio-net.c| 46 ++
>  include/hw/virtio/virtio-net.h |  1 +
>  2 files changed, 39 insertions(+), 8 deletions(-)
> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 7d091c9..e1a08fd 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -33,8 +33,11 @@
>  
>  /* previously fixed value */
>  #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
> +#define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
> +
>  /* for now, only allow larger queues; with virtio-1, guest can downsize */
>  #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
> +#define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
>  
>  /*
>   * Calculate the number of bytes up to and including the given 'field' of
> @@ -1491,18 +1494,33 @@ static void virtio_net_tx_bh(void *opaque)
>  static void virtio_net_add_queue(VirtIONet *n, int index)
>  {
>  VirtIODevice *vdev = VIRTIO_DEVICE(n);
> +NetClientState *nc = qemu_get_queue(n->nic);
>  
>  n->vqs[index].rx_vq = virtio_add_queue(vdev, n->net_conf.rx_queue_size,
> virtio_net_handle_rx);
> +
> +/*
> + * Currently, backends other than vhost-user don't support 1024 queue
> + * size.
> + */
> +if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
> +nc->peer->info->type != NET_CLIENT_DRIVER_VHOST_USER) {
> +fprintf(stderr, "warning: %s: queue size %d not supported\n",
> +__func__, n->net_conf.tx_queue_size);
> +n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
> +}
> +

Also, I suspect we can get here with no peer, and above will crash.
It seems ugly to do this on each virtio_net_add_queue.
How about moving this to realize?
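
As a rough, untested sketch of that direction (the helper name below is
made up, and it assumes n->nic and its peer are already available by the
time it is called from realize):

/* Sketch only: validate tx_queue_size once at realize time instead of in
 * virtio_net_add_queue(), and don't dereference a missing peer. */
static void virtio_net_check_tx_queue_size(VirtIONet *n)
{
    NetClientState *nc = qemu_get_queue(n->nic);

    if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
        (!nc->peer ||
         nc->peer->info->type != NET_CLIENT_DRIVER_VHOST_USER)) {
        error_report("warning: tx queue size %d is only supported by "
                     "vhost-user backends, using %d instead",
                     n->net_conf.tx_queue_size,
                     VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE);
        n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
    }
}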




>  if (n->net_conf.tx && !strcmp(n->net_conf.tx, "timer")) {
>  n->vqs[index].tx_vq =
> -virtio_add_queue(vdev, 256, virtio_net_handle_tx_timer);
> +virtio_add_queue(vdev, n->net_conf.tx_queue_size,
> + virtio_net_handle_tx_timer);
>  n->vqs[index].tx_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
>   virtio_net_tx_timer,
>   &n->vqs[index]);
>  } else {
>  n->vqs[index].tx_vq =
> -virtio_add_queue(vdev, 256, virtio_net_handle_tx_bh);
> +virtio_add_queue(vdev, n->net_conf.tx_queue_size,
> + virtio_net_handle_tx_bh);
>  n->vqs[index].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[index]);
>  }
>  
> @@ -1910,6 +1928,17 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>  return;
>  }
>  
> +if (n->net_conf.tx_queue_size < VIRTIO_NET_TX_QUEUE_MIN_SIZE ||
> +n->net_conf.tx_queue_size > VIRTQUEUE_MAX_SIZE ||
> +!is_power_of_2(n->net_conf.tx_queue_size)) {
> +error_setg(errp, "Invalid tx_queue_size (= %" PRIu16 "), "
> +   "must be a power of 2 between %d and %d",
> +   n->net_conf.tx_queue_size, VIRTIO_NET_TX_QUEUE_MIN_SIZE,
> +   VIRTQUEUE_MAX_SIZE);
> +virtio_cleanup(vdev);
> +return;
> +}
> +
>  n->max_queues = MAX(n->nic_conf.peers.queues, 1);
>  if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
>  error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
> @@ -1930,17 +1959,11 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>  error_report("Defaulting to \"bh\"");
>  }
>  
> -for (i = 0; i < n->max_queues; i++) {
> -virtio_net_add_queue(n, i);
> -}
> -
> -n->ctrl_vq = virtio_add_queue(vdev, 64, virtio_net_handle_ctrl);
>  qemu_macaddr_default_if_unset(&n->nic_conf.macaddr);
>  memcpy(&n->mac[0], &n->nic_conf.macaddr, sizeof(n->mac));
>  n->status = VIRTIO_NET_S_LINK_UP;
>  n->announce_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL,
>   virtio_net_announce_timer, n);
> -
>  if (n->netclient_type) {
>  /*
>   * Happen when virtio_net_set_netclient_name has been called.
> @@ -1952,6 +1975,11 @@ static void virtio_net_device_realize(DeviceState 

Re: [virtio-dev] [RFC PATCH linux] iommu: Add virtio-iommu driver

2017-06-16 Thread Jean-Philippe Brucker
On 16/06/17 09:48, Bharat Bhushan wrote:
> Hi Jean
>> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
>> +  phys_addr_t paddr, size_t size, int prot) {
>> +int ret;
>> +struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +struct virtio_iommu_req_map req = {
>> +.head.type  = VIRTIO_IOMMU_T_MAP,
>> +.address_space  = cpu_to_le32(vdomain->id),
>> +.virt_addr  = cpu_to_le64(iova),
>> +.phys_addr  = cpu_to_le64(paddr),
>> +.size   = cpu_to_le64(size),
>> +};
>> +
>> +pr_debug("map %llu 0x%lx -> 0x%llx (%zu)\n", vdomain->id, iova,
>> + paddr, size);
> 
> A query: when I trace the above prints, I see the same physical address
> mapped to two different virtual addresses. Do you know why the kernel does
> this?

That really depends on which driver is calling into viommu. iommu_map is
called from the DMA API, which can be used by any device driver. Within
an address space, having multiple IOVAs point to the same PA isn't forbidden.

For example, looking at MAP requests for a virtio-net device, I get the
following trace:

ioas[1] map   0xfff3000 -> 0x8faa (4096)
ioas[1] map   0xfff2000 -> 0x8faa (4096)
ioas[1] map   0xfff1000 -> 0x8faa (4096)
ioas[1] map   0xfff -> 0x8faa (4096)
ioas[1] map   0xffef000 -> 0x8faa (4096)
ioas[1] map   0xffee000 -> 0x8faa (4096)
ioas[1] map   0xffed000 -> 0x8faa (4096)
ioas[1] map   0xffec000 -> 0x8faa (4096)
ioas[1] map   0xffeb000 -> 0x8faa (4096)
ioas[1] map   0xffea000 -> 0x8faa (4096)
ioas[1] map   0xffe8000 -> 0x8faa (8192)
...

During initialization, the virtio-net driver primes the rx queue with
receive buffers, which the host will then fill with network packets. It
calls virtqueue_add_inbuf_ctx to create descriptors on the rx virtqueue
for each buffer. Each buffer is 0x180 bytes here, so one 4k page can
contain around 10 (actually 11, with the last one crossing the page
boundary).

I guess the call trace goes like this:
 virtnet_open
 try_fill_recv
 add_recvbuf_mergeable
 virtqueue_add_inbuf_ctx
 vring_map_one_sg
 dma_map_page
 __iommu_dma_map

But the IOMMU cannot map fragments of pages, since the granule is 0x1000.
Therefore when virtqueue_add_inbuf_ctx maps the buffer, __iommu_dma_map
aligns address and size on full pages. Someone motivated could probably
optimize this by caching mapped pages and reusing IOVAs, but currently
that's how it goes.
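
As a stand-alone illustration of that rounding (not the actual
__iommu_dma_map() path; the buffer address below is made up):

/* Illustrative only: how a small rx buffer is rounded to the IOMMU
 * granule, so several buffers in one page map the same physical page. */
#include <stdio.h>

#define GRANULE 0x1000UL                      /* IOMMU page granule: 4 KiB */

int main(void)
{
    unsigned long buf = 0x8faa0180UL;    /* hypothetical 0x180-byte buffer */
    unsigned long len = 0x180;

    /* Align the address down and the size up to whole granules before
     * building the MAP request. */
    unsigned long pa   = buf & ~(GRANULE - 1);
    unsigned long size = ((buf + len + GRANULE - 1) & ~(GRANULE - 1)) - pa;

    printf("map pa 0x%lx size %lu\n", pa, size);      /* 0x8faa0000, 4096 */
    printf("full buffers per page: %lu\n", GRANULE / len); /* 4096/384 = 10 */
    return 0;
}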

Thanks,
Jean




[virtio-dev] [PATCH v2 1/2] virtio-net: enable configurable tx queue size

2017-06-16 Thread Wei Wang
This patch enables the virtio-net tx queue size to be configurable
between 256 (the default queue size) and 1024 by the user when the
vhost-user backend is used.

Currently, the maximum tx queue size for other backends is 512 due
to the following limitations:
- QEMU backend: the QEMU backend implementation in some cases may
send 1024+1 iovs to writev.
- Vhost_net backend: there is a possibility that the guest sends
a vring_desc of memory which crosses a MemoryRegion, thereby
generating more than 1024 iovs in total after translation from
guest-physical addresses in the backend.

Signed-off-by: Wei Wang 
---
 hw/net/virtio-net.c| 46 ++
 include/hw/virtio/virtio-net.h |  1 +
 2 files changed, 39 insertions(+), 8 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 7d091c9..e1a08fd 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -33,8 +33,11 @@
 
 /* previously fixed value */
 #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
+#define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
+
 /* for now, only allow larger queues; with virtio-1, guest can downsize */
 #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
+#define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
 
 /*
  * Calculate the number of bytes up to and including the given 'field' of
@@ -1491,18 +1494,33 @@ static void virtio_net_tx_bh(void *opaque)
 static void virtio_net_add_queue(VirtIONet *n, int index)
 {
 VirtIODevice *vdev = VIRTIO_DEVICE(n);
+NetClientState *nc = qemu_get_queue(n->nic);
 
 n->vqs[index].rx_vq = virtio_add_queue(vdev, n->net_conf.rx_queue_size,
virtio_net_handle_rx);
+
+/*
+ * Currently, backends other than vhost-user don't support 1024 queue
+ * size.
+ */
+if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
+nc->peer->info->type != NET_CLIENT_DRIVER_VHOST_USER) {
+fprintf(stderr, "warning: %s: queue size %d not supported\n",
+__func__, n->net_conf.tx_queue_size);
+n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
+}
+
 if (n->net_conf.tx && !strcmp(n->net_conf.tx, "timer")) {
 n->vqs[index].tx_vq =
-virtio_add_queue(vdev, 256, virtio_net_handle_tx_timer);
+virtio_add_queue(vdev, n->net_conf.tx_queue_size,
+ virtio_net_handle_tx_timer);
 n->vqs[index].tx_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
   virtio_net_tx_timer,
    &n->vqs[index]);
 } else {
 n->vqs[index].tx_vq =
-virtio_add_queue(vdev, 256, virtio_net_handle_tx_bh);
+virtio_add_queue(vdev, n->net_conf.tx_queue_size,
+ virtio_net_handle_tx_bh);
 n->vqs[index].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[index]);
 }
 
@@ -1910,6 +1928,17 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
 return;
 }
 
+if (n->net_conf.tx_queue_size < VIRTIO_NET_TX_QUEUE_MIN_SIZE ||
+n->net_conf.tx_queue_size > VIRTQUEUE_MAX_SIZE ||
+!is_power_of_2(n->net_conf.tx_queue_size)) {
+error_setg(errp, "Invalid tx_queue_size (= %" PRIu16 "), "
+   "must be a power of 2 between %d and %d",
+   n->net_conf.tx_queue_size, VIRTIO_NET_TX_QUEUE_MIN_SIZE,
+   VIRTQUEUE_MAX_SIZE);
+virtio_cleanup(vdev);
+return;
+}
+
 n->max_queues = MAX(n->nic_conf.peers.queues, 1);
 if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
 error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
@@ -1930,17 +1959,11 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
 error_report("Defaulting to \"bh\"");
 }
 
-for (i = 0; i < n->max_queues; i++) {
-virtio_net_add_queue(n, i);
-}
-
-n->ctrl_vq = virtio_add_queue(vdev, 64, virtio_net_handle_ctrl);
 qemu_macaddr_default_if_unset(&n->nic_conf.macaddr);
 memcpy(&n->mac[0], &n->nic_conf.macaddr, sizeof(n->mac));
 n->status = VIRTIO_NET_S_LINK_UP;
 n->announce_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL,
  virtio_net_announce_timer, n);
-
 if (n->netclient_type) {
 /*
  * Happen when virtio_net_set_netclient_name has been called.
@@ -1952,6 +1975,11 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
   object_get_typename(OBJECT(dev)), dev->id, n);
 }
 
+for (i = 0; i < n->max_queues; i++) {
+virtio_net_add_queue(n, i);
+}
+n->ctrl_vq = virtio_add_queue(vdev, 64, virtio_net_handle_ctrl);
+
 peer_test_vnet_hdr(n);
 if (peer_has_vnet_hdr(n)) {
 for (i = 0; i < n->max_queues; i++) {
@@ -2089,6 +2117,8 @@ static Property virtio_net_properties[] = {

[virtio-dev] [PATCH v2 2/2] virtio-net: code cleanup

2017-06-16 Thread Wei Wang
Use is_power_of_2(), and remove trailing "." from error_setg().

Signed-off-by: Wei Wang 
---
 hw/net/virtio-net.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index e1a08fd..e4cb5a7 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1919,9 +1919,9 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
  */
 if (n->net_conf.rx_queue_size < VIRTIO_NET_RX_QUEUE_MIN_SIZE ||
 n->net_conf.rx_queue_size > VIRTQUEUE_MAX_SIZE ||
-(n->net_conf.rx_queue_size & (n->net_conf.rx_queue_size - 1))) {
+!is_power_of_2(n->net_conf.rx_queue_size)) {
 error_setg(errp, "Invalid rx_queue_size (= %" PRIu16 "), "
-   "must be a power of 2 between %d and %d.",
+   "must be a power of 2 between %d and %d",
n->net_conf.rx_queue_size, VIRTIO_NET_RX_QUEUE_MIN_SIZE,
VIRTQUEUE_MAX_SIZE);
 virtio_cleanup(vdev);
@@ -1942,7 +1942,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
 n->max_queues = MAX(n->nic_conf.peers.queues, 1);
 if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
 error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
-   "must be a positive integer less than %d.",
+   "must be a positive integer less than %d",
n->max_queues, (VIRTIO_QUEUE_MAX - 1) / 2);
 virtio_cleanup(vdev);
 return;
-- 
2.7.4





Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size

2017-06-16 Thread Wei Wang

On 06/16/2017 04:57 PM, Jason Wang wrote:



On 2017年06月16日 11:22, Michael S. Tsirkin wrote:

I think the issues can be solved by VIRTIO_F_MAX_CHAIN_SIZE.

For now, how about splitting it into two series of patches:
1) enable 1024 tx queue size for vhost-user, to let the users of vhost-user
to easily use 1024 queue size.

Fine with me. 1) will get property from user but override it on
!vhost-user. Do we need a protocol flag? It seems prudent but we get
back to cross-version migration issues that are still pending a solution.

What do you have in mind about the protocol flag?

Btw, I just tested the patch for 1), and it works fine with migration from
the patched to the non-patched version of QEMU. I'll send it out; please
take a look.




Marc Andre, what's the status of that work?


2) enable VIRTIO_F_MAX_CHAIN_SIZE,  to enhance robustness.

Rather, to support it for more backends.


Ok. If we want to support different values of max chain size in the
future, it would be problematic for cross-backend migration; consider
the case of migrating from 2048 (vhost-user) to 1024 (qemu/vhost-kernel).




I think that wouldn't be a problem. If changing the backend can result in a
change of config.max_chain_size, a configuration change notification can be
injected into the guest; the guest will then read and get the new value.
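
As a rough guest-side sketch of that flow (VIRTIO_F_MAX_CHAIN_SIZE and the
max_chain_size config field are only a proposal, so the feature bit number
and the config offset below are made up):

/* Sketch only: re-read the proposed max_chain_size field when the device
 * signals a configuration change, e.g. after a backend switch. */
#include <linux/virtio.h>
#include <linux/virtio_config.h>

#define VIRTIO_F_MAX_CHAIN_SIZE          38   /* hypothetical feature bit */
#define VIRTIO_NET_CFG_MAX_CHAIN_OFFSET  12   /* hypothetical config offset */

static u16 max_chain_size;

/* Would be wired up as the virtio driver's .config_changed callback. */
static void virtnet_config_changed_sketch(struct virtio_device *vdev)
{
    if (virtio_has_feature(vdev, VIRTIO_F_MAX_CHAIN_SIZE))
        max_chain_size = virtio_cread16(vdev,
                                        VIRTIO_NET_CFG_MAX_CHAIN_OFFSET);
}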

Best,
Wei





Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v1] virtio-net: enable configurable tx queue size

2017-06-16 Thread Jason Wang



On 2017年06月16日 11:22, Michael S. Tsirkin wrote:

I think the issues can be solved by VIRTIO_F_MAX_CHAIN_SIZE.

For now, how about splitting it into two series of patches:
1) enable 1024 tx queue size for vhost-user, to let the users of vhost-user
to easily use 1024 queue size.

Fine with me. 1) will get property from user but override it on
!vhost-user. Do we need a protocol flag? It seems prudent but we get
back to cross-version migration issues that are still pending a solution.
Marc Andre, what's the status of that work?


2) enable VIRTIO_F_MAX_CHAIN_SIZE,  to enhance robustness.

Rather, to support it for more backends.


Ok. If we want to support different values of max chain size in the
future, it would be problematic for cross-backend migration; consider
the case of migrating from 2048 (vhost-user) to 1024 (qemu/vhost-kernel).


Thanks




Best,
Wei

--






RE: [virtio-dev] [RFC PATCH linux] iommu: Add virtio-iommu driver

2017-06-16 Thread Bharat Bhushan
Hi Jean

> -Original Message-
> From: virtio-dev@lists.oasis-open.org [mailto:virtio-dev@lists.oasis-
> open.org] On Behalf Of Jean-Philippe Brucker
> Sent: Saturday, April 08, 2017 12:53 AM
> To: io...@lists.linux-foundation.org; k...@vger.kernel.org;
> virtualizat...@lists.linux-foundation.org; virtio-dev@lists.oasis-open.org
> Cc: cd...@linaro.org; will.dea...@arm.com; robin.mur...@arm.com;
> lorenzo.pieral...@arm.com; j...@8bytes.org; m...@redhat.com;
> jasow...@redhat.com; alex.william...@redhat.com;
> marc.zyng...@arm.com
> Subject: [virtio-dev] [RFC PATCH linux] iommu: Add virtio-iommu driver
> 
> The virtio IOMMU is a para-virtualized device that allows sending IOMMU
> requests such as map/unmap over the virtio-mmio transport. This driver
> illustrates the initial proposal for virtio-iommu, which you hopefully
> received with it. It handles attach, detach, map and unmap requests.
> 
> The bulk of the code is to create requests and send them through virtio.
> Implementing the IOMMU API is fairly straightforward since the virtio-iommu
> MAP/UNMAP interface is almost identical. I threw in a custom
> map_sg() function which takes up some space, but is optional. The core
> function would send a sequence of map requests, waiting for a reply
> between each mapping. This optimization avoids yielding to the host after
> each map, and instead prepares a batch of requests in the virtio ring and
> kicks the host once.
> 
> It must be applied on top of the probe deferral work for IOMMU, currently
> under discussion. This makes it possible to dissociate early driver
> detection and device probing: device-tree or ACPI is parsed early to find
> which devices are
> translated by the IOMMU, but the IOMMU itself cannot be probed until the
> core virtio module is loaded.
> 
> Enabling DEBUG makes it extremely verbose at the moment, but it should be
> calmer in next versions.
> 
> Signed-off-by: Jean-Philippe Brucker 
> ---
>  drivers/iommu/Kconfig |  11 +
>  drivers/iommu/Makefile|   1 +
>  drivers/iommu/virtio-iommu.c  | 980 ++
>  include/uapi/linux/Kbuild |   1 +
>  include/uapi/linux/virtio_ids.h   |   1 +
>  include/uapi/linux/virtio_iommu.h | 142 ++
>  6 files changed, 1136 insertions(+)
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 37e204f3d9be..8cd56ee9a93a 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -359,4 +359,15 @@ config MTK_IOMMU_V1
> 
> if unsure, say N here.
> 
> +config VIRTIO_IOMMU
> + tristate "Virtio IOMMU driver"
> + depends on VIRTIO_MMIO
> + select IOMMU_API
> + select INTERVAL_TREE
> + select ARM_DMA_USE_IOMMU if ARM
> + help
> +   Para-virtualised IOMMU driver with virtio.
> +
> +   Say Y here if you intend to run this kernel as a guest.
> +
>  endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 195f7b997d8e..1199d8475802 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -27,3 +27,4 @@ obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o
>  obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index ..1cf4f57b7817
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,980 @@
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307,
> USA.
> + *
> + * Copyright (C) 2017 ARM Limited
> + *
> + * Author: Jean-Philippe Brucker   */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include 
> +
> +struct viommu_dev {
> + struct iommu_device iommu;
> + struct device   *dev;
> + struct virtio_device*vdev;
> +
> + struct virtqueue