Re: [PATCH v2 0/5] VSOCK: support mergeable rx buffer in vhost-vsock

2018-12-17 Thread jiangyiwen
On 2018/12/14 0:34, Stefan Hajnoczi wrote: > On Wed, Dec 12, 2018 at 05:25:50PM +0800, jiangyiwen wrote: >> Now vsock only supports sending/receiving small packets, so it can't achieve >> high performance. As previously discussed with Jason Wang, I revisited the >> idea from vhost-net of the mergeable rx buffer
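
The mergeable-rx-buffer idea borrowed from vhost-net is: instead of requiring one guest rx buffer big enough for the largest packet, the host scatters a large payload across several fixed-size buffers and records the count in a header carried by the first buffer. A minimal userspace sketch of the bookkeeping, with a hypothetical header modeled on virtio-net's `virtio_net_hdr_mrg_rxbuf.num_buffers` (names are illustrative, not the patch's actual structs):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical header, modeled on virtio-net's mergeable rx header:
 * the first buffer of a packet carries num_buffers, telling the guest
 * how many rx buffers the payload was scattered across. */
struct vsock_mrg_rxbuf_hdr {
    uint16_t num_buffers;
};

/* How many fixed-size rx buffers a payload of len bytes occupies,
 * given that the first buffer also holds the header. */
static uint16_t mrg_buffers_needed(size_t len, size_t buf_size)
{
    size_t first_payload = buf_size - sizeof(struct vsock_mrg_rxbuf_hdr);

    if (len <= first_payload)
        return 1;                      /* header + payload fit in one */
    len -= first_payload;
    /* remaining bytes go into whole buffers, rounding up */
    return (uint16_t)(1 + (len + buf_size - 1) / buf_size);
}
```

With 4 KiB buffers this lets a 64 KiB packet land in 17 buffers instead of forcing the guest to post 64 KiB buffers for every packet, which is where the bandwidth win comes from.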

Re: [PATCH v2 3/5] VSOCK: support receive mergeable rx buffer in guest

2018-12-17 Thread jiangyiwen
On 2018/12/14 0:20, Stefan Hajnoczi wrote: > On Wed, Dec 12, 2018 at 05:31:39PM +0800, jiangyiwen wrote: >> +static struct virtio_vsock_pkt *receive_mergeable(struct virtqueue *vq, >> +struct virtio_vsock *vsock, unsigned int *total_len) >> +{ >> +struct virtio_vsock_pkt *pkt; >> +
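
The preview shows the signature of the guest-side `receive_mergeable()` helper. As a rough userspace analogue of the reassembly step it performs (a deliberate simplification that ignores virtqueue descriptors, DMA and error handling — the buffer walk is the point):

```c
#include <stdint.h>
#include <string.h>

#define BUF_SIZE 8   /* tiny buffers for illustration only */

/* hypothetical stand-in for the mergeable rx header */
struct hdr { uint16_t num_buffers; };

/* Walk num_buffers rx buffers and copy their payloads into one
 * contiguous packet; returns the total payload length copied. */
static size_t merge_rx_buffers(uint8_t bufs[][BUF_SIZE], const size_t *lens,
                               uint8_t *pkt)
{
    struct hdr h;
    size_t total = 0;

    memcpy(&h, bufs[0], sizeof(h));
    /* first buffer: payload follows the header */
    memcpy(pkt, bufs[0] + sizeof(h), lens[0] - sizeof(h));
    total += lens[0] - sizeof(h);

    /* continuation buffers are pure payload */
    for (uint16_t i = 1; i < h.num_buffers; i++) {
        memcpy(pkt + total, bufs[i], lens[i]);
        total += lens[i];
    }
    return total;
}
```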

Re: [PATCH v2 3/5] VSOCK: support receive mergeable rx buffer in guest

2018-12-17 Thread jiangyiwen
On 2018/12/13 22:29, Michael S. Tsirkin wrote: > On Thu, Dec 13, 2018 at 10:38:09AM +0800, jiangyiwen wrote: >> Hi Michael, >> >> On 2018/12/12 23:31, Michael S. Tsirkin wrote: >>> On Wed, Dec 12, 2018 at 05:31:39PM +0800, jiangyiwen wrote: When the guest receives mergeable rx buffers, it can merge

Re: [PATCH v2 2/5] VSOCK: support fill data to mergeable rx buffer in host

2018-12-17 Thread jiangyiwen
On 2018/12/13 23:49, Stefan Hajnoczi wrote: > On Thu, Dec 13, 2018 at 11:08:04AM +0800, jiangyiwen wrote: >> On 2018/12/12 23:37, Michael S. Tsirkin wrote: >>> On Wed, Dec 12, 2018 at 05:29:31PM +0800, jiangyiwen wrote: When vhost supports the VIRTIO_VSOCK_F_MRG_RXBUF feature, it will merge

Re: [PATCH v2 2/5] VSOCK: support fill data to mergeable rx buffer in host

2018-12-17 Thread jiangyiwen
On 2018/12/13 22:50, Michael S. Tsirkin wrote: > On Thu, Dec 13, 2018 at 11:11:48AM +0800, jiangyiwen wrote: >> On 2018/12/13 3:09, David Miller wrote: >>> From: jiangyiwen >>> Date: Wed, 12 Dec 2018 17:29:31 +0800 >>> diff --git a/include/uapi/linux/virtio_vsock.h

Re: [PATCH v2 2/5] VSOCK: support fill data to mergeable rx buffer in host

2018-12-17 Thread jiangyiwen
On 2018/12/13 22:48, Michael S. Tsirkin wrote: > On Thu, Dec 13, 2018 at 11:08:04AM +0800, jiangyiwen wrote: >> On 2018/12/12 23:37, Michael S. Tsirkin wrote: >>> On Wed, Dec 12, 2018 at 05:29:31PM +0800, jiangyiwen wrote: When vhost supports the VIRTIO_VSOCK_F_MRG_RXBUF feature, it will

Re: [PATCH net-next 0/3] vhost: accelerate metadata access through vmap()

2018-12-17 Thread Jason Wang
On 2018/12/14 4:12 AM, Michael S. Tsirkin wrote: On Thu, Dec 13, 2018 at 06:10:19PM +0800, Jason Wang wrote: Hi: This series tries to access virtqueue metadata through kernel virtual addresses instead of the copy_user() friends, since they have too many overheads like checks, spec barriers or even

Re: [PATCH net-next 1/3] vhost: generalize adding used elem

2018-12-17 Thread Jason Wang
On 2018/12/14 3:41 AM, Michael S. Tsirkin wrote: On Thu, Dec 13, 2018 at 06:10:20PM +0800, Jason Wang wrote: Use one generic vhost_copy_to_user() instead of two dedicated accessors. This will simplify the conversion to fine-grained accessors. Signed-off-by: Jason Wang The reason we did it like

Re: [PATCH net-next 3/3] vhost: access vq metadata through kernel virtual address

2018-12-17 Thread Jason Wang
On 2018/12/13 11:44 PM, Michael S. Tsirkin wrote: On Thu, Dec 13, 2018 at 06:10:22PM +0800, Jason Wang wrote: It was noticed that the copy_user() friends that were used to access virtqueue metadata tend to be very expensive for dataplane implementations like vhost, since they involve lots of
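
The gist of the vmap() approach in this thread: translate and pin the ring addresses once when the virtqueue is set up, then dereference a plain kernel pointer on the data path instead of paying the checks and barriers of copy_user() on every access. A userspace analogue of the two access paths (the real patch pins pages with GUP and maps them with vmap(), invalidating via MMU notifiers — none of that is modeled here):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* toy "guest memory"; translate() stands in for the per-access address
 * checks that the copy_user() path performs */
static uint8_t guest_mem[4096];

static void *translate(uint64_t gpa, size_t len)
{
    if (gpa + len > sizeof(guest_mem))   /* bounds-checked every call */
        return NULL;
    return guest_mem + gpa;
}

/* slow path: translate + copy on each and every access */
static uint16_t read_idx_slow(uint64_t gpa)
{
    uint16_t v = 0;
    void *p = translate(gpa, sizeof(v));

    if (p)
        memcpy(&v, p, sizeof(v));
    return v;
}

/* fast path: translate once when the ring address is set, then keep a
 * direct pointer -- the userspace analogue of vmap()ing the metadata */
static uint16_t *map_idx_once(uint64_t gpa)
{
    return translate(gpa, sizeof(uint16_t));
}
```

Both paths read the same memory; the win is that the fast path skips the per-access translation, which is exactly the overhead the series measured.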

Re: [PATCH net-next 0/3] vhost: accelerate metadata access through vmap()

2018-12-17 Thread Jason Wang
On 2018/12/13 11:27 PM, Michael S. Tsirkin wrote: On Thu, Dec 13, 2018 at 06:10:19PM +0800, Jason Wang wrote: Hi: This series tries to access virtqueue metadata through kernel virtual addresses instead of the copy_user() friends, since they have too many overheads like checks, spec barriers or even

Re: [PATCH net V2 4/4] vhost: log dirty page correctly

2018-12-17 Thread Jason Wang
On 2018/12/13 10:31 PM, Michael S. Tsirkin wrote: Just to make sure I understand this. It looks to me we should: - allow passing GIOVA->GPA through UAPI - cache GIOVA->GPA somewhere, but still use GIOVA->HVA in the device IOTLB for performance. Is this what you suggest? Thanks Not really. We
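
The translation chain under discussion: with a vIOMMU, the device sees guest I/O virtual addresses (GIOVA), which map to guest physical addresses (GPA), which in turn map to host virtual addresses (HVA). The data path wants the composed GIOVA->HVA in the IOTLB for speed, but dirty-page logging for migration must be keyed by GPA, so the GIOVA->GPA step cannot be thrown away. A toy illustration with single-range maps (addresses are made up):

```c
#include <stdint.h>

#define PAGE_SHIFT 12

/* toy single-range translation: [from, from+size) -> [to, ...) */
struct map { uint64_t from, to, size; };

/* returns the translated address, or UINT64_MAX if out of range */
static uint64_t xlate(const struct map *m, uint64_t addr)
{
    if (addr < m->from || addr >= m->from + m->size)
        return UINT64_MAX;
    return m->to + (addr - m->from);
}
```

Composing `xlate(gpa_to_hva, xlate(giova_to_gpa, giova))` gives the IOTLB's GIOVA->HVA entry, while the intermediate GPA (shifted down by `PAGE_SHIFT`) is what indexes the dirty bitmap.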

Re: [PATCH v2 5/5] VSOCK: batch sending rx buffer to increase bandwidth

2018-12-17 Thread jiangyiwen
On 2018/12/13 23:17, Stefan Hajnoczi wrote: > On Wed, Dec 12, 2018 at 05:35:27PM +0800, jiangyiwen wrote: >> Batch sending rx buffer can improve total bandwidth. >> >> Signed-off-by: Yiwen Jiang >> --- > > Please send patches with git-send-email --thread --no-chain-reply-to so > that your patch

kernel vhost demands an interrupt from guest when the ring is full in order to enable guest to submit new packets to the queue

2018-12-17 Thread Steven Luong (sluong) via Virtualization
Folks, We came across a memory race condition between the VPP vhost driver and the kernel vhost. VPP is running a tap interface over a vhost backend. In this case, VPP is acting in vhost driver mode and the kernel vhost is acting in vhost device mode. In the kernel vhost’s TX traffic
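
The notification handshake this thread revolves around is the virtio EVENT_IDX mechanism: each side publishes an event index, and the other side only sends a notification when its progress crosses that index. With the ring full, the driver needs the device's "I used a buffer" signal before it can refill, so getting this window check right matters. The check itself, as given in the virtio spec (unsigned 16-bit wraparound does the heavy lifting):

```c
#include <stdint.h>

/* vring_need_event() per the virtio spec (EVENT_IDX feature): notify
 * only if event_idx lies in (old_idx, new_idx], i.e. within the window
 * of entries published since the other side last looked. The uint16_t
 * casts make the comparison correct across index wraparound. */
static int vring_need_event(uint16_t event_idx, uint16_t new_idx,
                            uint16_t old_idx)
{
    return (uint16_t)(new_idx - event_idx - 1) <
           (uint16_t)(new_idx - old_idx);
}
```

The same check is used in both directions (driver kicking the device, device interrupting the driver), which is why a disagreement between the two sides about when an interrupt is owed can stall a full ring.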