Re: [Qemu-devel] [PATCH v12 2/7] virtio-pmem: Add virtio pmem driver

2019-06-12 Thread Pankaj Gupta
> > Hi Pankaj, > > On Tue, 11 Jun 2019 23:34:50 -0400 (EDT) > Pankaj Gupta wrote: > > > Hi Cornelia, > > > > > On Tue, 11 Jun 2019 22:07:57 +0530 > > > Pankaj Gupta wrote: > > > > > > + err1 = virtqueue_kick(vpmem->req_vq); > > > > + spin_unlock_irqrestore(&vpmem->pmem_lock, f

[PATCH v5 0/8] s390: virtio: support protected virtualization

2019-06-12 Thread Halil Pasic
Enhanced virtualization protection technology may require the use of bounce buffers for I/O. While support for this was built into the virtio core, virtio-ccw wasn't changed accordingly. Some background on the technology (not part of this series) and the terminology used. * Protected Virtualization (
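The cover letter's core idea — I/O data must pass through memory the hypervisor is allowed to touch — can be illustrated with a toy bounce-buffer sketch. This is not kernel code; in the kernel, swiotlb does this transparently under the DMA API. Buffer names and sizes here are made up for the example.

```c
#include <assert.h>
#include <string.h>

/* Toy illustration of bounce buffering: guest pages are private
 * (protected/encrypted), so I/O is staged through a shared buffer the
 * hypervisor may access. Plain arrays stand in for real memory regions. */
static char shared_bounce[64]; /* the only region the "hypervisor" can touch */

void dma_to_device(const char *private_buf, size_t len)
{
    memcpy(shared_bounce, private_buf, len); /* bounce: private -> shared */
}

void dma_from_device(char *private_buf, size_t len)
{
    memcpy(private_buf, shared_bounce, len); /* bounce: shared -> private */
}
```

The kernel's swiotlb layer performs exactly this staging on map/unmap, which is why forcing swiotlb (patch 1/8) is enough for devices that use the DMA API correctly.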

[PATCH v5 1/8] s390/mm: force swiotlb for protected virtualization

2019-06-12 Thread Halil Pasic
On s390, protected virtualization guests have to use bounced I/O buffers. That requires some plumbing. Let us make sure that any device that uses the DMA API with direct ops correctly is spared from the problems that a hypervisor attempting I/O to a non-shared page would bring. Signed-off-by: Halil Pas

[PATCH v5 2/8] s390/cio: introduce DMA pools to cio

2019-06-12 Thread Halil Pasic
To support protected virtualization, cio will need to make sure the memory used for communication with the hypervisor is DMA memory. Let us introduce one global pool for cio. Our DMA pools are implemented as a gen_pool backed with DMA pages. The idea is to avoid each allocation effectively wasting
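The pool idea described above — one backing DMA page sub-divided into small chunks so each allocation does not burn a whole page — can be sketched with a minimal bitmap allocator. This is an illustration only: the real code uses the kernel's gen_pool over dma_alloc_coherent() pages, and all names below are invented.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy stand-in for the cio DMA pool: one backing "DMA page" is carved
 * into fixed-size chunks tracked by a bitmap. */
#define POOL_SIZE  4096
#define CHUNK_SIZE 64
#define NCHUNKS    (POOL_SIZE / CHUNK_SIZE)   /* 64 chunks, fits one bitmap word */

static unsigned char pool_backing[POOL_SIZE]; /* would come from dma_alloc_coherent() */
static uint64_t pool_bitmap;                  /* one bit per chunk */

void *pool_zalloc(void)
{
    for (int i = 0; i < NCHUNKS; i++) {
        if (!(pool_bitmap & (1ULL << i))) {
            pool_bitmap |= 1ULL << i;
            memset(pool_backing + i * CHUNK_SIZE, 0, CHUNK_SIZE);
            return pool_backing + i * CHUNK_SIZE;
        }
    }
    return 0; /* pool exhausted */
}

void pool_free(void *p)
{
    size_t i = ((unsigned char *)p - pool_backing) / CHUNK_SIZE;
    pool_bitmap &= ~(1ULL << i);
}
```

Small control blocks then share one shared page instead of each allocation pinning its own, which is the waste the commit message alludes to.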

[PATCH v5 3/8] s390/cio: add basic protected virtualization support

2019-06-12 Thread Halil Pasic
As virtio-ccw devices are channel devices, we need to use the dma area within the common I/O layer for any communication with the hypervisor. Note that we do not need to use that area for control blocks directly referenced by instructions, e.g. the orb. It handles neither QDIO in the common code,

[PATCH v5 4/8] s390/airq: use DMA memory for adapter interrupts

2019-06-12 Thread Halil Pasic
Protected virtualization guests have to use shared pages for airq notifier bit vectors, because the hypervisor needs to write these bits. Let us make sure we allocate DMA memory for the notifier bit vectors by replacing the kmem_cache with a dma_cache and kalloc() with cio_dma_zalloc(). Signed-off-by

[PATCH v5 6/8] virtio/s390: add indirection to indicators access

2019-06-12 Thread Halil Pasic
This will come in handy soon when we pull out the indicators from virtio_ccw_device to a memory area that is shared with the hypervisor (in particular for protected virtualization guests). Signed-off-by: Halil Pasic Reviewed-by: Pierre Morel Reviewed-by: Cornelia Huck --- drivers/s390/virtio/v

[PATCH v5 5/8] virtio/s390: use cacheline aligned airq bit vectors

2019-06-12 Thread Halil Pasic
The flag AIRQ_IV_CACHELINE was recently added to airq_iv_create(). Let us use it! We actually wanted the vector to span a cacheline all along. Signed-off-by: Halil Pasic Reviewed-by: Christian Borntraeger Reviewed-by: Cornelia Huck --- drivers/s390/virtio/virtio_ccw.c | 3 ++- 1 file changed,

[PATCH v5 8/8] virtio/s390: make airq summary indicators DMA

2019-06-12 Thread Halil Pasic
The hypervisor needs to interact with the summary indicators, so these need to be DMA memory as well (at least for protected virtualization guests). Signed-off-by: Halil Pasic Reviewed-by: Cornelia Huck --- drivers/s390/virtio/virtio_ccw.c | 32 1 file changed, 24 i

[PATCH v5 7/8] virtio/s390: use DMA memory for ccw I/O and classic notifiers

2019-06-12 Thread Halil Pasic
Before, virtio-ccw could get away with not using the DMA API for the pieces of memory it does ccw I/O with. With protected virtualization this has to change, since the hypervisor needs to read and sometimes also write these pieces of memory. The hypervisor is supposed to poke the classic notifiers, if

[PATCH v13 0/7] virtio pmem driver

2019-06-12 Thread Pankaj Gupta
This patch series is ready to be merged via the nvdimm tree as discussed with Dan. We have acks/reviews on the XFS, EXT4, device mapper & VIRTIO patches. This version has minor changes in patch 2, keeping all the existing r-o-bs. Jakob (CCed) also tested the patch series and confirmed that v9 works.

[PATCH v13 1/7] libnvdimm: nd_region flush callback support

2019-06-12 Thread Pankaj Gupta
This patch adds functionality to perform a flush from guest to host over VIRTIO. We are registering a callback based on the 'nd_region' type. The virtio_pmem driver requires this special flush function. For the rest of the region types we are registering the existing flush function. Report error returned by host fsy
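The dispatch described above — pick a flush callback at region registration time based on the region type — can be sketched as a function-pointer hook. All struct and function names below are invented for illustration; the real code hangs the callback off nd_region.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of per-region flush callbacks: a region carries a flush
 * function pointer chosen when the region is registered. */
struct region {
    int is_virtio;                /* stand-in for the nd_region type check */
    int (*flush)(struct region *);
};

static int generic_flush(struct region *r) { (void)r; return 0; }

/* pretend this issues a VIRTIO flush request and returns the host status */
static int virtio_flush(struct region *r) { (void)r; return 100; }

void region_register(struct region *r)
{
    /* choose the callback by region type, as the patch does for nd_region */
    r->flush = r->is_virtio ? virtio_flush : generic_flush;
}
```

Callers then invoke `r->flush(r)` without caring which backend services the flush, and a host-side fsync error (the "report error" part of the commit message) propagates through the return value.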

[PATCH v13 2/7] virtio-pmem: Add virtio pmem driver

2019-06-12 Thread Pankaj Gupta
This patch adds the virtio-pmem driver for KVM guests. The guest reads the persistent memory range information from Qemu over VIRTIO and registers it on nvdimm_bus. It also creates an nd_region object with the persistent memory range information so that the existing 'nvdimm/pmem' driver can reserve this into sy

[PATCH v13 3/7] libnvdimm: add dax_dev sync flag

2019-06-12 Thread Pankaj Gupta
This patch adds a 'DAXDEV_SYNC' flag which is set for an nd_region doing synchronous flush. This is later used to disable MAP_SYNC functionality for the ext4 & xfs filesystems for devices that don't support synchronous flush. Signed-off-by: Pankaj Gupta --- drivers/dax/bus.c| 2 +- drivers/dax/sup

[PATCH v13 4/7] dm: enable synchronous dax

2019-06-12 Thread Pankaj Gupta
This patch sets the dax device 'DAXDEV_SYNC' flag if all the target devices of the device mapper support synchronous DAX. If the device mapper consists of both synchronous and asynchronous dax devices, we don't set the 'DAXDEV_SYNC' flag. 'dm_table_supports_dax' is refactored to pass 'iterate_devices_fn' as arg
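The rule stated above — the mapped device is synchronous only if every underlying target is — is a simple all-of check. The sketch below models it with a flag array; the real code walks targets via iterate_devices_fn, and the name here is invented.

```c
#include <assert.h>

/* Sketch of the device-mapper rule: DAXDEV_SYNC is set on the mapped
 * device only if every underlying target supports synchronous DAX. */
int table_supports_sync_dax(const int *target_is_sync, int ntargets)
{
    for (int i = 0; i < ntargets; i++)
        if (!target_is_sync[i])
            return 0; /* one async target (e.g. virtio-pmem) disables it */
    return 1;
}
```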

[PATCH v13 5/7] dax: check synchronous mapping is supported

2019-06-12 Thread Pankaj Gupta
This patch introduces the 'daxdev_mapping_supported' helper, which checks if 'MAP_SYNC' is supported with the filesystem mapping. It also checks if the corresponding dax_device is synchronous. The virtio pmem device is asynchronous and does not support VM_SYNC. Suggested-by: Jan Kara Signed-off-by: Pankaj Gup
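The helper's logic reduces to: a mapping is fine unless it requests sync semantics on a device that cannot provide them. A minimal sketch, with a stand-in flag value and an invented function name:

```c
#include <assert.h>

#define MAP_SYNC_FLAG 0x1 /* stand-in for the real MAP_SYNC/VM_SYNC flag */

/* Sketch of daxdev_mapping_supported's logic: reject only the
 * combination "MAP_SYNC requested" + "dax device is asynchronous". */
int mapping_supported(unsigned long vm_flags, int dax_is_sync)
{
    if (!(vm_flags & MAP_SYNC_FLAG))
        return 1;          /* no sync semantics requested: always fine */
    return dax_is_sync;    /* MAP_SYNC needs a synchronous dax device */
}
```

For virtio-pmem, `dax_is_sync` is false, so a MAP_SYNC mmap fails — which is what the ext4 and xfs patches later in the series enforce.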

[PATCH v13 6/7] ext4: disable map_sync for async flush

2019-06-12 Thread Pankaj Gupta
Don't support 'MAP_SYNC' with non-DAX files and DAX files with an asynchronous dax_device. Virtio pmem provides an asynchronous host page cache flush mechanism. We don't support 'MAP_SYNC' with virtio pmem and ext4. Signed-off-by: Pankaj Gupta Reviewed-by: Jan Kara --- fs/ext4/file.c | 10 ++

[PATCH v13 7/7] xfs: disable map_sync for async flush

2019-06-12 Thread Pankaj Gupta
Don't support 'MAP_SYNC' with non-DAX files and DAX files with an asynchronous dax_device. Virtio pmem provides an asynchronous host page cache flush mechanism. We don't support 'MAP_SYNC' with virtio pmem and xfs. Signed-off-by: Pankaj Gupta Reviewed-by: Darrick J. Wong --- fs/xfs/xfs_file.c | 9

Re: [PATCH v4 4/8] s390/airq: use DMA memory for adapter interrupts

2019-06-12 Thread Halil Pasic
On Wed, 12 Jun 2019 08:21:27 +0200 Cornelia Huck wrote: > On Wed, 12 Jun 2019 02:32:31 +0200 > Halil Pasic wrote: > > > On Tue, 11 Jun 2019 18:19:44 +0200 > > Cornelia Huck wrote: > > > > > On Tue, 11 Jun 2019 16:27:21 +0200 > > > Halil Pasic wrote: > > > > > IMHO the cleanest thing to

Re: [PATCH v4 4/8] s390/airq: use DMA memory for adapter interrupts

2019-06-12 Thread Cornelia Huck
On Wed, 12 Jun 2019 15:33:24 +0200 Halil Pasic wrote: > On Wed, 12 Jun 2019 08:21:27 +0200 > Cornelia Huck wrote: > > > On Wed, 12 Jun 2019 02:32:31 +0200 > > Halil Pasic wrote: > > > > > On Tue, 11 Jun 2019 18:19:44 +0200 > > > Cornelia Huck wrote: > > > > > > > On Tue, 11 Jun 2019 1

Re: [PATCH v13 2/7] virtio-pmem: Add virtio pmem driver

2019-06-12 Thread Cornelia Huck
On Wed, 12 Jun 2019 18:15:22 +0530 Pankaj Gupta wrote: > This patch adds virtio-pmem driver for KVM guest. > > Guest reads the persistent memory range information from > Qemu over VIRTIO and registers it on nvdimm_bus. It also > creates a nd_region object with the persistent memory > range infor

Re: [PATCH v5 2/8] s390/cio: introduce DMA pools to cio

2019-06-12 Thread Cornelia Huck
On Wed, 12 Jun 2019 13:12:30 +0200 Halil Pasic wrote: > To support protected virtualization cio will need to make sure the > memory used for communication with the hypervisor is DMA memory. > > Let us introduce one global pool for cio. > > Our DMA pools are implemented as a gen_pool backed with

Re: [PATCH v5 4/8] s390/airq: use DMA memory for adapter interrupts

2019-06-12 Thread Cornelia Huck
On Wed, 12 Jun 2019 13:12:32 +0200 Halil Pasic wrote: > Protected virtualization guests have to use shared pages for airq > notifier bit vectors, because hypervisor needs to write these bits. > > Let us make sure we allocate DMA memory for the notifier bit vectors by > replacing the kmem_cache w

Re: [PATCH v5 4/8] s390/airq: use DMA memory for adapter interrupts

2019-06-12 Thread Halil Pasic
On Wed, 12 Jun 2019 16:35:01 +0200 Cornelia Huck wrote: > On Wed, 12 Jun 2019 13:12:32 +0200 > Halil Pasic wrote: [..] > > --- a/drivers/s390/cio/css.c > > +++ b/drivers/s390/cio/css.c > > @@ -1184,6 +1184,7 @@ static int __init css_bus_init(void) > > ret = cio_dma_pool_init(); > > if

Re: [Qemu-devel] [PATCH v13 2/7] virtio-pmem: Add virtio pmem driver

2019-06-12 Thread Pankaj Gupta
> > > This patch adds virtio-pmem driver for KVM guest. > > > > Guest reads the persistent memory range information from > > Qemu over VIRTIO and registers it on nvdimm_bus. It also > > creates a nd_region object with the persistent memory > > range information so that existing 'nvdimm/pmem' dr

[PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently

2019-06-12 Thread Nadav Amit via Virtualization
To improve TLB shootdown performance, flush the remote and local TLBs concurrently. Introduce flush_tlb_multi() that does so. The current flush_tlb_others() interface is kept, since paravirtual interfaces need to be adapted first before it can be removed. This is left for future work. In such PV en

Re: [PATCH v2 4/4] drm/virtio: Add memory barriers for capset cache.

2019-06-12 Thread Gerd Hoffmann
On Mon, Jun 10, 2019 at 02:18:10PM -0700, davidri...@chromium.org wrote: > From: David Riley > > After data is copied to the cache entry, atomic_set is used to indicate > that the data in the entry is valid without appropriate memory barriers. > Similarly the read side was missing the corresponding
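The fix under review is the classic publish/subscribe pairing: a release store on the "valid" flag after filling the entry, and an acquire load before reading it. A minimal C11 sketch of the pattern (names invented; the kernel patch uses its own barrier primitives around atomic_set/atomic_read):

```c
#include <assert.h>
#include <stdatomic.h>
#include <string.h>

/* Publish a cache entry with release/acquire ordering so the data copy
 * is ordered with respect to the validity flag. */
static char entry_data[16];
static atomic_int entry_valid;

void publish_entry(const char *data, size_t len)
{
    memcpy(entry_data, data, len);
    /* release: the memcpy above cannot be reordered past this store */
    atomic_store_explicit(&entry_valid, 1, memory_order_release);
}

int read_entry(char *out, size_t len)
{
    /* acquire: pairs with the release store in publish_entry */
    if (!atomic_load_explicit(&entry_valid, memory_order_acquire))
        return 0; /* entry not (yet) valid */
    memcpy(out, entry_data, len);
    return 1;
}
```

Without the barriers, a reader could observe `entry_valid == 1` while still seeing stale `entry_data`, which is exactly the race the patch addresses.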