Don't support 'MAP_SYNC' with non-DAX files or DAX files
backed by an asynchronous dax_device. Virtio pmem provides
an asynchronous host page cache flush mechanism, so we don't
support 'MAP_SYNC' with virtio pmem and xfs.
Signed-off-by: Pankaj Gupta
---
fs/xfs/xfs_file.c | 10 ++
1 file changed, 6
Don't support 'MAP_SYNC' with non-DAX files or DAX files
backed by an asynchronous dax_device. Virtio pmem provides
an asynchronous host page cache flush mechanism, so we don't
support 'MAP_SYNC' with virtio pmem and ext4.
Signed-off-by: Pankaj Gupta
---
fs/ext4/file.c | 11 ++-
1 file changed, 6
This patch introduces the 'daxdev_mapping_supported' helper,
which checks whether 'MAP_SYNC' is supported for a filesystem
mapping. It also checks whether the corresponding dax_device
is synchronous. The virtio pmem device is asynchronous and
does not support VM_SYNC.
Suggested-by: Jan Kara
Signed-off-by: Pankaj
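The check described above can be modelled in plain, standalone C. This is a hedged sketch of the idea, not the kernel implementation: the struct layout and the `VM_SYNC` value here are stand-ins, though the helper's name and the rule (MAP_SYNC requires a synchronous dax_device) follow the patch description.

```c
#include <stdbool.h>

#define VM_SYNC 0x1UL   /* stand-in for the kernel's VM_SYNC vma flag */

/* Simplified model of a dax_device; only the property the helper
 * needs to inspect. */
struct dax_device {
    bool synchronous;   /* true when the device flushes synchronously */
};

/* A mapping is supported unless it requests MAP_SYNC (VM_SYNC) while
 * the backing dax_device can only flush asynchronously. */
static bool daxdev_mapping_supported(unsigned long vm_flags,
                                     const struct dax_device *dax_dev)
{
    if (!(vm_flags & VM_SYNC))
        return true;             /* no MAP_SYNC requested: always fine */
    return dax_dev->synchronous; /* MAP_SYNC needs a synchronous device */
}
```

A filesystem's mmap path would call this before accepting the mapping and return an error for the unsupported combination.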
This patch adds the 'DAXDEV_SYNC' flag, which is set
for an nd_region performing synchronous flushes. It is
later used to disable MAP_SYNC functionality in the
ext4 and xfs filesystems for devices that don't support
synchronous flush.
Signed-off-by: Pankaj Gupta
---
drivers/dax/bus.c | 2 +-
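A capability flag like this is typically a bit in a per-device flags word, set at device creation and queried later. The sketch below models that pattern; `set_dax_synchronous()` and `dax_synchronous()` follow the naming in the patch description, but this standalone code is illustrative, not the kernel's.

```c
#include <stdbool.h>

enum dax_device_flags {
    DAXDEV_SYNC = 1UL << 0,   /* device supports synchronous flush */
};

struct dax_device {
    unsigned long flags;
};

/* Called for regions that flush synchronously (ordinary nvdimm). */
static void set_dax_synchronous(struct dax_device *dax_dev)
{
    dax_dev->flags |= DAXDEV_SYNC;
}

/* Queried by filesystems before accepting a MAP_SYNC mapping. */
static bool dax_synchronous(const struct dax_device *dax_dev)
{
    return dax_dev->flags & DAXDEV_SYNC;
}
```

A virtio pmem device simply never sets the bit, so the MAP_SYNC checks in ext4 and xfs fail for it.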
This patch adds the virtio-pmem driver for KVM guests.
The guest reads the persistent memory range information
from Qemu over VIRTIO and registers it on the nvdimm_bus.
It also creates an nd_region object with the persistent
memory range information so that the existing 'nvdimm/pmem'
driver can reserve this into
This patch adds functionality to perform a flush from guest
to host over VIRTIO. We register a callback based on the
'nd_region' type: the virtio_pmem driver requires this special
flush function, while for the rest of the region types we register
the existing flush function. Any error returned by the host is reported
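The per-region-type callback registration described above amounts to selecting a function pointer at region creation. A hedged standalone model, with illustrative names (the region struct and flush functions here are stand-ins, not the kernel's):

```c
#include <stddef.h>

enum region_type { REGION_PMEM, REGION_VIRTIO_PMEM };

typedef int (*flush_fn)(void);

/* Existing flush path used by ordinary pmem regions. */
static int generic_nvdimm_flush(void) { return 0; }

/* Special path for virtio pmem: would build a VIRTIO flush request,
 * wait for the host, and return any error the host reports. */
static int virtio_pmem_flush(void) { return 0; }

struct nd_region_model {
    enum region_type type;
    flush_fn flush;
};

/* Pick the flush callback based on the region type. */
static void register_flush(struct nd_region_model *region)
{
    region->flush = (region->type == REGION_VIRTIO_PMEM)
                        ? virtio_pmem_flush
                        : generic_nvdimm_flush;
}
```

Callers then invoke `region->flush()` without caring which backend is behind it.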
This patch series implements "virtio pmem".
"virtio pmem" is fake persistent memory (nvdimm) in the guest
which allows bypassing the guest page cache. The series also
implements a VIRTIO-based asynchronous flush mechanism.
The guest kernel driver is shared in this patchset with the
changes
On Fri, 5 Apr 2019 01:16:15 +0200
Halil Pasic wrote:
> Virtio-ccw relies on cio mechanisms for bootstrapping the ccw device.
Well, a ccw device is, by definition, using cio mechanisms ;)
Better say: "As virtio-ccw devices are channel devices, we need to use
the dma area for any communication
On Tue, 9 Apr 2019 12:54:16 +0200
Halil Pasic wrote:
> On Tue, 9 Apr 2019 12:16:47 +0200
> Cornelia Huck wrote:
>
> > On Fri, 5 Apr 2019 01:16:13 +0200
> > Halil Pasic wrote:
> >
> > > On s390 protected virtualization guests also have to use bounce I/O
> > > buffers. That requires some
On Tue, 9 Apr 2019 14:11:14 +0200
Halil Pasic wrote:
> On Tue, 9 Apr 2019 12:44:58 +0200
> Cornelia Huck wrote:
>
> > On Fri, 5 Apr 2019 01:16:14 +0200
> > Halil Pasic wrote:
> > > @@ -886,6 +888,8 @@ static const struct attribute_group
> > > *cssdev_attr_groups[] = {
> > > NULL,
> > >
On Mon, Apr 08, 2019 at 07:45:27PM -0400, Si-Wei Liu wrote:
> When a netdev appears through hot plug then gets enslaved by a failover
> master that is already up and running, the slave will be opened
> right away after getting enslaved. Today there's a race that userspace
> (udev) may fail to
On 4/8/2019 4:45 PM, Si-Wei Liu wrote:
When a netdev appears through hot plug then gets enslaved by a failover
master that is already up and running, the slave will be opened
right away after getting enslaved. Today there's a race that userspace
(udev) may fail to rename the slave if the kernel
On Tue, 9 Apr 2019 15:23:13 +0200
Halil Pasic wrote:
> On Tue, 9 Apr 2019 15:01:20 +0200
> Cornelia Huck wrote:
>
> > On Tue, 9 Apr 2019 13:29:27 +0200
> > Halil Pasic wrote:
> >
> > > On Tue, 9 Apr 2019 11:57:43 +0200
> > > Cornelia Huck wrote:
> > >
> > > > On Fri, 5 Apr 2019
Am 08.04.19 um 13:59 schrieb Thomas Zimmermann:
[SNIP]
> If not for TTM, what would be the alternative? One VMA manager per
> memory region per device?
Since everybody vital seems to be on this mail thread anyway, let's use
it a bit for brain storming what a possible replacement for TTM should
On Tue, Apr 09, 2019 at 12:16:47PM +0800, Jason Wang wrote:
> We set the dirty bit by setting up kmaps and accessing the pages through
> the kernel virtual address; this may result in aliases in virtually tagged
> caches that require a dcache flush afterwards.
>
> Cc: Christoph Hellwig
> Cc: James Bottomley
>
On Tue, 9 Apr 2019 13:29:27 +0200
Halil Pasic wrote:
> On Tue, 9 Apr 2019 11:57:43 +0200
> Cornelia Huck wrote:
>
> > On Fri, 5 Apr 2019 01:16:12 +0200
> > Halil Pasic wrote:
> >
> > > Currently we have a problem if a virtio-ccw device has
> > > VIRTIO_F_IOMMU_PLATFORM.
> >
> > Can
On Tue, Apr 09, 2019 at 12:10:25PM +0800, Jason Wang wrote:
> We used to accept a zero-size iova range, which leads to an infinite loop
> in translate_desc(). Fix this by failing the request in this case.
>
> Reported-by: syzbot+d21e6e297322a900c...@syzkaller.appspotmail.com
> Fixes: 6b1e6cc7
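The fix quoted above can be modelled outside the kernel. This is a sketch of the idea only: reject a zero-size (or address-wrapping) range when the IOTLB is updated, because the translation loop advances by the size of the matched range each iteration and a zero-size entry would never make progress. The struct and function names are illustrative, not the actual vhost code.

```c
#include <errno.h>

struct iotlb_msg {
    unsigned long long iova;
    unsigned long long size;
};

/* Validate an IOTLB update request; returns 0 on success or -EFAULT
 * for ranges that must be rejected up front. */
static int iotlb_update(const struct iotlb_msg *msg)
{
    if (msg->size == 0)
        return -EFAULT;                  /* zero-size range: fail request */
    if (msg->iova > ~0ULL - (msg->size - 1))
        return -EFAULT;                  /* range wraps the address space */
    return 0;                            /* accept the mapping */
}
```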
> +++ b/arch/s390/include/asm/dma-mapping.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_S390_DMA_MAPPING_H
> +#define _ASM_S390_DMA_MAPPING_H
> +
> +#include
> +
> +static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type
> *bus)
> +{
> +
The __io_virt() macro is not available on all architectures, so cirrus
can't simply pass a pointer to io memory down to the format conversion
helpers. The format conversion helpers must use memcpy_toio() instead.
Add a convert_lines_toio() variant which does just that. Switch the
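The approach described above can be sketched as: convert each scanline into an ordinary staging buffer first, then push the finished line to the destination with one bulk per-line copy. In the kernel that final copy would be memcpy_toio(); plain memcpy() stands in for it here, and the identity "conversion" is a placeholder for the real format conversion, so treat this as a model of the structure only.

```c
#include <string.h>

static void convert_lines_toio_model(unsigned char *dst, size_t dst_pitch,
                                     const unsigned char *src, size_t src_pitch,
                                     size_t line_bytes, size_t lines)
{
    unsigned char staging[4096];

    if (line_bytes > sizeof(staging))
        return;                          /* sketch: real code would chunk */

    for (size_t y = 0; y < lines; y++) {
        /* format conversion into regular memory (identity here) */
        memcpy(staging, src + y * src_pitch, line_bytes);
        /* single per-line copy to the (possibly io) destination;
         * stands in for memcpy_toio() */
        memcpy(dst + y * dst_pitch, staging, line_bytes);
    }
}
```

The point is that the conversion helper never dereferences io memory directly; it only ever hands finished lines to a toio-style copy.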
On Fri, 5 Apr 2019 01:16:14 +0200
Halil Pasic wrote:
> To support protected virtualization cio will need to make sure the
> memory used for communication with the hypervisor is DMA memory.
>
> Let us introduce a DMA pool to cio that will help us in allocating
missing 'and'
> deallocating
On Fri, 5 Apr 2019 01:16:13 +0200
Halil Pasic wrote:
> On s390 protected virtualization guests also have to use bounce I/O
> buffers. That requires some plumbing.
>
> Let us make sure any device using DMA API accordingly is spared from the
> problems that hypervisor attempting I/O to a
On Fri, 5 Apr 2019 01:16:12 +0200
Halil Pasic wrote:
> Currently we have a problem if a virtio-ccw device has
> VIRTIO_F_IOMMU_PLATFORM.
Can you please describe what the actual problem is?
> In future we do want to support DMA API with
> virtio-ccw.
>
> Let us do the plumbing, so the
On 2019/4/8 8:33 PM, Cornelia Huck wrote:
vring_create_virtqueue() allows the caller to specify via the
may_reduce_num parameter whether the vring code is allowed to
allocate a smaller ring than specified.
However, the split ring allocation code tries to allocate a
smaller ring on allocation
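The may_reduce_num behaviour being discussed can be modelled as a retry loop: on allocation failure, halve the requested ring size and try again, but only if the caller permitted a smaller ring. In this hedged sketch, `alloc_limit` stands in for "the largest ring the allocator can currently satisfy"; the real code attempts an actual DMA allocation each round instead.

```c
#include <stdbool.h>
#include <stddef.h>

static size_t pick_ring_size(size_t num, bool may_reduce_num,
                             size_t alloc_limit)
{
    for (; num > 0; num /= 2) {
        if (num <= alloc_limit)
            return num;              /* allocation would succeed */
        if (!may_reduce_num)
            break;                   /* caller wants exactly num */
    }
    return 0;                        /* allocation failed */
}
```

The issue under discussion is which allocation paths honour the flag; the loop itself is this simple.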
Hi,
> > Should we add something like DRM_PRIME_CAP_SAME_DEVICE?
>
> Yeah I expect we need some sort of same device only capability, so
> that dri3 userspace can work.
>
> If we just fail importing in these cases what happens? userspace just
> gets confused, I know we used to print a backtrace
On 2019/4/8 5:44 PM, Stefan Hajnoczi wrote:
On Mon, Apr 08, 2019 at 02:43:28PM +0800, Jason Wang wrote:
Another thing that may help is to implement sendpage(), which will greatly
improve the performance.
I can't find documentation for ->sendpage(). Is the idea that you get a
struct page for
Hi,
> > The qemu stdvga (bochs driver) has 16 MB vram by default and can be
> > configured to have up to 256 MB. Plenty of room even for multiple 4k
> > framebuffers if needed. So for the bochs driver all the ttm bo
> > migration logic is not needed, it could just store everything in vram.
>
On Tue, 9 Apr 2019 at 17:12, kra...@redhat.com wrote:
>
> Hi,
>
> > If not for TTM, what would be the alternative? One VMA manager per
> > memory region per device?
>
> Depends pretty much on the device.
>
> The cirrus is a display device with only 4 MB of vram. You can't fit
> much in there.
Hi,
> If not for TTM, what would be the alternative? One VMA manager per
> memory region per device?
Depends pretty much on the device.
The cirrus is a display device with only 4 MB of vram. You can't fit
much in there. A single 1024x768 @ 24bpp framebuffer needs more than 50%
of the video
Hi,
On 08-04-19 11:21, Thomas Zimmermann wrote:
Signed-off-by: Thomas Zimmermann
Patch looks good to me (although perhaps it needs a commit msg):
Reviewed-by: Hans de Goede
Regards,
Hans
---
drivers/gpu/drm/vboxvideo/Kconfig| 1 +
drivers/gpu/drm/vboxvideo/vbox_drv.h | 6
Hi,
On 08-04-19 11:21, Thomas Zimmermann wrote:
This patch replaces |struct vbox_bo| and its helpers with the generic
implementation of |struct drm_gem_ttm_object|. The only change in
semantics is that _bo_driver.verify_access() now does the actual
verification.
Signed-off-by: Thomas
On Tue, Apr 09, 2019 at 02:01:33PM +1000, Dave Airlie wrote:
> On Sat, 12 Jan 2019 at 07:13, Dave Airlie wrote:
> >
> > On Thu, 10 Jan 2019 at 18:17, Gerd Hoffmann wrote:
> > >
> > > Also set prime_handle_to_fd and prime_fd_to_handle to NULL,
> > > so drm will not advertise