[PATCH v2] drm/qxl: don't take vga ports on rev5+

2020-08-07 Thread Gerd Hoffmann
qemu 5.0 introduces a new qxl hardware revision 5.  Unlike revision 4
(and below), the device doesn't switch back into vga compatibility mode
when someone touches the vga ports, so we no longer have to reserve the
vga ports to prevent that from happening.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
index 13872b882775..6e7f16f4cec7 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.c
+++ b/drivers/gpu/drm/qxl/qxl_drv.c
@@ -96,7 +96,7 @@ qxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto disable_pci;
 
-   if (is_vga(pdev)) {
+   if (is_vga(pdev) && pdev->revision < 5) {
ret = vga_get_interruptible(pdev, VGA_RSRC_LEGACY_IO);
if (ret) {
DRM_ERROR("can't get legacy vga ioports\n");
@@ -127,7 +127,7 @@ qxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 unload:
qxl_device_fini(qdev);
 put_vga:
-   if (is_vga(pdev))
+   if (is_vga(pdev) && pdev->revision < 5)
vga_put(pdev, VGA_RSRC_LEGACY_IO);
 disable_pci:
pci_disable_device(pdev);
@@ -155,7 +155,7 @@ qxl_pci_remove(struct pci_dev *pdev)
 
drm_dev_unregister(dev);
drm_atomic_helper_shutdown(dev);
-   if (is_vga(pdev))
+   if (is_vga(pdev) && pdev->revision < 5)
vga_put(pdev, VGA_RSRC_LEGACY_IO);
 }
 
-- 
2.18.4



[PATCH] drm/virtio: fix unblank

2020-08-07 Thread Gerd Hoffmann
When going through a disable/enable cycle without changing the
framebuffer the optimization added by commit 3954ff10e06e ("drm/virtio:
skip set_scanout if framebuffer didn't change") causes the screen to stay
blank.  Add a bool to force an update to fix that.

Cc: 1882...@bugs.launchpad.net
Fixes: 3954ff10e06e ("drm/virtio: skip set_scanout if framebuffer didn't change")
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h | 1 +
 drivers/gpu/drm/virtio/virtgpu_display.c | 1 +
 drivers/gpu/drm/virtio/virtgpu_plane.c   | 4 +++-
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 9ff9f4ac0522..7b0c319f23c9 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -138,6 +138,7 @@ struct virtio_gpu_output {
int cur_x;
int cur_y;
bool enabled;
+   bool need_update;
 };
 #define drm_crtc_to_virtio_gpu_output(x) \
container_of(x, struct virtio_gpu_output, crtc)
diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
index cc7fd957a307..378be5956b30 100644
--- a/drivers/gpu/drm/virtio/virtgpu_display.c
+++ b/drivers/gpu/drm/virtio/virtgpu_display.c
@@ -100,6 +100,7 @@ static void virtio_gpu_crtc_atomic_enable(struct drm_crtc *crtc,
struct virtio_gpu_output *output = drm_crtc_to_virtio_gpu_output(crtc);
 
output->enabled = true;
+   output->need_update = true;
 }
 
 static void virtio_gpu_crtc_atomic_disable(struct drm_crtc *crtc,
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 52d24179bcec..5948031a9ce8 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -163,7 +163,8 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
plane->state->src_w != old_state->src_w ||
plane->state->src_h != old_state->src_h ||
plane->state->src_x != old_state->src_x ||
-   plane->state->src_y != old_state->src_y) {
+   plane->state->src_y != old_state->src_y ||
+   output->need_update) {
DRM_DEBUG("handle 0x%x, crtc %dx%d+%d+%d, src %dx%d+%d+%d\n",
  bo->hw_res_handle,
  plane->state->crtc_w, plane->state->crtc_h,
@@ -178,6 +179,7 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
   plane->state->src_h >> 16,
   plane->state->src_x >> 16,
   plane->state->src_y >> 16);
+   output->need_update = false;
}
 
virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle,
-- 
2.18.4



Re: [PATCH 32/59] drm/qxl/ttm: use new takedown path

2020-08-04 Thread Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:56:05PM +1000, Dave Airlie wrote:
> From: Dave Airlie 
> 
> Signed-off-by: Dave Airlie 

Reviewed-by: Gerd Hoffmann 



Re: [PATCH 40/59] drm/qxl/ttm: use wrapper to access memory manager

2020-08-04 Thread Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:56:13PM +1000, Dave Airlie wrote:
> From: Dave Airlie 
> 
> Signed-off-by: Dave Airlie 

Reviewed-by: Gerd Hoffmann 



Re: [PATCH 18/59] drm/vram_helper: use new ttm manager init function

2020-08-04 Thread Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:55:51PM +1000, Dave Airlie wrote:
> From: Dave Airlie 
> 
> Signed-off-by: Dave Airlie 

Reviewed-by: Gerd Hoffmann 



Re: [PATCH 17/59] drm/qxl/ttm: use new init path for manager

2020-08-04 Thread Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:55:50PM +1000, Dave Airlie wrote:
> From: Dave Airlie 
> 
> Signed-off-by: Dave Airlie 

Reviewed-by: Gerd Hoffmann 



Re: [PATCH 12/59] drm/vram-helper: call the ttm manager debug function

2020-08-04 Thread Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:55:45PM +1000, Dave Airlie wrote:
> From: Dave Airlie 
> 
> This code was assuming there was a drm_mm here; don't do
> that, call the correct API.
> 
> v2: use the new exported interface.

Reviewed-by: Gerd Hoffmann 



Re: [PATCH 11/59] drm/qxl/ttm: call ttm manager debug (v2)

2020-08-04 Thread Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:55:44PM +1000, Dave Airlie wrote:
> From: Dave Airlie 
> 
> v2: use the new exported interface.
> This code was poking inside a struct and assuming it was a drm_mm
> at the start. Call the proper API.

[ also adds some debug info about the ttm manager to the file ]

Reviewed-by: Gerd Hoffmann 



Re: [PATCH 04/59] qxl/ttm: drop the unusued no wait flag to reserve function

2020-08-04 Thread Gerd Hoffmann
On Tue, Aug 04, 2020 at 12:55:37PM +1000, Dave Airlie wrote:
> From: Dave Airlie 
> 
> Signed-off-by: Dave Airlie 

Reviewed-by: Gerd Hoffmann 



Re: [PATCH 2/2] drm/virtio: Remove open-coded commit-tail function

2020-07-09 Thread Gerd Hoffmann
On Thu, Jul 09, 2020 at 02:33:39PM +0200, Daniel Vetter wrote:
> Exactly matches the one in the helpers.
> 
> This avoids me having to roll out dma-fence critical section
> annotations to this copy.
> 
> Signed-off-by: Daniel Vetter 
> Cc: David Airlie 
> Cc: Gerd Hoffmann 
> Cc: virtualizat...@lists.linux-foundation.org

Acked-by: Gerd Hoffmann 



Re: Panic booting qemu-system-sparc64 with bochs_drm

2020-07-07 Thread Gerd Hoffmann
> Thanks Gerd - I've just tested the diff below with memcpy_toio() and that 
> works too:
> 
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 5609e164805f..4d05b0ab1592 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -399,7 +399,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> unsigned int y;
> 
> for (y = clip->y1; y < clip->y2; y++) {
> -   memcpy(dst, src, len);
> +   memcpy_toio(dst, src, len);
> src += fb->pitches[0];
> dst += fb->pitches[0];
> }
> 
> Presumably there is some existing mechanism that ensures SPARC will always 
> choose a
> shadow framebuffer?

bochs-drm always runs with a shadow framebuffer (which allows swapping
the real framebuffer into and out of vram as needed).  With other
drivers this is in the hands of the driver.  It might not be needed;
virtio-gpu, for example, uses normal ram as framebuffer storage.

take care,
  Gerd



Re: Panic booting qemu-system-sparc64 with bochs_drm

2020-07-07 Thread Gerd Hoffmann
> Yes, that's correct - I can confirm that the simplified diff below works:
> 
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 5609e164805f..83af05fac604 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -399,7 +399,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> unsigned int y;
> 
> for (y = clip->y1; y < clip->y2; y++) {
> -   memcpy(dst, src, len);
> +   fb_memcpy_tofb(dst, src, len);

fb_memcpy_tofb is #defined to sbus_memcpy_toio on sparc, which looks
wrong to me given that this is a pci device, not a sbus one.  sparc
also has memcpy_toio, which looks like the better fit.

There are blit helpers in drm_format_helper.c which already use
memcpy_toio(), so I guess we should do the same here.  I'm not fully
sure we can use memcpy_toio() unconditionally here, but given that a
shadow framebuffer makes sense only when the real framebuffer is not
in normal ram, we probably can.

take care,
  Gerd



Re: [RFC PATCH 1/2] drm/hyperv: Add DRM driver for hyperv synthetic video device

2020-06-23 Thread Gerd Hoffmann
  Hi,

> > > > +   msg->vram.user_ctx = msg->vram.vram_gpa = vram_pp;
> > > > +   msg->vram.is_vram_gpa_specified = 1;
> > > > +   synthvid_send(hdev, msg);
> > > 
> > > That suggests it is possible to define multiple framebuffers in vram,
> > > then pageflip by setting vram.vram_gpa.  If that is the case I'm
> > > wondering whether vram helpers might be a better fit?  You don't have
> > > to blit each and every display update then.
> > 
> > Thanks for the review. Unfortunately only the first vmbus message takes
> > effect and subsequent calls are ignored. I originally implemented this
> > using vram helpers but I figured out that calling this vmbus message
> > again won't change the vram location.

/me notices there also is user_ctx.  What is this?

> I think that needs a very big comment. Maybe even enforce that with a
> "vram_gpa_set" boolean in the device structure, and complain if we try to
> do that twice.
> 
> Also I guess congrats to microsoft for defining things like that :-/

I would be kind of surprised if the virtual device doesn't support
pageflips.  Maybe setting vram_gpa just isn't the correct way to do
it.  Is there a specification available?

There are a number of microsoft folks in Cc:  Anyone willing to comment?

thanks,
  Gerd

PS: And, yes, in case pageflips really don't work, going with shmem
helpers + blits is reasonable.
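
For reference, a rough sketch of the guard Daniel suggested above; the
vram_gpa_set flag and the surrounding structure layout are assumptions
for illustration, not the actual driver code:

	static int synthvid_update_vram_location(struct hv_device *hdev,
						 phys_addr_t vram_pp)
	{
		struct hyperv_device *hv = hv_get_drvdata(hdev);

		/* the device honors only the first SYNTHVID_VRAM_LOCATION */
		if (WARN_ONCE(hv->vram_gpa_set,
			      "vram location can be set only once"))
			return -EINVAL;
		hv->vram_gpa_set = true;

		/* ... build and send the vmbus message as in the patch ... */
		return 0;
	}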



Re: [RFC PATCH 1/2] drm/hyperv: Add DRM driver for hyperv synthetic video device

2020-06-22 Thread Gerd Hoffmann
  Hi,

> +/* Should be done only once during init and resume */
> +static int synthvid_update_vram_location(struct hv_device *hdev,
> +   phys_addr_t vram_pp)
> +{
> + struct hyperv_device *hv = hv_get_drvdata(hdev);
> + struct synthvid_msg *msg = (struct synthvid_msg *)hv->init_buf;
> + unsigned long t;
> + int ret = 0;
> +
> + memset(msg, 0, sizeof(struct synthvid_msg));
> + msg->vid_hdr.type = SYNTHVID_VRAM_LOCATION;
> + msg->vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
> + sizeof(struct synthvid_vram_location);
> + msg->vram.user_ctx = msg->vram.vram_gpa = vram_pp;
> + msg->vram.is_vram_gpa_specified = 1;
> + synthvid_send(hdev, msg);

That suggests it is possible to define multiple framebuffers in vram,
then pageflip by setting vram.vram_gpa.  If that is the case I'm
wondering whether vram helpers might be a better fit?  You don't have
to blit each and every display update then.

FYI: cirrus goes the blit route because (a) the amount of vram is very
small, so trying to store multiple framebuffers there is out of the
question, and (b) cirrus converts formats on the fly to hide some
hardware oddities.

take care,
  Gerd



Re: [PATCH 1/2] drm/shmem: add support for per object dma api operations

2020-06-15 Thread Gerd Hoffmann
On Fri, Jun 12, 2020 at 11:54:54AM -0700, Gurchetan Singh wrote:

> Plus, I just realized the virtio dma ops and ops used by drm shmem are
> different, so virtio would unconditionally have to skip the
> shmem path.  Perhaps the best policy is to revert d323bb44e4d2?

Reverting d323bb44e4d2 should work given that virtio-gpu doesn't support
dma-buf imports, but when doing so please add a comment to the code
explaining why virtio-gpu handles this differently than everybody else.

thanks,
  Gerd



[PATCH] drm/virtio: fix unblank

2020-06-12 Thread Gerd Hoffmann
When going through a disable/enable cycle without changing the framebuffer
the optimization added by commit 3954ff10e06e causes the screen to stay
blank.  Add a bool to force an update to fix that.

Cc: 1882...@bugs.launchpad.net
Fixes: 3954ff10e06e ("drm/virtio: skip set_scanout if framebuffer didn't change")
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h | 1 +
 drivers/gpu/drm/virtio/virtgpu_display.c | 1 +
 drivers/gpu/drm/virtio/virtgpu_plane.c   | 4 +++-
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 7879ff58236f..6d5410d5dd84 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -138,6 +138,7 @@ struct virtio_gpu_output {
int cur_x;
int cur_y;
bool enabled;
+   bool need_update;
 };
 #define drm_crtc_to_virtio_gpu_output(x) \
container_of(x, struct virtio_gpu_output, crtc)
diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
index 2b7e6ae65546..44e9c7b874f5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_display.c
+++ b/drivers/gpu/drm/virtio/virtgpu_display.c
@@ -99,6 +99,7 @@ static void virtio_gpu_crtc_atomic_enable(struct drm_crtc *crtc,
struct virtio_gpu_output *output = drm_crtc_to_virtio_gpu_output(crtc);
 
output->enabled = true;
+   output->need_update = true;
 }
 
 static void virtio_gpu_crtc_atomic_disable(struct drm_crtc *crtc,
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 52d24179bcec..5948031a9ce8 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -163,7 +163,8 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
plane->state->src_w != old_state->src_w ||
plane->state->src_h != old_state->src_h ||
plane->state->src_x != old_state->src_x ||
-   plane->state->src_y != old_state->src_y) {
+   plane->state->src_y != old_state->src_y ||
+   output->need_update) {
DRM_DEBUG("handle 0x%x, crtc %dx%d+%d+%d, src %dx%d+%d+%d\n",
  bo->hw_res_handle,
  plane->state->crtc_w, plane->state->crtc_h,
@@ -178,6 +179,7 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
   plane->state->src_h >> 16,
   plane->state->src_x >> 16,
   plane->state->src_y >> 16);
+   output->need_update = false;
}
 
virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle,
-- 
2.18.4



Re: [PATCH 1/2] drm/shmem: add support for per object dma api operations

2020-06-12 Thread Gerd Hoffmann
On Fri, Jun 12, 2020 at 11:47:55AM +0200, Thomas Zimmermann wrote:
> Hi
> 
> Am 12.06.20 um 03:36 schrieb Gurchetan Singh:
> > This is useful for the next patch.  Also, should we only unmap the
> > amount entries that we mapped with the dma-api?
> 
> It looks like you're moving virtio code into shmem.

Well, not really.

virtio has -- for historical reasons -- the oddity that it may or may
not need to dma_map resources, depending on device configuration.
Initially virtio went with "it's just a vm, let's simply operate on
physical ram addresses".  That shortcut turned out to be a bad idea
later on, especially with the arrival of iommu emulation support in
qemu.  But we couldn't just scratch it for backward compatibility
reasons.  See virtio_has_iommu_quirk().

This just allows enabling/disabling dma_map, I guess to fix some fallout
from recent shmem helper changes (Gurchetan, that kind of stuff should
be mentioned in cover letter and commit messages).

I'm not sure virtio actually needs that patch though.  I *think* doing
the dma_map+dma_unmap unconditionally, but then ignoring the result in
case we don't need it, should work.  And it shouldn't be a horrible
performance hit either: in case we don't have a (virtual) iommu in the
VM, dma mapping is essentially a nop ...
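
Something along these lines (an untested sketch; the fill_* helpers are
made-up names standing in for the existing object setup code):

	bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
	int mapped;

	/* always create the dma mapping, it is (nearly) free without iommu */
	mapped = dma_map_sg(vgdev->vdev->dev.parent, shmem->pages->sgl,
			    shmem->pages->nents, DMA_TO_DEVICE);
	if (!mapped)
		return -ENOMEM;

	/* ... but hand dma addresses to the host only when needed */
	if (use_dma_api)
		fill_mem_entries_from_dma(shmem);   /* made-up helper */
	else
		fill_mem_entries_from_pages(shmem); /* made-up helper */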

take care,
  Gerd



Re: [PATCH v4 0/3] Support virtio cross-device resources

2020-05-28 Thread Gerd Hoffmann
On Tue, May 26, 2020 at 07:58:08PM +0900, David Stevens wrote:
> This patchset implements the current proposal for virtio cross-device
> resource sharing [1]. It will be used to import virtio resources into
> the virtio-video driver currently under discussion [2]. The patch
> under consideration to add support in the virtio-video driver is [3].
> It uses the APIs from v3 of this series, but the changes to update it
> are relatively minor.
> 
> This patchset adds a new flavor of dma-bufs that supports querying the
> underlying virtio object UUID, as well as adding support for exporting
> resources from virtgpu.
> 
> [1] https://markmail.org/thread/2ypjt5cfeu3m6lxu
> [2] https://markmail.org/thread/p5d3k566srtdtute
> [3] https://markmail.org/thread/j4xlqaaim266qpks
> 
> v3 -> v4 changes:
>  - Replace dma-buf hooks with virtio dma-buf from v1.
>  - Remove virtio_attach callback, as the work that had been done
>in that callback is now done on dma-buf export. The documented
>requirement that get_uuid only be called on attached virtio
>dma-bufs is also removed.
>  - Rebase and add call to virtio_gpu_notify for ASSIGN_UUID.
> 
> David Stevens (3):
>   virtio: add dma-buf support for exported objects
>   virtio-gpu: add VIRTIO_GPU_F_RESOURCE_UUID feature
>   drm/virtio: Support virtgpu exported resources

Looks all sane to me.  mst, have you looked at the virtio core changes?
How are we going to merge this?  If you ack I can merge via
drm-misc-next.  Merging through the virtio queue would be fine too.

thanks,
  Gerd



Re: Potential Memory Leak Bugs in drivers/gpu/drm/virtio/virtgpu_vq.c (Linux 5.6).

2020-05-28 Thread Gerd Hoffmann
On Thu, May 28, 2020 at 03:57:05PM +0800, Dongyang Zhan wrote:
> Hi,
> My name is Dongyang Zhan, I am a security researcher.
> Currently, I found two possible memory bugs in
> drivers/gpu/drm/virtio/virtgpu_vq.c (Linux 5.6).
> I hope you can help me to confirm them. Thank you.

Sorry.  Not confirmed.  You should do a better job verifying your
claims before bugging people.

> The first one is that resp_buf will not be released in
> virtio_gpu_cmd_get_display_info() with the condition
> (resp_size <= MAX_INLINE_RESP_SIZE) in virtio_gpu_alloc_cmd_resp().

In that code path resp_size equals sizeof(struct
virtio_gpu_resp_display_info) which is larger than MAX_INLINE_RESP_SIZE
so the condition is never true and no leak happens.
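
Paraphrased from the v5.6 code, the call site in question looks like
this; resp_size is always the full struct size, well above
MAX_INLINE_RESP_SIZE, so the inline-response branch that would skip the
resp_buf allocation never triggers here:

	resp_buf = kzalloc(sizeof(struct virtio_gpu_resp_display_info),
			   GFP_KERNEL);
	if (!resp_buf)
		return -ENOMEM;

	cmd_p = virtio_gpu_alloc_cmd_resp(vgdev,
					  &virtio_gpu_cmd_get_display_info_cb,
					  &vbuf, sizeof(*cmd_p),
					  sizeof(struct virtio_gpu_resp_display_info),
					  resp_buf);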

take care,
  Gerd



Re: [PATCH 1/2] drm/shmem: Use cached mappings by default

2020-05-18 Thread Gerd Hoffmann
On Mon, May 18, 2020 at 10:50:15AM +0200, Thomas Zimmermann wrote:
> Hi Gerd
> 
> Am 18.05.20 um 10:23 schrieb Gerd Hoffmann:
> >>> $ git grep drm_gem_shmem_mmap
> >>>
> >>> We also need correct access from userspace, otherwise the gpu is going to
> >>> be sad.
> >>
> >> I've been thinking about this, and I think it means that we can never
> >> have cached mappings anywhere. Even if shmem supports it internally for
> >> most drivers, as soon as the page are exported, the importer could
> >> expect uncached memory.
> > 
> > The importer should not expect anything but call dma-buf ops so the
> > exporter has a chance to handle this correctly.
> 
> I have the following case in mind: Suppose the exporter maps cached
> pages and the importer expects uncached pages for DMA. There is
> map_dma_buf/unmap_dma_buf, which can implement a cache flush for the
> cached pages. Is it guaranteed that the importer calls this around each
> DMA operation?

I think the importer is supposed to do that, but I wouldn't be surprised
if there are cases in tree where this isn't implemented correctly ...
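
For the record, the expected importer pattern looks roughly like this
(error handling trimmed; attach comes from dma_buf_attach()), giving the
exporter's (un)map_dma_buf callbacks a chance to flush caches around
each DMA operation:

	struct sg_table *sgt;

	sgt = dma_buf_map_attachment(attach, DMA_FROM_DEVICE);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);

	/* ... program the device and run the DMA ... */

	dma_buf_unmap_attachment(attach, sgt, DMA_FROM_DEVICE);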

take care,
  Gerd



Re: [PATCH 1/2] drm/shmem: Use cached mappings by default

2020-05-18 Thread Gerd Hoffmann
> > $ git grep drm_gem_shmem_mmap
> > 
> > We also need correct access from userspace, otherwise the gpu is going to
> > be sad.
> 
> I've been thinking about this, and I think it means that we can never
> have cached mappings anywhere. Even if shmem supports it internally for
> most drivers, as soon as the page are exported, the importer could
> expect uncached memory.

The importer should not expect anything but call dma-buf ops so the
exporter has a chance to handle this correctly.

(Yes, we don't get this right everywhere; some drivers take the dma-buf's
list of pages and do their own thing ...).

take care,
  Gerd



Re: [PATCH v2 28/38] drm/qxl: remove _unlocked suffix in drm_object_put_unlocked

2020-05-15 Thread Gerd Hoffmann
On Fri, May 15, 2020 at 10:51:08AM +0100, Emil Velikov wrote:
> From: Emil Velikov 
> 
> Spelling out _unlocked for each and every driver is annoying.
> Especially if we consider how many drivers do not know (or need to)
> about the horror stories involving struct_mutex.
> 
> Just drop the suffix. It makes the API cleaner.

Acked-by: Gerd Hoffmann 



Re: [PATCH v2 35/38] drm/virtio: remove _unlocked suffix in drm_object_put_unlocked

2020-05-15 Thread Gerd Hoffmann
On Fri, May 15, 2020 at 10:51:15AM +0100, Emil Velikov wrote:
> From: Emil Velikov 
> 
> Spelling out _unlocked for each and every driver is annoying.
> Especially if we consider how many drivers do not know (or need to)
> about the horror stories involving struct_mutex.
> 
> Just drop the suffix. It makes the API cleaner.

Acked-by: Gerd Hoffmann 



Re: [PATCH v5 30/38] dmabuf: fix common struct sg_table related issues

2020-05-15 Thread Gerd Hoffmann
> diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
> index acb26c6..89e293b 100644
> --- a/drivers/dma-buf/udmabuf.c
> +++ b/drivers/dma-buf/udmabuf.c
> @@ -63,10 +63,9 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
>   GFP_KERNEL);
>   if (ret < 0)
>   goto err;
> - if (!dma_map_sg(dev, sg->sgl, sg->nents, direction)) {
> - ret = -EINVAL;
> + ret = dma_map_sgtable(dev, sg, direction, 0);
> + if (ret < 0)
>   goto err;
> - }
>   return sg;
>  
>  err:
> @@ -78,7 +77,7 @@ static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
>  static void put_sg_table(struct device *dev, struct sg_table *sg,
>enum dma_data_direction direction)
>  {
> - dma_unmap_sg(dev, sg->sgl, sg->nents, direction);
> + dma_unmap_sgtable(dev, sg, direction, 0);
>   sg_free_table(sg);
>   kfree(sg);
>  }

Easy, straightforward conversion.

Acked-by: Gerd Hoffmann 

take care,
  Gerd



Re: [PATCH v5 25/38] drm: virtio: fix common struct sg_table related issues

2020-05-15 Thread Gerd Hoffmann
On Wed, May 13, 2020 at 03:32:32PM +0200, Marek Szyprowski wrote:
> The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
> returns the number of the created entries in the DMA address space.
> However the subsequent calls to the dma_sync_sg_for_{device,cpu}() and
> dma_unmap_sg must be called with the original number of the entries
> passed to the dma_map_sg().
> 
> struct sg_table is a common structure used for describing a non-contiguous
> memory buffer, used commonly in the DRM and graphics subsystems. It
> consists of a scatterlist with memory pages and DMA addresses (sgl entry),
> as well as the number of scatterlist entries: CPU pages (orig_nents entry)
> and DMA mapped pages (nents entry).
> 
> It turned out that it was a common mistake to misuse nents and orig_nents
> entries, calling DMA-mapping functions with a wrong number of entries or
> ignoring the number of mapped entries returned by the dma_map_sg()
> function.
> 
> To avoid such issues, lets use a common dma-mapping wrappers operating
> directly on the struct sg_table objects and use scatterlist page
> iterators where possible. This, almost always, hides references to the
> nents and orig_nents entries, making the code robust, easier to follow
> and copy/paste safe.

Looks all sane.

Acked-by: Gerd Hoffmann 

take care,
  Gerd



Re: [PATCH v3 1/4] dma-buf: add support for virtio exported objects

2020-05-14 Thread Gerd Hoffmann
  Hi,

> - for the runtime upcasting the usual approach is to check the ->ops
> pointer. Which means that would need to be the same for all virtio
> dma_bufs, which might get a bit awkward. But I'd really prefer we not
> add allocator specific stuff like this to dma-buf.

This is exactly the problem, it gets messy quickly, also when it comes
to using the drm_prime.c helpers ...

take care,
  Gerd



Re: [PATCH 01/17] drm/mgag200: Remove HW cursor

2020-04-30 Thread Gerd Hoffmann
On Wed, Apr 29, 2020 at 07:51:07PM +0200, Sam Ravnborg wrote:
> On Wed, Apr 29, 2020 at 04:32:22PM +0200, Thomas Zimmermann wrote:
> > The HW cursor of Matrox G200 cards only supports a 16-color palette
> > format. Universal planes require at least ARGB or a similar component-
> > based format. Converting a cursor image from ARGB to 16 colors does not
> > produce pleasant-looking results in general, so remove the HW cursor.
> 
> What impact does this have on usability?
> Does the cursor behaviour stay the same or?

xorg/wayland switch to software cursor then.  Shouldn't be a big
difference.  If you wanna check how userspace behaves without g200
hardware you can try qemu.  stdvga (bochs-drm.ko) has no hardware
cursor, virtio-vga (virtio-gpu.ko) has a hardware cursor.

> The patch looks fine, but it seems a bit gross ditching cursor support.
> But maybe it is the right choice, I dunno.

cirrus driver does the same.  The hardware has cursor support, but not
rgba, and it is not used.

take care,
  Gerd



Re: [PATCH] drm/qxl: qxl_release use after free

2020-04-29 Thread Gerd Hoffmann
On Wed, Apr 29, 2020 at 12:01:24PM +0300, Vasily Averin wrote:
> qxl_release should not be accessed after qxl_push_*_ring_release() calls:
> the userspace driver can process the submitted command quickly, move qxl_release
> into release_ring, generate interrupt and trigger garbage collector.
> 
> It can lead to crashes in qxl driver or trigger memory corruption
> in some kmalloc-192 slab object
> 
> Gerd Hoffmann proposes to swap the qxl_release_fence_buffer_objects() +
> qxl_push_{cursor,command}_ring_release() calls to close that race window.
> 
> cc: sta...@vger.kernel.org
> Fixes: f64122c1f6ad ("drm: add new QXL driver. (v1.4)")
> Signed-off-by: Vasily Averin 

Pushed to drm-misc-fixes.

thanks,
  Gerd



Re: [PATCH 1/1] drm/qxl: add mutex_lock/mutex_unlock to ensure the order in which resources are released

2020-04-29 Thread Gerd Hoffmann
  Hi,

> > The only way I see for this to happen is that the guest is preempted
> > between qxl_push_{cursor,command}_ring_release() and
> > qxl_release_fence_buffer_objects() calls.  The host can complete the qxl
> > command then, signal the guest, and the IRQ handler calls
> > qxl_release_free_list() before qxl_release_fence_buffer_objects() runs.
> 
> We think the same: qxl_release was freed by garbage collector before
> original thread had called qxl_release_fence_buffer_objects().

Ok, nice, I think we can consider the issue analyzed then ;)

> > Looking through the code I think it should be safe to simply swap the
> > qxl_release_fence_buffer_objects() +
> > qxl_push_{cursor,command}_ring_release() calls to close that race
> > window.  Can you try that and see if it fixes the bug for you?
> 
> I'm going to prepare and test such a patch but I have one question here:
> qxl_push_*_ring_release can be called with interruptible=true and fail.
> How to correctly handle this case? Is the hunk below correct from your POV?

Oh, right, the error code path will be quite different, checking ...

> --- a/drivers/gpu/drm/qxl/qxl_ioctl.c
> +++ b/drivers/gpu/drm/qxl/qxl_ioctl.c
> @@ -261,12 +261,8 @@ static int qxl_process_single_command(struct qxl_device *qdev,
>     apply_surf_reloc(qdev, &reloc_info[i]);
> }
>  
> +   qxl_release_fence_buffer_objects(release);
> ret = qxl_push_command_ring_release(qdev, release, cmd->type, true);
> -   if (ret)
> -   qxl_release_backoff_reserve_list(release);   
> -   else
> -   qxl_release_fence_buffer_objects(release);
> -
>  out_free_bos:
>  out_free_release:
if (ret)
qxl_release_free(qdev, release);

[ code context added ]

qxl_release_free() checks whether a release is fenced and signals the
fence if it is, so nothing waits for the signal forever.  So, yes, I
think qxl_release_free() should clean up the release properly in any
case and the patch chunk should be correct.

take care,
  Gerd



Re: [PATCH 2/2] drm/vram-helper: Alternate between bottom-up and top-down placement

2020-04-29 Thread Gerd Hoffmann
  Hi,

> It's not that easy. Current cursors in ast are statically allocated. As
> soon as you add dynamic cursors into the mix, you'd get OOMs.

Well, with the split you can simply allocate dynamic cursors with
top-down placement to keep them out of the way.  It's not 100% perfect:
the area where the cursors are allocated from has a bit less free vram,
so the split effectively isn't 50/50 but more like 49/51.  But hey,
still a lot better than what we have today.

With two static cursors you can allocate one from TT_VRAM and one from
TT_PRIV so the remaining free vram is the same in both regions.

> If the framebuffer BO goes into VRAM and the cursor BO goes into PRIV,
> pinning the next framebuffer BO during a pageflip would send it to
> VRAM.

Oh, seems my idea outline was a bit too short.  The plan is *not* to
alternate between VRAM and PRIV on allocations.  The plan is to add both
PRIV and VRAM to the placement array when pinning the framebuffer.

ttm can simply place the framebuffers as it pleases and where it wants.
Due to the split it can't do a big blocking allocation in the middle
of vram though.

If you are going to pageflip you should have one framebuffer already
pinned (the one currently displayed).  If that happens to live in TT_VRAM,
ttm should be able to make room in TT_PRIV to pin the new framebuffer
you want to pageflip to, and vice versa.

ttm will only evict unpinned BOs if needed; when running with smaller
framebuffers ttm will happily keep more than two framebuffers in vram.
All fully automatic, without vram helpers having to pay much attention
to the allocation strategy.

> I'm preparing v2 of this patch set. It'll get static cursors out of the
> way and make the feature opt-in.

I think with the split model there is no need to make that optional.

cheers,
  Gerd



Re: [PATCH] drm/qxl: lost qxl_bo_kunmap_atomic_page in qxl_image_init_helper()

2020-04-28 Thread Gerd Hoffmann
On Mon, Apr 27, 2020 at 10:55:27AM +0300, Vasily Averin wrote:
> Signed-off-by: Vasily Averin 
> ---
>  drivers/gpu/drm/qxl/qxl_image.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/gpu/drm/qxl/qxl_image.c b/drivers/gpu/drm/qxl/qxl_image.c
> index 43688ecdd8a0..7270da62fc29 100644
> --- a/drivers/gpu/drm/qxl/qxl_image.c
> +++ b/drivers/gpu/drm/qxl/qxl_image.c
> @@ -212,6 +212,7 @@ qxl_image_init_helper(struct qxl_device *qdev,
>   break;
>   default:
>   DRM_ERROR("unsupported image bit depth\n");
> + qxl_bo_kunmap_atomic_page(qdev, image_bo, ptr);
>   return -EINVAL; /* TODO: cleanup */

I guess you can ditch the TODO comment now, it's done ;)

take care,
  Gerd



Re: [PATCH 2/2] drm/qxl: qxl_release leak in qxl_hw_surface_alloc()

2020-04-28 Thread Gerd Hoffmann
On Mon, Apr 27, 2020 at 08:32:51AM +0300, Vasily Averin wrote:
> Cc: sta...@vger.kernel.org
> Fixes: 8002db6336dd ("qxl: convert qxl driver to proper use for reservations")
> Signed-off-by: Vasily Averin 

Both patches pushed to drm-misc-fixes.

thanks,
  Gerd



Re: [PATCH] drm/virtio: only destroy created contexts

2020-04-28 Thread Gerd Hoffmann
On Wed, Apr 08, 2020 at 04:29:38PM -0700, Gurchetan Singh wrote:
> This can happen if userspace doesn't issue any 3D ioctls before
> closing the DRM fd.
> 
> Fixes: 72b48ae800da ("drm/virtio: enqueue virtio_gpu_create_context after the first 3D ioctl")
> Signed-off-by: Gurchetan Singh 

Pushed to drm-misc-fixes.

thanks,
  Gerd



Re: [PATCH 2/2] drm/vram-helper: Alternate between bottom-up and top-down placement

2020-04-24 Thread Gerd Hoffmann
  Hi,

> At some point one has to choose to switch to top-down, and then back
> again at one of the next BOs. So the current patch effectively splits
> vram into a lower half and an upper half and puts BOs in alternating halves.

Hmm, so maybe just make the split explicit instead of playing tricks
with top_bottom?  Register the lower vram half as TT_VRAM, register the
upper half as TT_PRIV.  That'll make sure you never have a big
allocation in the middle blocking things.  Or maybe add an allocation
strategy flag to ttm which effectively does the same.

take care,
  Gerd



Re: [PATCH 2/2] drm/vram-helper: Alternate between bottom-up and top-down placement

2020-04-23 Thread Gerd Hoffmann
> > I don't think it is that simple.
> > 
> > First:  How will that interact with cursor bo allocations?  IIRC the
> > strategy for them is to allocate top-down, for similar reasons (to avoid
> > small cursor bo allocs fragmenting vram memory).
> 
> In ast, 2 cursor BOs are allocated during driver initialization and kept
> permanently at the vram's top end. I don't know about other drivers.

One-time allocation at init time shouldn't be a problem.

> But cursor BOs are small, so they don't make much of a difference. What
> is needed is space for 2 primary framebuffers during pageflips, with one
> of them pinned. The other framebuffer can be located anywhere.

The problem isn't the size.  The problem is that dynamically allocated
cursor BOs can also fragment vram, especially if top-down allocation is
also used for large framebuffers, so cursor BOs can end up somewhere in
the middle of vram.

> > Second:  I think ttm will move bo's from vram to system only on memory
> > pressure.  So you can still end up with fragmented memory.  To make the
> > scheme with one fb @ top and one @ bottom work reliably you have to be
> > more aggressive about pushing out framebuffers.
> 
> I'm the process of converting mgag200 to atomic modesetting. The given
> example is what I observed. I'm not claiming that the placement scheme
> is perfect, but it is required to get mgag200 working with atomic
> modesetting's pageflip logic. So we're solving a real problem here.

I don't doubt this is a real problem.

> The bug comes from Weston's allocation strategy. Looking at the debug
> output:
> 
> >>   0x-0x057f: 1407: free
> 
> This was fbdev's framebuffer with 1600x900@32bpp
> 
> >>   0x057f-0x0b5b: 1500: used
> 
> This is Weston's framebuffer also with 1600x900@32bpp. But Weston
> allocates an additional, unused 60 scanlines. That is to render with
> tiles of 64x64px, I suppose. fbdev doesn't do that, hence Weston's
> second framebuffer doesn't fit into the free location of the fbdev
> framebuffer.

Sure.  Assume there is just enough vram to fit fbdev and two weston
framebuffers.  fbdev is allocated from the bottom, the first weston fb
from the top, the second weston fb from the bottom again.  fbdev is not
pushed out (no memory pressure yet) so the second weston fb ends up in
the middle of vram, fragmenting things.  And now you are again in the
situation where you might have enough free vram for an allocation but
can't use it due to fragmentation (probably harder to trigger in
practice though).

That's why I would suggest explicitly moving out unpinned framebuffers
(aka fbdev) before pinning a new one (the second weston fb) instead of
depending on ttm moving things out on OOM, to make sure you never
allocate something in the middle of vram.

> > Third:  I'd suggest making top-down allocations depend on current state
> > instead of simply alternating, i.e. if there is a pinned framebuffer @
> > offset 0, then go for top-down.
> 
> That's what the current patch does. If the last pin was at the bottom,
> the next goes to the top. And then the other way around. Without
> alternating between both ends of vram, the problem would occur again when
> fragmentation happens near the top end.

I'd feel better checking the state of my current pins to figure out
whether I should allocate top-down or not, for robustness reasons.

> Looking again at the vram helpers, this functionality could be
> implemented in drm_gem_vram_plane_helper_prepare_fb(). Drivers with
> other placement strategies could implement their own helper for prepare_fb.

vram helpers could also simply offer two prepare_fb variants.

cheers,
  Gerd



Re: [PATCH 2/2] drm/vram-helper: Alternate between bottom-up and top-down placement

2020-04-23 Thread Gerd Hoffmann
On Wed, Apr 22, 2020 at 04:40:55PM +0200, Thomas Zimmermann wrote:
> With limited VRAM available, fragmentation can lead to OOM errors.
> Alternating between bottom-up and top-down placement keeps BOs near the
> ends of the VRAM and the available pages consecutively near the middle.
> 
> A real-world example with 16 MiB of VRAM is shown below.
> 
>   > cat /sys/kernel/debug/dri/0/vram-mm
>   0x-0x057f: 1407: free
>   0x057f-0x0b5b: 1500: used
>   0x0b5b-0x0ff0: 1173: free
> 
> The first free area was the location of the fbdev framebuffer. The used
> area is Weston's current framebuffer of 1500 pages. Weston now cannot
> do a pageflip to another 1500 page-wide framebuffer, even though enough
> pages are available. The patch resolves this problem to
> 
>   > cat /sys/kernel/debug/dri/0/vram-mm
>   0x-0x05dc: 1500: used
>   0x05dc-0x0a14: 1080: free
>   0x0a14-0x0ff0: 1500: used
> 
> with both of Weston's framebuffers located near the ends of the VRAM
> memory.

I don't think it is that simple.

First:  How will that interact with cursor bo allocations?  IIRC the
strategy for them is to allocate top-down, for similar reasons (to avoid
small cursor bo allocs fragmenting vram memory).

Second:  I think ttm will move bo's from vram to system only on memory
pressure.  So you can still end up with fragmented memory.  To make the
scheme with one fb @ top and one @ bottom work reliably you have to be
more aggressive about pushing out framebuffers.

Third:  I'd suggest making top-down allocations depend on current state
instead of simply alternating, i.e. if there is a pinned framebuffer @
offset 0, then go for top-down.

I also think using this scheme should be optional.  In the simplest case
we can allow drivers to opt in.  Or we try to do something clever
automatically: use the strategy only for framebuffers larger than 1/4
or 1/3 of total vram memory (which makes alloc failures due to
fragmentation much more likely).
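
In code the idea could look roughly like this (the helper is
hypothetical, not an existing vram-helper function):

	u32 flags = 0;

	/* hypothetical helper: true if a pinned fb sits at the
	 * bottom of vram */
	if (vram_bottom_is_pinned(vmm))
		flags |= TTM_PL_FLAG_TOPDOWN;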

cheers,
  Gerd



Re: [PATCH v1] drm/bochs: fix an issue of ioremap() leak

2020-04-23 Thread Gerd Hoffmann
  Hi,

> I am a newbie, and Andy gave me some directions for submitting the patch,
> e.g. checking for ioremap leaks. While doing that I found that the bochs
> driver may have similar problems, so I submitted this patch. Then Andy
> said the best option is to switch this driver to the pcim_*() functions
> and drop tons of legacy code. I think we might be able to fix this issue
> first and after that drop the legacy code by using the pcim_*() functions.
> Can you give me some suggestions? Thank you very much!
> thank you very much!

drm has drmm_* functions for that.  Daniel Vetter  has
a patch series pending that switches lots of drivers over, and IIRC it
fixes this bug too.

cheers,
  Gerd



Re: [PATCH 1/1] drm/qxl: add mutex_lock/mutex_unlock to ensure the order in which resources are released.

2020-04-21 Thread Gerd Hoffmann
On Sat, Apr 18, 2020 at 02:39:17PM +0800, Caicai wrote:
> When a qxl resource is released, the list that needs to be released is
> fetched from the linked list ring and cleared. When you empty the list,
> instead of trying to determine whether the ttm buffer object for each
> qxl in the list is locked, you release the qxl object and remove the
> element from the list until the list is empty. It was found that the
> linked list was cleared first, and that the lock on the corresponding
> ttm Bo for the QXL had not been released, so that the new qxl could not
> be locked when it used the TTM.

So the dma_resv_reserve_shared() call in qxl_release_validate_bo() is
unbalanced?  Because the dma_resv_unlock() call in
qxl_release_fence_buffer_objects() never happens due to
qxl_release_free_list() clearing the list beforehand?  Is that correct?

The only way I see for this to happen is that the guest is preempted
between qxl_push_{cursor,command}_ring_release() and
qxl_release_fence_buffer_objects() calls.  The host can complete the qxl
command then, signal the guest, and the IRQ handler calls
qxl_release_free_list() before qxl_release_fence_buffer_objects() runs.

Looking through the code I think it should be safe to simply swap the
qxl_release_fence_buffer_objects() +
qxl_push_{cursor,command}_ring_release() calls to close that race
window.  Can you try that and see if it fixes the bug for you?
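
I.e. roughly this (an untested sketch of the qxl_ioctl.c call site):

	/* fence the reserved buffer objects first ... */
	qxl_release_fence_buffer_objects(release);
	/* ... then let the host see (and possibly complete) the command */
	ret = qxl_push_command_ring_release(qdev, release, cmd->type, true);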

>   if (flush)
> - flush_work(&qdev->gc_work);
> + //can't flush work, it may lead to deadlock
> + usleep_range(500, 1000);
> +

The commit message doesn't explain this chunk.

take care,
  Gerd



Re: [PATCH 59/59] drm/bochs: Remove explicit drm_connector_register

2020-04-21 Thread Gerd Hoffmann
On Wed, Apr 15, 2020 at 09:40:34AM +0200, Daniel Vetter wrote:
> These are leftovers from the old drm_driver->load callback
> upside-down issues. It doesn't do anything for not-hotplugged
> connectors since drm_dev_register takes care of that.
> 
> Signed-off-by: Daniel Vetter 
> Cc: Gerd Hoffmann 
> Cc: virtualizat...@lists.linux-foundation.org

Acked-by: Gerd Hoffmann 



Re: [PATCH 37/59] drm/cirrus: Move to drm/tiny

2020-04-21 Thread Gerd Hoffmann
On Wed, Apr 15, 2020 at 09:40:12AM +0200, Daniel Vetter wrote:
> Because it is.

Indeed.

Acked-by: Gerd Hoffmann 

take care,
  Gerd



Re: [PATCH 00/10] Set up generic fbdev after registering device

2020-04-07 Thread Gerd Hoffmann
On Mon, Apr 06, 2020 at 03:43:55PM +0200, Thomas Zimmermann wrote:
> Generic fbdev emulation is a DRM client. If possible, it should behave
> like userspace clients. Therefore it should not run before the driver
> registered the new DRM device. If the setup function fails, the driver
> should not report an error.

This always has been my expectation, as you might have concluded from
the absence of any qemu guest driver patches in this series ;)

The patches look sane too, so for the series:
Acked-by: Gerd Hoffmann 

cheers,
  Gerd



Re: upstream boot error: KASAN: slab-out-of-bounds Write in virtio_gpu_object_create

2020-04-06 Thread Gerd Hoffmann
  Hi,

> > > +drivers/gpu/drm/virtio/virtgpu_object.c maintainers
> > > Now we have both mainline and linux-next boot broken (linux-next has
> > > been broken for the past 40 days).
> > > No testing of new code happens.
> > >
> > > >  virtio_gpu_object_shmem_init drivers/gpu/drm/virtio/virtgpu_object.c:151 [inline]
> > > >  virtio_gpu_object_create+0x9f3/0xaa0 drivers/gpu/drm/virtio/virtgpu_object.c:230
> >
> > Ah, that one.
> >
> > broken patch: f651c8b05542 ("drm/virtio: factor out the sg_table from virtio_gpu_object")
> > fixed by: 0666a8d7f6a4 ("drm/virtio: fix OOB in virtio_gpu_object_create")
> >
> > Both are in drm-misc-next.  I suspect the fix was added after
> > drm-misc-next was closed for the 5.7 merge window and thus should
> > have been submitted to drm-misc-next-fixes instead.
> >
> > So, what to do now?  Should I cherry-pick 0666a8d7f6a4 into
> > drm-misc-next-fixes?  Or should it go into drm-misc-fixes instead?
> 
> Yup cherry-pick it over, with -x, to drm-misc-next-fixes.
> -Daniel

Done.  So the next linux-next build should be green again.  mainline
should get the fix with the next drm pull (may take a few days).

take care,
  Gerd



Re: upstream boot error: KASAN: slab-out-of-bounds Write in virtio_gpu_object_create

2020-04-06 Thread Gerd Hoffmann
On Mon, Apr 06, 2020 at 09:07:44AM +0200, Dmitry Vyukov wrote:
> On Mon, Apr 6, 2020 at 8:46 AM syzbot
>  wrote:
> >
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit:ffc1c20c Merge tag 'for-5.7/dm-changes' of git://git.kerne..
> > git tree:   upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=1690471fe0
> > kernel config:  https://syzkaller.appspot.com/x/.config?x=d6a1e2f9a9986236
> > dashboard link: https://syzkaller.appspot.com/bug?extid=d3a7951ed361037407db
> > compiler:   gcc (GCC) 9.0.0 20181231 (experimental)
> >
> > Unfortunately, I don't have any reproducer for this crash yet.
> >
> > IMPORTANT: if you fix the bug, please add the following tag to the commit:
> > Reported-by: syzbot+d3a7951ed36103740...@syzkaller.appspotmail.com
> 
> 
> +drivers/gpu/drm/virtio/virtgpu_object.c maintainers
> Now we have both mainline and linux-next boot broken (linux-next has
> been broken for the past 40 days).
> No testing of new code happens.
> 
> >  virtio_gpu_object_shmem_init drivers/gpu/drm/virtio/virtgpu_object.c:151 [inline]
> >  virtio_gpu_object_create+0x9f3/0xaa0 drivers/gpu/drm/virtio/virtgpu_object.c:230

Ah, that one.

broken patch: f651c8b05542 ("drm/virtio: factor out the sg_table from virtio_gpu_object")
fixed by: 0666a8d7f6a4 ("drm/virtio: fix OOB in virtio_gpu_object_create")

Both are in drm-misc-next.  I suspect the fix was added after
drm-misc-next was closed for the 5.7 merge window and thus should
have been submitted to drm-misc-next-fixes instead.

So, what to do now?  Should I cherry-pick 0666a8d7f6a4 into
drm-misc-next-fixes?  Or should it go into drm-misc-fixes instead?

thanks,
  Gerd



Re: [PATCH 31/44] drm/qxl: Don't use drm_device->dev_private

2020-04-06 Thread Gerd Hoffmann
On Fri, Apr 03, 2020 at 03:58:15PM +0200, Daniel Vetter wrote:
> Upcasting using a container_of macro is more typesafe, faster and
> easier for the compiler to optimize.
> 
> Signed-off-by: Daniel Vetter 
> Cc: Dave Airlie 
> Cc: Gerd Hoffmann 
> Cc: virtualizat...@lists.linux-foundation.org
> Cc: spice-de...@lists.freedesktop.org

Acked-by: Gerd Hoffmann 



Re: [PATCH 30/44] drm/qxl: Use devm_drm_dev_alloc

2020-04-06 Thread Gerd Hoffmann
On Fri, Apr 03, 2020 at 03:58:14PM +0200, Daniel Vetter wrote:
> Also need to remove the drm_dev_put from the remove hook.
> 
> Signed-off-by: Daniel Vetter 
> Cc: Dave Airlie 
> Cc: Gerd Hoffmann 
> Cc: virtualizat...@lists.linux-foundation.org
> Cc: spice-de...@lists.freedesktop.org

Acked-by: Gerd Hoffmann 



Re: [PATCH 5/5] drm/virtio: only call virtio_gpu_cmd_create_resource for dumb resources

2020-04-03 Thread Gerd Hoffmann
On Wed, Apr 01, 2020 at 03:30:39PM -0700, Gurchetan Singh wrote:
> We want to avoid this path for upcoming blob resources.

> - } else {
> + } else if (params->dumb) {

That should be posted as part of the actual blob resource patch series;
it doesn't make sense at all standalone.

The other patches 1-4 are fine, I'll go push them in a moment.

take care,
  Gerd



Re: [PATCH] drm/vram-helpers: Merge code into a single file

2020-03-31 Thread Gerd Hoffmann
On Tue, Mar 31, 2020 at 10:12:38AM +0200, Thomas Zimmermann wrote:
> Most of the documentation was in an otherwise empty file, which was
> probably just left from a previous clean-up effort. So move code and
> documentation into a single file.
> 
> Signed-off-by: Thomas Zimmermann 

Acked-by: Gerd Hoffmann 

cheers,
  Gerd



Re: [PATCH] drm/vram-helpers: Set plane fence for display update

2020-03-31 Thread Gerd Hoffmann
  Hi,

> +  * TODO: Synchronize with other users of the buffer. Buffers
> +  *   cannot be pinned to VRAM while they are in use by other
> +  *   drivers for DMA. We should probably wait for each GEM
> +  *   object's fence before attempting to pin the buffer.

The other option is p2p (Documentation/driver-api/pci/p2pdma.rst).

cheers,
  Gerd



Re: [PATCH v3 0/4] Support virtio cross-device resources

2020-03-20 Thread Gerd Hoffmann
On Wed, Mar 11, 2020 at 08:20:00PM +0900, David Stevens wrote:
> This patchset implements the current proposal for virtio cross-device
> resource sharing [1], with minor changes based on recent comments. It
> is expected that this will be used to import virtio resources into the
> virtio-video driver currently under discussion [2].
> 
> This patchset adds a new hook to dma-buf, for querying the dma-buf's
> underlying virtio UUID. This hook is then plumbed through DRM PRIME
> buffers, and finally implemented in virtgpu.

Looks all fine to me.  We should wait for the virtio protocol update
(patch 3/4) to be accepted into the virtio specification.  When this is
done I'll go commit this to drm-misc-next.

cheers,
  Gerd



Re: [PATCH 2/2] drm/format_helper: Dual licence the header in GPL-2 and MIT

2020-03-19 Thread Gerd Hoffmann
On Fri, Mar 20, 2020 at 03:21:14AM +0100, Emmanuel Vadot wrote:
> The source file was dual-licensed but the header was omitted; fix that.
> Contributors for this file are:
> Noralf Trønnes 
> Gerd Hoffmann 
> Thomas Gleixner 

Acked-by: Gerd Hoffmann 

> Signed-off-by: Emmanuel Vadot 
> ---
>  include/drm/drm_format_helper.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/drm/drm_format_helper.h b/include/drm/drm_format_helper.h
> index ac220aa1a245..7c5d4ffb2af2 100644
> --- a/include/drm/drm_format_helper.h
> +++ b/include/drm/drm_format_helper.h
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/* SPDX-License-Identifier: GPL-2.0 or MIT */
>  /*
>   * Copyright (C) 2016 Noralf Trønnes
>   */
> -- 
> 2.25.1
> 



Re: [PATCH] drm/virtio: fix OOB in virtio_gpu_object_create

2020-03-19 Thread Gerd Hoffmann
On Thu, Mar 19, 2020 at 11:04:21AM +0100, Jiri Slaby wrote:
> After commit f651c8b05542, virtio_gpu_create_object allocates too little
> space to fit everything in. It is because it allocates struct
> virtio_gpu_object, but should allocate a newly added struct
> virtio_gpu_object_shmem which has 2 more members.
> 
> So fix that by using correct type in virtio_gpu_create_object.
> 
> Signed-off-by: Jiri Slaby 
> Fixes: f651c8b05542 ("drm/virtio: factor out the sg_table from virtio_gpu_object")
> Cc: Gurchetan Singh 
> Cc: Gerd Hoffmann 

That was fast.  Yes, exactly this.  Pushed to drm-misc-next.
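
The fix boils down to allocating the outer structure (a paraphrased
sketch of the change in virtio_gpu_create_object(), not the verbatim
commit):

	struct virtio_gpu_object_shmem *shmem;

	/* not sizeof(struct virtio_gpu_object) */
	shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
	if (!shmem)
		return NULL;

	return &shmem->base.base.base;	/* embedded drm_gem_object */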

thanks,
  Gerd



Re: [PATCH v3 1/2] drm/virtio: factor out the sg_table from virtio_gpu_object

2020-03-19 Thread Gerd Hoffmann
On Thu, Mar 19, 2020 at 10:32:25AM +0100, Jiri Slaby wrote:
> On 05. 03. 20, 2:32, Gurchetan Singh wrote:
> > A resource will be a shmem based resource or a (planned)
> > vram based resource, so it makes sense to factor out common fields
> > (resource handle, dumb).
> > 
> > v2: move mapped field to shmem object
> 
> Hi,
> 
> I bisected the slab-out-of-bounds below to this patch. Is it known?

No.  I suspect sizeof(virtio_gpu_object) is used instead of
sizeof(virtio_gpu_object_shmem) for the allocation, I'll have a look ...

cheers,
  Gerd



Re: [PATCH v3] drm/bochs: downgrade pci_request_region failure from error to warning

2020-03-18 Thread Gerd Hoffmann
On Tue, Mar 17, 2020 at 05:49:41PM +0100, Daniel Vetter wrote:
> On Fri, Mar 13, 2020 at 09:41:52AM +0100, Gerd Hoffmann wrote:
> > Shutdown of firmware framebuffer has a bunch of problems.  Because
> > of this the framebuffer region might still be reserved even after
> > drm_fb_helper_remove_conflicting_pci_framebuffers() returned.
> 
> Is that still the fbdev lifetime fun where the cleanup might be delayed if
> the char device node is still open?

Yes.

cheers,
  Gerd



Re: [RFC PATCH 0/8] *** Per context fencing ***

2020-03-18 Thread Gerd Hoffmann
  Hi,

> > > I am still catching up, but IIUC, indeed I don't think the host needs
> > > to depend on fence_id.  We should be able to repurpose fence_id.
> >
> > I'd rather ignore it altogether for FENCE_V2 (or whatever we call the
> > feature flag).
> 
> That's fine when one assumes each virtqueue has one host GPU timeline.
> But when there are multiple (e.g., multiplexing multiple contexts over
> one virtqueue, or multiple VkQueues), fence_id can be repurposed as
> the host timeline id.

Why do you need an id for that?  You can simply keep track of the
submit_3d commands instead.

cheers,
  Gerd



Re: [RFC PATCH 0/8] *** Per context fencing ***

2020-03-16 Thread Gerd Hoffmann
  Hi,

> >> At virtio level it is pretty simple:  The host completes the SUBMIT_3D
> >> virtio command when it finished rendering, period.
> >>
> >>
> >> On the guest side we don't need the fence_id.  The completion callback
> >> gets passed the virtio_gpu_vbuffer, so it can figure which command did
> >> actually complete without looking at virtio_gpu_ctrl_hdr->fence_id.
> >>
> >> On the host side we depend on the fence_id right now, but only because
> >> that is the way the virgl_renderer_callbacks->write_fence interface is
> >> designed.  We have to change that anyway for per-context (or whatever)
> >> fences, so it should not be a problem to drop the fence_id dependency
> >> too and just pass around an opaque pointer instead.
> 
> I am still catching up, but IIUC, indeed I don't think the host needs
> to depend on fence_id.  We should be able to repurpose fence_id.

I'd rather ignore it altogether for FENCE_V2 (or whatever we call the
feature flag).

> On the other hand, the VIRTIO_GPU_FLAG_FENCE flag is interesting, and
> it indicates that the vbuf is on the host GPU timeline instead of the
> host CPU timeline.

Yep, we have to keep that (unless we do command completion on GPU
timeline unconditionally with FENCE_V2).

cheers,
  Gerd



Re: [PATCH v3] drm/bochs: downgrade pci_request_region failure from error to warning

2020-03-13 Thread Gerd Hoffmann
  Hi,

> > +   if (pci_request_region(pdev, 0, "bochs-drm") != 0)
> > +   DRM_WARN("Cannot request framebuffer, boot fb still active?\n");
> So you could use drm_WARN() which is what is preferred these days.

Nope, this isn't yet in -fixes.

cheers,
  Gerd



[PATCH v3] drm/bochs: downgrade pci_request_region failure from error to warning

2020-03-13 Thread Gerd Hoffmann
Shutdown of firmware framebuffer has a bunch of problems.  Because
of this the framebuffer region might still be reserved even after
drm_fb_helper_remove_conflicting_pci_framebuffers() returned.

Don't consider pci_request_region() failure for the framebuffer
region a fatal error, to work around this issue.

Reported-by: Marek Marczykowski-Górecki 
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/bochs/bochs_hw.c | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/bochs/bochs_hw.c b/drivers/gpu/drm/bochs/bochs_hw.c
index 952199cc0462..dce4672e3fc8 100644
--- a/drivers/gpu/drm/bochs/bochs_hw.c
+++ b/drivers/gpu/drm/bochs/bochs_hw.c
@@ -157,10 +157,8 @@ int bochs_hw_init(struct drm_device *dev)
size = min(size, mem);
}
 
-   if (pci_request_region(pdev, 0, "bochs-drm") != 0) {
-   DRM_ERROR("Cannot request framebuffer\n");
-   return -EBUSY;
-   }
+   if (pci_request_region(pdev, 0, "bochs-drm") != 0)
+   DRM_WARN("Cannot request framebuffer, boot fb still active?\n");
 
bochs->fb_map = ioremap(addr, size);
if (bochs->fb_map == NULL) {
-- 
2.18.2



Re: [RFC PATCH 0/8] *** Per context fencing ***

2020-03-12 Thread Gerd Hoffmann
On Wed, Mar 11, 2020 at 04:36:16PM -0700, Gurchetan Singh wrote:
> On Wed, Mar 11, 2020 at 3:36 AM Gerd Hoffmann  wrote:
> 
> >   Hi,
> >
> > > I should've been more clear -- this is an internal cleanup/preparation and
> > > the per-context changes are invisible to host userspace.
> >
> > Ok, it wasn't clear that you don't flip the switch yet.  In general the
> > commit messages could be a bit more verbose ...
> >
> > I'm wondering though why we need the new fence_id in the first place.
> > Isn't it enough to have per-context (instead of global) last_seq?
> >
> 
> Heh, that was to leave open the possibility of multiple timelines per
> context.  Roughly speaking,
> 
> V2 -- multiple processes
> V3 -- multiple processes and multiple threads (due to VK multi-threaded
> command buffers)
> 
> I think we all agree on V2.  It seems we still have to discuss V3
> (multi-queue, thread pools, a fence context associated with each thread) a
> bit more before we start landing pieces.

While thinking about the whole thing a bit more ...
Do we need virtio_gpu_ctrl_hdr->fence_id at all?

At virtio level it is pretty simple:  The host completes the SUBMIT_3D
virtio command when it finished rendering, period.

On the guest side we don't need the fence_id.  The completion callback
gets passed the virtio_gpu_vbuffer, so it can figure which command did
actually complete without looking at virtio_gpu_ctrl_hdr->fence_id.

On the host side we depend on the fence_id right now, but only because
that is the way the virgl_renderer_callbacks->write_fence interface is
designed.  We have to change that anyway for per-context (or whatever)
fences, so it should not be a problem to drop the fence_id dependency
too and just pass around an opaque pointer instead.
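
To illustrate, a minimal sketch of the guest side (the function name is
made up; virtio_gpu_vbuffer and its buf field are what the driver has
today):

/* sketch: identify the completed command from the vbuffer itself */
static void virtio_gpu_cmd_completed(struct virtio_gpu_device *vgdev,
                                     struct virtio_gpu_vbuffer *vbuf)
{
        struct virtio_gpu_ctrl_hdr *cmd =
                (struct virtio_gpu_ctrl_hdr *)vbuf->buf;

        /* cmd->type plus the vbuf pointer identify the command,
         * no virtio_gpu_ctrl_hdr->fence_id lookup needed */
}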

cheers,
  Gerd



Re: [RFC PATCH 0/8] *** Per context fencing ***

2020-03-12 Thread Gerd Hoffmann
  Hi,

> I will start with... how many timelines do we want to expose per
> context?  In my mind, it goes like
> 
> V1: 1 timeline per virtqueue (there is one timeline for ctrlq right now)
> V2: 1 timeline per context (VK and GL on different timelines)
> V3: N timelines per context (each VkQueue in a VK context gets a timeline?)
> V4: N+M timelines per context (each CPU thread also gets a timeline?!?!)
> 
> I certainly don't know if V4 is a good idea or not...

I'd expect apps to use one VkQueue per thread, so v3 makes sense but v4 not
so much I think ...

cheers,
  Gerd



Re: [RFC PATCH 0/8] *** Per context fencing ***

2020-03-12 Thread Gerd Hoffmann
  Hi,

> Can virtqueues be added dynamically?

No.

> Or can we have
> fixed but enough (e.g., 64) virtqueues?

Well, I wouldn't bet that any specific number is enough.  When gtk
starts rendering using vulkan by default the number of contexts of a
standard gnome desktop will be pretty high I guess ...

We can have a host-configurable number of virtqueues.  Typically the
guest can figure the number of available queues from config space.  One
common usage pattern (seen in block/net devices) is to have the number
of virtqueues equal the number of vcpus, so each vcpu has a dedicated
virtqueue.
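
A sketch of what that could look like; the first four fields are the
existing virtio_gpu_config layout, num_queues is an assumption:

struct virtio_gpu_config {
        __u32 events_read;
        __u32 events_clear;
        __u32 num_scanouts;
        __u32 num_capsets;
        __u32 num_queues;  /* hypothetical: host-configurable queue count */
};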

For virtio-gpu it is probably more useful to bind contexts to
virtqueues.  As long as we have enough virtqueues each context can
have its own.  Once we run out of virtqueues we have to start sharing.

On the host side it probably makes sense to have one process per
virtqueue.

cheers,
  Gerd



Re: [RFC PATCH 0/8] *** Per context fencing ***

2020-03-11 Thread Gerd Hoffmann
  Hi,

> I should've been more clear -- this is an internal cleanup/preparation and
> the per-context changes are invisible to host userspace.

Ok, it wasn't clear that you don't flip the switch yet.  In general the
commit messages could be a bit more verbose ...

I'm wondering though why we need the new fence_id in the first place.
Isn't it enough to have per-context (instead of global) last_seq?

> Multi-queue sounds very interesting indeed, especially with VK
> multi-threaded command submission.  That to me is V3 rather than V2.. let's
> start easy!

Having v2 if we plan to obsolete it with v3 soon doesn't look like a
good plan to me.  It'll make backward compatibility more complex for
no good reason ...

Also: Does virglrenderer render different contexts in parallel today?
Only if it does will we actually get benefits from per-context
fences.  But I think it doesn't, so there is no need to rush.

I think we'd better have a rough plan for parallel rendering first,
then start implementing the pieces needed.

cheers,
  Gerd



Re: [RFC PATCH 0/8] *** Per context fencing ***

2020-03-10 Thread Gerd Hoffmann
On Mon, Mar 09, 2020 at 06:08:10PM -0700, Gurchetan Singh wrote:
> We don't want fences from different 3D contexts/processes (GL, VK) to
> be on the same timeline. Sending this out as a RFC to solicit feedback
> on the general approach.

NACK.

virtio fences are global, period.  You can't simply change fence
semantics like this.  At least not without a virtio protocol update
because guest and host need to be on the same page here.  Just look at
virgl_renderer_callbacks->write_fences() and how it doesn't consider
contexts at all.

So one way out would be to add a virtio feature flag for this, so guest
& host can negotiate whether fences are global or per-context.
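
As a sketch, the guest-side negotiation would follow the existing
feature probes; the bit number and names are placeholders, not a spec
proposal:

#define VIRTIO_GPU_F_PER_CTX_FENCE 3  /* placeholder */

/* in virtio_gpu_init(), next to the VIRTIO_GPU_F_VIRGL/F_EDID checks */
if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_PER_CTX_FENCE))
        vgdev->has_per_ctx_fence = true;  /* hypothetical flag */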

Another approach would be to go multiqueue, with each virtqueue being
roughly the same as a rendering pipeline on physical hardware, then have
per-virtqueue fences.

When going multiqueue we might also rethink the cursor queue approach.
I think it makes sense to simply allow sending cursor commands to any
queue then.  A separate cursor queue continues to be an option for the
guest then, but I'm not sure how useful that actually is in practice
given that cursor image updates are regular resource updates and have to
go through the control queue, so virtio_gpu_cursor_plane_update() has to
wait for the resource update finish before it can queue the cursor
command.  I suspect the idea to fast-track cursor updates with a
separate queue doesn't work that well in practice because of that.

cheers,
  Gerd



Re: [PATCH v3 1/2] drm/virtio: factor out the sg_table from virtio_gpu_object

2020-03-09 Thread Gerd Hoffmann
On Wed, Mar 04, 2020 at 05:32:11PM -0800, Gurchetan Singh wrote:
> A resource will be a shmem based resource or a (planned)
> vram based resource, so it makes sense to factor out common fields
> (resource handle, dumb).
> 
> v2: move mapped field to shmem object

Pushed to drm-misc-next.

thanks,
  Gerd



Re: [PATCH 6/8] drm/vram-helper: don't use ttm bo->offset v3

2020-03-05 Thread Gerd Hoffmann
On Thu, Mar 05, 2020 at 02:29:08PM +0100, Nirmoy Das wrote:
> Calculate GEM VRAM bo's offset within vram-helper without depending on
> bo->offset.
> 
> Signed-off-by: Nirmoy Das 
> Reviewed-by: Daniel Vetter 
> ---
>  drivers/gpu/drm/drm_gem_vram_helper.c | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 92a11bb42365..2749c2d25ac4 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -198,6 +198,13 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo)
>  }
>  EXPORT_SYMBOL(drm_gem_vram_mmap_offset);
> 
> +static s64 drm_gem_vram_pg_offset(struct drm_gem_vram_object *gbo)
> +{
> + if (WARN_ON_ONCE(!gbo->bo.mem.mm_node))
> + return 0;

returns 0 on error.

> + return gbo->bo.mem.start;
> +}
> +
>  /**
>   * drm_gem_vram_offset() - \
>   Returns a GEM VRAM object's offset in video memory
> @@ -214,7 +221,7 @@ s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo)
>  {
>   if (WARN_ON_ONCE(!gbo->pin_count))
>   return (s64)-ENODEV;

returns -errno on error.

> - return gbo->bo.offset;
> + return drm_gem_vram_pg_offset(gbo) << PAGE_SHIFT;

And given that one calls the other, the behavior on error should better
be consistent ...
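
One way to make the pair consistent, returning -errno from both
helpers (a sketch of the review comment, not the merged patch):

static s64 drm_gem_vram_pg_offset(struct drm_gem_vram_object *gbo)
{
        if (WARN_ON_ONCE(!gbo->bo.mem.mm_node))
                return (s64)-ENODEV;
        return gbo->bo.mem.start;
}

s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo)
{
        s64 pg_offset;

        if (WARN_ON_ONCE(!gbo->pin_count))
                return (s64)-ENODEV;
        pg_offset = drm_gem_vram_pg_offset(gbo);
        if (pg_offset < 0)
                return pg_offset;
        return pg_offset << PAGE_SHIFT;
}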

cheers,
  Gerd



Re: [PATCH v2 2/2] drm/virtio: add case for shmem objects in virtio_gpu_cleanup_object(..)

2020-03-04 Thread Gerd Hoffmann
  Hi,

> + drm_gem_shmem_free_object(&bo->base.base);
>   }
> +
>   virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);

use-after-free here.

cheers,
  Gerd



Re: [virtio-dev] [PATCH v2 4/4] drm/virtio: Support virtgpu exported resources

2020-03-04 Thread Gerd Hoffmann
On Tue, Mar 03, 2020 at 11:42:22AM +0900, David Stevens wrote:
> > cmd_p->hdr.ctx_id =
> >
> > Before this completion of this hypercall, this resource can be
> > considered context local, while afterward it can be considered
> > "exported".
> 
> Maybe I'm misunderstanding render contexts, but exporting a resource
> doesn't seem related to render contexts.

It isn't indeed.  Binding resources to contexts might need dma-buf
imports/exports on the host side, but that is another story and not
related to dma-buf exports inside the guest.

cheers,
  Gerd



Re: [PATCH v2 4/4] drm/virtio: Support virtgpu exported resources

2020-03-04 Thread Gerd Hoffmann
  Hi,

> + if (vgdev->has_resource_assign_uuid) {
> + spin_lock(&vgdev->resource_export_lock);
> + if (bo->uuid_state == UUID_NOT_INITIALIZED) {
> + bo->uuid_state = UUID_INITIALIZING;
> + needs_init = true;
> + }
> + spin_unlock(&vgdev->resource_export_lock);
> +
> + if (needs_init) {
> + ret = virtio_gpu_cmd_resource_assign_uuid(vgdev, bo);

You can submit a fenced command, then wait on the fence here.  Removes
the need for UUID_INITIALIZING.

Also note that this function will be called only once, on the first
export.  When exporting the same object again drm will simply reuse
the existing dmabuf.  You can drop UUID_NOT_INITIALIZED and needs_init.

So you are left with only two uuid_state states.  You could turn uuid
into a pointer, so it gets only allocated when needed.  Also uuid ==
NULL can be used for "uuid not available" then.
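
Putting that together, the export path could shrink to something like
this sketch (the fence argument to the assign-uuid command is part of
the suggestion, not the posted patch):

/* uuid is a pointer now, NULL means "not exported yet" */
static int virtio_gpu_export_uuid(struct virtio_gpu_device *vgdev,
                                  struct virtio_gpu_object *bo)
{
        struct virtio_gpu_fence *fence;
        int ret;

        if (bo->uuid)  /* drm reuses the dma-buf, so this runs once */
                return 0;

        fence = virtio_gpu_fence_alloc(vgdev);
        if (!fence)
                return -ENOMEM;
        ret = virtio_gpu_cmd_resource_assign_uuid(vgdev, bo, fence);
        if (!ret)
                dma_fence_wait(&fence->f, false);
        dma_fence_put(&fence->f);
        return ret;
}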

cheers,
  Gerd



Re: [PATCH v2 1/4] dma-buf: add support for virtio exported objects

2020-03-03 Thread Gerd Hoffmann
On Mon, Mar 02, 2020 at 09:15:21PM +0900, David Stevens wrote:
> This change adds a new dma-buf operation that allows dma-bufs to be used
> by virtio drivers to share exported objects. The new operation allows
> the importing driver to query the exporting driver for the UUID which
> identifies the underlying exported object.
> 
> Signed-off-by: David Stevens 
> ---
>  drivers/dma-buf/dma-buf.c | 14 ++
>  include/linux/dma-buf.h   | 22 ++
>  2 files changed, 36 insertions(+)
> 
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index d4097856c86b..a04632284ec2 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1158,6 +1158,20 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void 
> *vaddr)
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_vunmap);
>  
> +#ifdef CONFIG_VIRTIO
> +int dma_buf_get_uuid(struct dma_buf *dmabuf, uuid_t *uuid)

Hmm, I think I would drop the #ifdef

cheers,
  Gerd



Re: [PATCH 2/2] drm/virtio: make sure virtio_gpu_cleanup_object(..) only happens on shmem objects

2020-03-03 Thread Gerd Hoffmann
> This function won't be useable for hostmem objects.

> @@ -526,7 +526,8 @@ static void virtio_gpu_cmd_unref_cb(struct virtio_gpu_device *vgdev,
>   bo = vbuf->resp_cb_data;
>   vbuf->resp_cb_data = NULL;
>  
> - virtio_gpu_cleanup_object(bo);
> + if (bo && virtio_gpu_is_shmem(bo))
> + virtio_gpu_cleanup_object(bo);

Its not that simple, the virtio_gpu_resource_id_put() call in
virtio_gpu_cleanup_object() is needed for all objects.  We also
must free all objects.

I'd suggest to move the virtio_gpu_is_shmem() check to
virtio_gpu_cleanup_object().
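
A sketch of the resulting shape: the resource id is released for every
object type, only the shmem teardown sits behind the check:

void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
{
        struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;

        virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
        if (virtio_gpu_is_shmem(bo)) {
                /* shmem-only teardown: unmap/put the pages and free the
                 * sg_table, with drm_gem_shmem_free_object(&bo->base.base)
                 * as the very last step */
        }
}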

cheers,
  Gerd



Re: [PATCH 1/2] drm/virtio: factor out the sg_table from virtio_gpu_object

2020-03-03 Thread Gerd Hoffmann
  Hi,

>  struct virtio_gpu_object {
>   struct drm_gem_shmem_object base;
>   uint32_t hw_res_handle;
> -
> - struct sg_table *pages;
>   uint32_t mapped;
> -
>   bool dumb;
>   bool created;
>  };
>  #define gem_to_virtio_gpu_obj(gobj) \
>   container_of((gobj), struct virtio_gpu_object, base.base)
>  
> +struct virtio_gpu_object_shmem {
> + struct virtio_gpu_object base;
> + struct sg_table *pages;
> +};

mapped can be moved too.

> @@ -600,10 +600,11 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,

> + struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);

Should we pass struct virtio_gpu_object_shmem to
virtio_gpu_cmd_transfer_to_host_2d (+friends) instead?

hostmem will not need transfers ...

cheers,
  Gerd



Re: [PATCH 2/2] [RFC] drm/virtgpu: modify uapi with stride/layer_stride fix

2020-03-03 Thread Gerd Hoffmann
On Fri, Feb 28, 2020 at 01:01:49PM -0800, Chia-I Wu wrote:
> On Wed, Oct 2, 2019 at 5:18 PM Gurchetan Singh
>  wrote:
> >
> > On Wed, Oct 2, 2019 at 1:49 AM Gerd Hoffmann  wrote:
> > >
> > > On Tue, Oct 01, 2019 at 06:49:35PM -0700, Gurchetan Singh wrote:
> > > > This doesn't really break userspace, since it always passes down
> > > > 0 for stride/layer_stride currently. We could:
> > > >
> > > > (1) modify UAPI now and add a VIRTGPU_PARAM_STRIDE_FIX feature
> > >
> > > This I think.
> > > But IMO it's not a fix, it is an added feature ...
> > >
> > > Also missing the big picture here.  Why do we need this?
> >
> > Two reasons:
> I don't fully get the picture, but drm_virtgpu_resource_create has
> stride.  Can we send that down in transfers?

It's unused and suddenly caring about it has a good chance to break
stuff ...

cheers,
  Gerd



Re: [PATCH 30/51] drm/cirrus: Fully embrace devm_

2020-03-02 Thread Gerd Hoffmann
On Mon, Mar 02, 2020 at 11:26:10PM +0100, Daniel Vetter wrote:
> With the drm_device lifetime fun cleaned up there's nothing in the way
> anymore to use devm_ for everything hw related. Do it, and in the
> process, throw out the entire onion unwinding.

Acked-by: Gerd Hoffmann 



Re: [PATCH 28/51] drm/bochs: Drop explicit drm_mode_config_cleanup

2020-03-02 Thread Gerd Hoffmann
On Mon, Mar 02, 2020 at 11:26:08PM +0100, Daniel Vetter wrote:
> Instead rely on the automatic clean, for which we just need to check
> that drm_mode_config_init succeeded. To avoid an inversion in the
> cleanup we also have to move the dev_private allocation over to
> drmm_kzalloc.
> 
> This is made possible by a preceding patch which added a drmm_
> cleanup action to drm_mode_config_init(), hence all we need to do to
> ensure that drm_mode_config_cleanup() is run on final drm_device
> cleanup is check the new error code for _init().

Acked-by: Gerd Hoffmann 



Re: [PATCH 29/51] drm/cirrus: Drop explicit drm_mode_config_cleanup call

2020-03-02 Thread Gerd Hoffmann
On Mon, Mar 02, 2020 at 11:26:09PM +0100, Daniel Vetter wrote:
> We can even delete the drm_driver.release hook now!
> 
> This is made possible by a preceding patch which added a drmm_
> cleanup action to drm_mode_config_init(), hence all we need to do to
> ensure that drm_mode_config_cleanup() is run on final drm_device
> cleanup is check the new error code for _init().

Acked-by: Gerd Hoffmann 



Re: [PATCH 27/51] drm/bochs: Remove leftover drm_atomic_helper_shutdown

2020-03-02 Thread Gerd Hoffmann
On Mon, Mar 02, 2020 at 11:26:07PM +0100, Daniel Vetter wrote:
> Small mistake that crept into
> 
> commit 81da8c3b8d3df6f05b11300b7d17ccd1f3017fab
> Author: Gerd Hoffmann 
> Date:   Tue Feb 11 14:52:18 2020 +0100
> 
> drm/bochs: add drm_driver.release callback
> 
> where drm_atomic_helper_shutdown was left in both places. The
> ->release callback really shouldn't touch hardware.
> 
> Cc: Gerd Hoffmann 
> Signed-off-by: Daniel Vetter 

Acked-by: Gerd Hoffmann 

> ---
>  drivers/gpu/drm/bochs/bochs_kms.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 8066d7d370d5..e8cc8156d773 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -166,6 +166,5 @@ void bochs_kms_fini(struct bochs_device *bochs)
>   if (!bochs->dev->mode_config.num_connector)
>   return;
>  
> - drm_atomic_helper_shutdown(bochs->dev);
>   drm_mode_config_cleanup(bochs->dev);
>  }
> -- 
> 2.24.1
> 



Re: [PATCH 09/51] drm/cirrus: Use drmm_add_final_kfree

2020-03-02 Thread Gerd Hoffmann
> @@ -575,9 +574,12 @@ static int cirrus_pci_probe(struct pci_dev *pdev,
>  
>   dev = &cirrus->dev;
>   ret = drm_dev_init(dev, &cirrus_driver, &pdev->dev);
> - if (ret)
> - goto err_free_cirrus;
> + if (ret) {
> + kfree(cirrus);
> + goto err_pci_release;
> + }
>   dev->dev_private = cirrus;
> + drmm_add_final_kfree(dev, cirrus);

That doesn't look like an error path improvement.
With patch #30 applied it'll look a lot better though.
So maybe squash the patches?

In any case:
Acked-by: Gerd Hoffmann 

cheers,
  Gerd



Re: [PATCH 07/51] drm/qxl: Use drmm_add_final_kfree

2020-03-02 Thread Gerd Hoffmann
On Mon, Mar 02, 2020 at 11:25:47PM +0100, Daniel Vetter wrote:
> With this we can drop the final kfree from the release function.

Acked-by: Gerd Hoffmann 

> 
> Signed-off-by: Daniel Vetter 
> Cc: Dave Airlie 
> Cc: Gerd Hoffmann 
> Cc: virtualizat...@lists.linux-foundation.org
> Cc: spice-de...@lists.freedesktop.org
> ---
>  drivers/gpu/drm/qxl/qxl_drv.c | 2 --
>  drivers/gpu/drm/qxl/qxl_kms.c | 2 ++
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
> index 4fda3f9b29f4..09102e2efabc 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.c
> +++ b/drivers/gpu/drm/qxl/qxl_drv.c
> @@ -144,8 +144,6 @@ static void qxl_drm_release(struct drm_device *dev)
>*/
>   qxl_modeset_fini(qdev);
>   qxl_device_fini(qdev);
> - dev->dev_private = NULL;
> - kfree(qdev);
>  }
>  
>  static void
> diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
> index 70b20ee4741a..09d7b5f6d172 100644
> --- a/drivers/gpu/drm/qxl/qxl_kms.c
> +++ b/drivers/gpu/drm/qxl/qxl_kms.c
> @@ -27,6 +27,7 @@
>  #include 
>  
>  #include 
> +#include 
>  #include 
>  
>  #include "qxl_drv.h"
> @@ -121,6 +122,7 @@ int qxl_device_init(struct qxl_device *qdev,
>   qdev->ddev.pdev = pdev;
>   pci_set_drvdata(pdev, &qdev->ddev);
>   qdev->ddev.dev_private = qdev;
> + drmm_add_final_kfree(&qdev->ddev, qdev);
>  
>   mutex_init(>gem.mutex);
>   mutex_init(>update_area_mutex);
> -- 
> 2.24.1
> 



Re: [PATCH 04/51] drm: Set final_kfree in drm_dev_alloc

2020-03-02 Thread Gerd Hoffmann
On Mon, Mar 02, 2020 at 11:25:44PM +0100, Daniel Vetter wrote:
> I also did a full review of all callers, and only the xen driver
> forgot to call drm_dev_put in the failure path. Fix that up too.
> 
> v2: I noticed that xen has a drm_driver.release hook, and uses
> drm_dev_alloc(). We need to remove the kfree from
> xen_drm_drv_release().
> 
> bochs also has a release hook, but leaked the drm_device ever since
> 
> commit 0a6659bdc5e8221da99eebb176fd9591435e38de
> Author: Gerd Hoffmann 
> Date:   Tue Dec 17 18:04:46 2013 +0100
> 
> drm/bochs: new driver
> 
> This patch here fixes that leak.
> 
> Same for virtio, started leaking with
> 
> commit b1df3a2b24a917f8853d43fe9683c0e360d2c33a
> Author: Gerd Hoffmann 
> Date:   Tue Feb 11 14:58:04 2020 +0100
> 
>     drm/virtio: add drm_driver.release callback.

Acked-by: Gerd Hoffmann 

> 
> Cc: Gerd Hoffmann 
> Cc: Oleksandr Andrushchenko 
> Cc: xen-de...@lists.xenproject.org
> 
> Reviewed-by: Oleksandr Andrushchenko 
> Signed-off-by: Daniel Vetter 
> Cc: Maarten Lankhorst 
> Cc: Maxime Ripard 
> Cc: Thomas Zimmermann 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Oleksandr Andrushchenko 
> Cc: xen-de...@lists.xenproject.org
> ---
>  drivers/gpu/drm/drm_drv.c   | 3 +++
>  drivers/gpu/drm/xen/xen_drm_front.c | 2 +-
>  2 files changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
> index 153050fc926c..7b84ee8a5eb5 100644
> --- a/drivers/gpu/drm/drm_drv.c
> +++ b/drivers/gpu/drm/drm_drv.c
> @@ -39,6 +39,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  
> @@ -819,6 +820,8 @@ struct drm_device *drm_dev_alloc(struct drm_driver *driver,
>   return ERR_PTR(ret);
>   }
>  
> + drmm_add_final_kfree(dev, dev);
> +
>   return dev;
>  }
>  EXPORT_SYMBOL(drm_dev_alloc);
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
> index 4be49c1aef51..d22b5da38935 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
> @@ -461,7 +461,6 @@ static void xen_drm_drv_release(struct drm_device *dev)
>   drm_mode_config_cleanup(dev);
>  
>   drm_dev_fini(dev);
> - kfree(dev);
>  
>   if (front_info->cfg.be_alloc)
>   xenbus_switch_state(front_info->xb_dev,
> @@ -561,6 +560,7 @@ static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
>  fail_modeset:
>   drm_kms_helper_poll_fini(drm_dev);
>   drm_mode_config_cleanup(drm_dev);
> + drm_dev_put(drm_dev);
>  fail:
>   kfree(drm_info);
>   return ret;
> -- 
> 2.24.1
> 



Re: [PATCH] drm/bochs: Remove vga write

2020-03-02 Thread Gerd Hoffmann
On Mon, Mar 02, 2020 at 02:14:02PM -0800, Alistair Francis wrote:
> On Fri, Feb 28, 2020 at 1:57 AM Gerd Hoffmann  wrote:
> >
> > On Thu, Feb 27, 2020 at 01:04:54PM -0800, Alistair Francis wrote:
> > > The QEMU model for the Bochs display has no VGA memory section at offset
> > > 0x400 [1]. By writing to this register Linux can create a write to
> > > unassigned memory which depending on machine and architecture can result
> > > in a store fault.
> > >
> > > I don't see any reference to this address at OSDev [2] or in the Bochs
> > > source code.
> > >
> > > Removing this write still allows graphics to work inside QEMU with
> > > the bochs-display.
> >
> > It's not that simple.  The driver also handles the qemu stdvga (-device
> > VGA, -device secondary-vga) which *does* need the vga port write.
> > There is no way for the guest to figure out whether the device is
> > secondary-vga or bochs-display.
> >
> > So how about fixing things on the host side?  Does qemu patch below
> > help?
> 
> That patch looks like it will fix the problem, but it doesn't seem
> like the correct fix. I would rather avoid adding a large chunk of
> dummy I/O to handle the two devices.

It's just a single handler for the parent mmio region, so we have a
defined default action instead of undefined behavior.

Patch just posted to qemu-devel, let's see what others think ...

> > Maybe another possible approach is to enable/disable vga access per
> > arch.  On x86 this doesn't cause any problems.  I guess you are on
> > risc-v?
> 
> I would prefer this option. I do see this on RISC-V, but I suspect the
> issue will appear on other architectures (although how they handle I/O
> failures in QEMU is a different story).
> 
> Can I just do the VGA write if x86?

I know ppc needs it too.  Not sure about other architectures.  I'd
suggest to do it the other way around: blacklist known-problematic
archs.
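
A sketch of that deny-list idea for bochs; treating risc-v as the
known-problematic arch is an assumption based on this report:

static bool bochs_vga_ports_usable(void)
{
        if (IS_ENABLED(CONFIG_RISCV))
                return false;  /* store fault on unassigned mmio */
        return true;           /* x86 and ppc need the vga write */
}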

cheers,
  Gerd



[PATCH] drm/shmem: drop pgprot_decrypted()

2020-02-28 Thread Gerd Hoffmann
Was added by commit 95cf9264d5f3 ("x86, drm, fbdev: Do not specify
encrypted memory for video mappings"), then it was kept through various
changes.

While vram actually needs decrypted mappings this is not correct for
shmem gem objects which live in main memory not io memory, so remove the
call.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index aad9324dcf4f..df31e5782eed 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -548,7 +548,6 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
if (!shmem->map_cached)
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
-   vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
vma->vm_ops = &drm_gem_shmem_vm_ops;
 
return 0;
-- 
2.18.2



Re: [PATCH v5 1/3] drm/shmem: add support for per object caching flags.

2020-02-28 Thread Gerd Hoffmann
On Fri, Feb 28, 2020 at 10:54:54AM +0100, Thomas Hellström (VMware) wrote:
> On 2/28/20 10:49 AM, Gerd Hoffmann wrote:
> >Hi,
> > 
> > > > No clue about the others (lima, tiny, panfrost, v3d).  Maybe they use
> > > > write-combine just because this is what they got by default from
> > > > drm_gem_mmap_obj().  Maybe they actually need that.  Trying to Cc:
> > > > maintainers (and drop stable@).
> > > > virtio-gpu needs it, otherwise the host can't show the virtual display.
> > > > cirrus bounces everything via blits to vram, so it should be ok without
> > > > decrypted.  I guess that implies we should make decrypted configurable.
> > > Decrypted here is clearly incorrect and violates the SEV spec,
> > > regardless of a config option.
> > > 
> > > The only correct way is currently to use dma_alloc_coherent() and
> > > mmap_coherent() to allocate decrypted memory and then use the
> > > pgprot_decrypted flag.
> > Hmm, virtio-gpu uses the dma api to allow the host to access the gem
> > object.  So I think I have to correct the statement above: if I
> > understand things correctly the dma api will use (properly allocated)
> > decrypted bounce buffers and the virtio-gpu shmem objects don't need
> > pgprot_decrypted mappings.
> 
> Yes, that sounds more correct. I wonder whether the "pgprot_decrypted()"
> perhaps remains from mapping VRAM gem buffers...

Commit 95cf9264d5f3 ("x86, drm, fbdev: Do not specify encrypted memory
for video mappings") added it, then it was kept through various changes.

cheers,
  Gerd



Re: [RFC PATCH 0/8] *** Refactor struct virtgpu_object ***

2020-02-28 Thread Gerd Hoffmann
  Hi,

> > struct virtgpu_object {
> 
> Yeah, using "virtgpu_" rather than "virtio_gpu" makes sense.

It wasn't my intention to suggest a rename.  It's just that the kernel
is a bit inconsistent here and I picked the wrong name.  Most
places use virtio_gpu but some use virtgpu (file names, ioctl api).

> > struct virtgpu_object_hostmem {
> > struct virtgpu_object base;
> > {offset, range};
> > (...)
> 
> I'm a kernel newbie, so it's not obvious to me why struct
> drm_gem_shmem_object would be a base class for struct
> virtgpu_object_hostmem?

I think it is easier to just continue using virtio_gpu_object in most
places and cast to virtio_gpu_object_{shmem,hostmem} only if needed.
Makes it easier to deal with common fields like hw_res_handle.

In the hostmem case we would simply not use the drm_gem_shmem_object
fields except for drm_gem_shmem_object.base (which is drm_gem_object).

> Side question: is drm_gem_object_funcs.vmap(..) /
> drm_gem_object_funcs.vunmap(..) even possible for hostmem?

Sure.  Using ioremap should work, after asking the host to map the
object at some location in the pci bar.
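
A rough sketch of such a vmap; the hostmem field names and the
to_hostmem() helper are assumptions about the planned object type:

static void *virtgpu_hostmem_vmap(struct drm_gem_object *obj)
{
        struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
        struct virtgpu_object_hostmem *hostmem = to_hostmem(bo);

        /* assumes the host already mapped the resource into the bar */
        return ioremap(hostmem->bar_base + hostmem->offset, obj->size);
}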

cheers,
  Gerd



Re: [PATCH] drm/bochs: Remove vga write

2020-02-28 Thread Gerd Hoffmann
On Thu, Feb 27, 2020 at 01:04:54PM -0800, Alistair Francis wrote:
> The QEMU model for the Bochs display has no VGA memory section at offset
> 0x400 [1]. By writing to this register Linux can create a write to
> unassigned memory which depending on machine and architecture can result
> in a store fault.
> 
> I don't see any reference to this address at OSDev [2] or in the Bochs
> source code.
> 
> Removing this write still allows graphics to work inside QEMU with
> the bochs-display.

It's not that simple.  The driver also handles the qemu stdvga (-device
VGA, -device secondary-vga) which *does* need the vga port write.
There is no way for the guest to figure out whether the device is
secondary-vga or bochs-display.

So how about fixing things on the host side?  Does qemu patch below
help?

Maybe another possible approach is to enable/disable vga access per
arch.  On x86 this doesn't cause any problems.  I guess you are on
risc-v?

cheers,
  Gerd

diff --git a/hw/display/bochs-display.c b/hw/display/bochs-display.c
index 62085f9fc063..e93e838243b8 100644
--- a/hw/display/bochs-display.c
+++ b/hw/display/bochs-display.c
@@ -151,6 +151,26 @@ static const MemoryRegionOps bochs_display_qext_ops = {
 .endianness = DEVICE_LITTLE_ENDIAN,
 };
 
+static uint64_t dummy_read(void *ptr, hwaddr addr, unsigned size)
+{
+return -1;
+}
+
+static void dummy_write(void *ptr, hwaddr addr,
+uint64_t val, unsigned size)
+{
+}
+
+static const MemoryRegionOps dummy_ops = {
+.read = dummy_read,
+.write = dummy_write,
+.valid.min_access_size = 1,
+.valid.max_access_size = 4,
+.impl.min_access_size = 1,
+.impl.max_access_size = 1,
+.endianness = DEVICE_LITTLE_ENDIAN,
+};
+
 static int bochs_display_get_mode(BochsDisplayState *s,
                                   BochsDisplayMode *mode)
 {
@@ -284,8 +304,8 @@ static void bochs_display_realize(PCIDevice *dev, Error 
**errp)
 memory_region_init_io(&s->qext, obj, &bochs_display_qext_ops, s,
                       "qemu extended regs", PCI_VGA_QEXT_SIZE);
 
-memory_region_init(&s->mmio, obj, "bochs-display-mmio",
-                   PCI_VGA_MMIO_SIZE);
+memory_region_init_io(&s->mmio, obj, &dummy_ops, NULL,
+                      "bochs-display-mmio", PCI_VGA_MMIO_SIZE);
 memory_region_add_subregion(&s->mmio, PCI_VGA_BOCHS_OFFSET, &s->vbe);
 memory_region_add_subregion(&s->mmio, PCI_VGA_QEXT_OFFSET, &s->qext);
 



Re: [PATCH v5 1/3] drm/shmem: add support for per object caching flags.

2020-02-28 Thread Gerd Hoffmann
  Hi,

> > No clue about the others (lima, tiny, panfrost, v3d).  Maybe they use
> > write-combine just because this is what they got by default from
> > drm_gem_mmap_obj().  Maybe they actually need that.  Trying to Cc:
> > maintainers (and drop stable@).

> > virtio-gpu needs it, otherwise the host can't show the virtual display.
> > cirrus bounces everything via blits to vram, so it should be ok without
> > decrypted.  I guess that implies we should make decrypted configurable.
> 
> Decrypted here is clearly incorrect and violates the SEV spec, regardless of
> a config option.
> 
> The only correct way is currently to use dma_alloc_coherent() and
> mmap_coherent() to allocate decrypted memory and then use the
> pgprot_decrypted flag.

Hmm, virtio-gpu uses the dma api to allow the host to access the gem
object.  So I think I have to correct the statement above: if I
understand things correctly the dma api will use (properly allocated)
decrypted bounce buffers and the virtio-gpu shmem objects don't need
pgprot_decrypted mappings.

That leaves the question what to do about pgprot_writecombine().  Any
comments from the driver maintainers (see first paragraph)?

cheers,
  Gerd



Re: [PATCH v5 1/3] drm/shmem: add support for per object caching flags.

2020-02-27 Thread Gerd Hoffmann
  Hi,

> > So I'd like to push patches 1+2 to -fixes and sort everything else later
> > in -next.  OK?
> 
> OK with me.

Done.

>> [ context: why shmem helpers use pgprot_writecombine + pgprot_decrypted?
>>we get conflicting mappings because of that, linear kernel
>>map vs. gem object vmap/mmap ]

> Do we have any idea what drivers are actually using
> write-combine and decrypted?

drivers/gpu/drm# find -name Kconfig* -print | xargs grep -l DRM_GEM_SHMEM_HELPER
./lima/Kconfig
./tiny/Kconfig
./cirrus/Kconfig
./Kconfig
./panfrost/Kconfig
./udl/Kconfig
./v3d/Kconfig
./virtio/Kconfig

virtio needs cached.
cirrus+udl should be ok with cached too.

No clue about the others (lima, tiny, panfrost, v3d).  Maybe they use
write-combine just because this is what they got by default from
drm_gem_mmap_obj().  Maybe they actually need that.  Trying to Cc:
maintainers (and drop stable@).

On decrypted: I guess that is only relevant for virtual machines, i.e.
virtio-gpu and cirrus?

virtio-gpu needs it, otherwise the host can't show the virtual display.
cirrus bounces everything via blits to vram, so it should be ok without
decrypted.  I guess that implies we should make decrypted configurable.

cheers,
  Gerd



Re: [RFC PATCH 0/8] *** Refactor struct virtgpu_object ***

2020-02-27 Thread Gerd Hoffmann
On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> The main motivation behind this is to eventually have something like this:

patches 1+2 cherry-picked and pushed to -next.

thanks,
  Gerd



Re: [PATCH v5 1/3] drm/shmem: add support for per object caching flags.

2020-02-27 Thread Gerd Hoffmann
  Hi,

> I think it might be safe for some integrated graphics where the driver
> maintainers can guarantee that it's safe on all particular processors used
> with that driver, but then IMO it should be moved out to those drivers.
> 
> Other drivers needing write-combine shouldn't really use shmem.
> 
> So again, to fix the regression, could we revert 0be895893607f ("drm/shmem:
> switch shmem helper to _gem_object_funcs.mmap") or does that have other
> implications?

This patch isn't a regression.  The old code path has the
pgprot_writecombine() call in drm_gem_mmap_obj(), so the behavior
is the same before and afterwards.

But with the patch in place we can easily have shmem helpers do their
own thing instead of depending on whatever drm_gem_mmap_obj() is doing.
Just using cached mappings unconditionally would be perfectly fine for
virtio-gpu.

Not sure about the other users though.  I'd like to fix the virtio-gpu
regression (coming from ttm -> shmem switch) asap, and I don't feel like
changing the behavior for other drivers in 5.6-rc is a good idea.

So I'd like to push patches 1+2 to -fixes and sort everything else later
in -next.  OK?

cheers,
  Gerd



Re: [PATCH v5 1/3] drm/shmem: add support for per object caching flags.

2020-02-26 Thread Gerd Hoffmann
  Hi,

> > +   if (!shmem->map_cached)
> > +   prot = pgprot_writecombine(prot);
> > shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
> > -   VM_MAP, pgprot_writecombine(PAGE_KERNEL));
> > +   VM_MAP, prot);
> 
> 
> Wouldn't a vmap with pgprot_writecombine() create conflicting mappings with
> the linear kernel map which is not write-combined?

I think so, yes.

> Or do you change the linear kernel map of the shmem pages somewhere?

Haven't seen anything doing so while browsing the code.

> vmap bypassess at least the
> x86 PAT core mapping consistency check and this could potentially cause
> spuriously overwritten memory.

Well, I don't think the linear kernel map is ever used to access the
shmem gem objects.  So while this isn't exactly clean it shouldn't
cause problems in practice.

Suggestions how to fix that?

The reason I need cachable gem object mappings for virtio-gpu is because
we have a inconsistency between host (cached) and guest (wc) otherwise.

> > +   }
> > if (!shmem->vaddr) {
> > DRM_DEBUG_KMS("Failed to vmap pages\n");
> > @@ -540,7 +545,9 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> > }
> > vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
> > -   vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> > +   vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
> > +   if (!shmem->map_cached)
> > +   vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
> 
> Same thing here. Note that vmf_insert_page() which is used by the fault
> handler also bypasses the x86 PAT  consistency check, whereas
> vmf_insert_mixed() doesn't.

vmap + mmap are consistent though, so this likewise shouldn't cause
issues in practice.

> > vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
> 
> At least with SME or SEV encryption, where shmem memory has its kernel map
> set to encrypted, creating conflicting mappings is explicitly disallowed.
> BTW, why is mmap mapping decrypted while vmap isn't?

Ok, that sounds like a real problem.  Have to check.

cheers,
  Gerd

PS: Given we are discussing pre-existing issues in the code I think the
series can be merged nevertheless.



Re: [RFC PATCH 0/8] *** Refactor struct virtgpu_object ***

2020-02-26 Thread Gerd Hoffmann
On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> The main motivation behind this is to eventually have something like this:
> 
> struct virtio_gpu_shmem {
>     struct drm_gem_shmem_object base;
>     uint32_t hw_res_handle;
>     struct sg_table *pages;
>     (...)
> };
> 
> struct virtio_gpu_vram {
>     struct drm_gem_object base;  // or *drm_gem_vram_object
>     uint32_t hw_res_handle;
>     {offset, range};
>     (...)
> };

Given that we probably will not use drm_gem_vram_object and
drm_gem_shmem_object->base is drm_gem_object I think we can
go this route:

struct virtgpu_object {
struct drm_gem_shmem_object base;
enum object_type;
uint32_t hw_res_handle;
[ ... ]
};

struct virtgpu_object_shmem {
struct virtgpu_object base;
struct sg_table *pages;
[ ... ]
};

struct virtgpu_object_hostmem {
struct virtgpu_object base;
{offset, range};
(...)
};

Then have helpers to get virtgpu_object_hostmem / virtgpu_object_shmem
from virtgpu_object via container_of which also sanity-check
object_type (maybe we can check drm_gem_object->ops for that instead of
adding a new field).
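
A sketch of those helpers, using drm_gem_object->funcs as the type
check; the per-type funcs tables are placeholders:

static inline struct virtgpu_object_shmem *
to_virtgpu_shmem(struct virtgpu_object *vobj)
{
        WARN_ON_ONCE(vobj->base.base.funcs != &virtgpu_shmem_funcs);
        return container_of(vobj, struct virtgpu_object_shmem, base);
}

static inline struct virtgpu_object_hostmem *
to_virtgpu_hostmem(struct virtgpu_object *vobj)
{
        WARN_ON_ONCE(vobj->base.base.funcs != &virtgpu_hostmem_funcs);
        return container_of(vobj, struct virtgpu_object_hostmem, base);
}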

> Sending this out to solicit feedback on this approach.  Whichever approach
> we decide on, landing incremental changes to internal structures reduces
> rebasing costs and avoids mega-changes.

That certainly makes sense.

cheers,
  Gerd



Re: [virtio-dev] Re: [PATCH 1/2] virtio: add dma-buf support for exported objects

2020-02-26 Thread Gerd Hoffmann
On Wed, Feb 26, 2020 at 12:56:58PM +0900, David Stevens wrote:
> On Tue, Feb 25, 2020 at 3:10 PM Gerd Hoffmann  wrote:
> >
> > How about dma_buf_{get,set}_uuid, simliar to dma_buf_set_name?
> 
> While I'm not opposed to such an API, I'm also hesitant to make
> changes to the dma-buf API for a single use case.

See virtio-wayland discussion.  I expect we will see more cases show up.
Maybe this should even go one level up, to struct file.

cheers,
  Gerd



Re: [Bug] virtio-gpu broken with qemu/kvm on arm64 on kernel 5.5+

2020-02-26 Thread Gerd Hoffmann
  Hi,

> > ... into 5.6-rc3 fixes the corruption for me.
> 
> I tried those 2 patches on top of kernel 5.6-rc3 and they indeed fix the 
> problem.
> 
> I applied them on top of 5.5.6 and they also fix the problem!
> Could we get those 2 patches applied to stable 5.5, please?

Series just re-posted.  Reviews welcome.  With some luck this makes
5.6-rc4, 5.6-rc5 otherwise, and should hit stable shortly thereafter.

cheers,
  Gerd



[PATCH v5 2/3] drm/virtio: fix mmap page attributes

2020-02-26 Thread Gerd Hoffmann
virtio-gpu uses cached mappings, set
drm_gem_shmem_object.map_cached accordingly.

Cc: sta...@vger.kernel.org
Fixes: c66df701e783 ("drm/virtio: switch from ttm to gem shmem helpers")
Reported-by: Gurchetan Singh 
Reported-by: Guillaume Gardet 
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 3d2a6d489bfc..59319435218f 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -119,6 +119,7 @@ struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
return NULL;
 
bo->base.base.funcs = &virtio_gpu_gem_funcs;
+   bo->base.map_cached = true;
return &bo->base.base;
 }
 
-- 
2.18.2



[PATCH v5 3/3] drm/udl: simplify gem object mapping.

2020-02-26 Thread Gerd Hoffmann
With shmem helpers allowing to update pgprot caching flags via
drm_gem_shmem_object.map_cached we can just use that and ditch
our own implementations of mmap() and vmap().

We also don't need a special case for imported objects, any map
requests are handled by the exporter not udl.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/udl/udl_gem.c | 62 ++-
 1 file changed, 3 insertions(+), 59 deletions(-)

diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c
index b6e26f98aa0a..7e3a88b25b6b 100644
--- a/drivers/gpu/drm/udl/udl_gem.c
+++ b/drivers/gpu/drm/udl/udl_gem.c
@@ -17,72 +17,15 @@
  * GEM object funcs
  */
 
-static int udl_gem_object_mmap(struct drm_gem_object *obj,
-  struct vm_area_struct *vma)
-{
-   int ret;
-
-   ret = drm_gem_shmem_mmap(obj, vma);
-   if (ret)
-   return ret;
-
-   vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
-   if (obj->import_attach)
-   vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
-   vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
-
-   return 0;
-}
-
-static void *udl_gem_object_vmap(struct drm_gem_object *obj)
-{
-   struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-   int ret;
-
-   ret = mutex_lock_interruptible(&shmem->vmap_lock);
-   if (ret)
-   return ERR_PTR(ret);
-
-   if (shmem->vmap_use_count++ > 0)
-   goto out;
-
-   ret = drm_gem_shmem_get_pages(shmem);
-   if (ret)
-   goto err_zero_use;
-
-   if (obj->import_attach)
-   shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
-   else
-   shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
-   VM_MAP, PAGE_KERNEL);
-
-   if (!shmem->vaddr) {
-   DRM_DEBUG_KMS("Failed to vmap pages\n");
-   ret = -ENOMEM;
-   goto err_put_pages;
-   }
-
-out:
-   mutex_unlock(&shmem->vmap_lock);
-   return shmem->vaddr;
-
-err_put_pages:
-   drm_gem_shmem_put_pages(shmem);
-err_zero_use:
-   shmem->vmap_use_count = 0;
-   mutex_unlock(&shmem->vmap_lock);
-   return ERR_PTR(ret);
-}
-
 static const struct drm_gem_object_funcs udl_gem_object_funcs = {
.free = drm_gem_shmem_free_object,
.print_info = drm_gem_shmem_print_info,
.pin = drm_gem_shmem_pin,
.unpin = drm_gem_shmem_unpin,
.get_sg_table = drm_gem_shmem_get_sg_table,
-   .vmap = udl_gem_object_vmap,
+   .vmap = drm_gem_shmem_vmap,
.vunmap = drm_gem_shmem_vunmap,
-   .mmap = udl_gem_object_mmap,
+   .mmap = drm_gem_shmem_mmap,
 };
 
 /*
@@ -101,6 +44,7 @@ struct drm_gem_object *udl_driver_gem_create_object(struct drm_device *dev,
 
obj = &shmem->base;
obj->funcs = &udl_gem_object_funcs;
+   shmem->map_cached = true;
 
return obj;
 }
-- 
2.18.2



[PATCH v5 1/3] drm/shmem: add support for per object caching flags.

2020-02-26 Thread Gerd Hoffmann
Add map_cached bool to drm_gem_shmem_object, to request cached mappings
on a per-object base.  Check the flag before adding writecombine to
pgprot bits.

Cc: sta...@vger.kernel.org
Signed-off-by: Gerd Hoffmann 
---
 include/drm/drm_gem_shmem_helper.h |  5 +
 drivers/gpu/drm/drm_gem_shmem_helper.c | 15 +++
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index e34a7b7f848a..294b2931c4cc 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -96,6 +96,11 @@ struct drm_gem_shmem_object {
 * The address are un-mapped when the count reaches zero.
 */
unsigned int vmap_use_count;
+
+   /**
+* @map_cached: map object cached (instead of using writecombine).
+*/
+   bool map_cached;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index a421a2eed48a..aad9324dcf4f 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -254,11 +254,16 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
if (ret)
goto err_zero_use;
 
-   if (obj->import_attach)
+   if (obj->import_attach) {
shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
-   else
+   } else {
+   pgprot_t prot = PAGE_KERNEL;
+
+   if (!shmem->map_cached)
+   prot = pgprot_writecombine(prot);
shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
-   VM_MAP, pgprot_writecombine(PAGE_KERNEL));
+   VM_MAP, prot);
+   }
 
if (!shmem->vaddr) {
DRM_DEBUG_KMS("Failed to vmap pages\n");
@@ -540,7 +545,9 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
}
 
vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
-   vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+   vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+   if (!shmem->map_cached)
+   vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
vma->vm_ops = &drm_gem_shmem_vm_ops;
 
-- 
2.18.2



[PATCH v5 0/3] drm/virtio: fix mmap page attributes

2020-02-26 Thread Gerd Hoffmann
v5: rebase, add tags, add cc stable for 1+2, no code changes.
v4: back to v2-ish design, but simplified a bit.
v3: switch to drm_gem_object_funcs callback.
v2: make shmem helper caching configurable.

Gerd Hoffmann (3):
  drm/shmem: add support for per object caching flags.
  drm/virtio: fix mmap page attributes
  drm/udl: simplify gem object mapping.

 include/drm/drm_gem_shmem_helper.h  |  5 ++
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 15 --
 drivers/gpu/drm/udl/udl_gem.c   | 62 ++---
 drivers/gpu/drm/virtio/virtgpu_object.c |  1 +
 4 files changed, 20 insertions(+), 63 deletions(-)

-- 
2.18.2



Re: RFC: drm/virtio: Dummy virtio GPU

2020-02-25 Thread Gerd Hoffmann
> > No.
> >
> > First, what is wrong with vkms?
> The vkms driver is 1.2k+ lines in total.

Which is small for a drm driver.

> I think it doesn't work on its own to provide things like mmap on a
> prime fd? (I tried it months ago).

Seems vkms only supports prime import, not prime export.
Maybe vgem works for you?

> And more, my "dummy virtio"
> device actually doesn't really depend on the drm system so it's easier to
> backport to old kernels.

It depends on the virtio driver, which in turn *does* depend on the drm
system.  So you have an indirect instead of a direct dependency.  I still
don't see the point ...

> > Second, if you really want something simple with the minimal set of drm
> > features possible you can write a rather small but still self-contained
> > driver when using all the drm helpers (shmem, simple display pipe) we
> > have these days.  Copy cirrus, strip it down: drop modesetting code,
> > drop blit-from-shmem-to-vram code, drop pci probing.  Maybe add module
> > options for max/default video mode.  Done.
> I need features like prime export/import, mmap on prime fd etc.

The shmem helpers support that just fine.

> And I'd like the code to work on different kernel versions. So going
> this way would actually add more maintenance cost in the long term?

Depends on which kernel version you need.  Backporting to 5.4-lts should
be easy, 4.4-lts lacks a lot of the drm helpers though.

> > What is the use case btw?
> We have images that work well under qemu + virtio vga, we'd like to run
> these images on public cloud services like Google GCE directly.

What do these images need a drm device for?
It seems not to be hardware-accelerated rendering.
It also seems not to be a virtual display.

cheers,
  Gerd



Re: [Bug] virtio-gpu broken with qemu/kvm on arm64 on kernel 5.5+

2020-02-25 Thread Gerd Hoffmann
  Hi,

> > Perhaps try the entire series?
> > 
> > https://patchwork.kernel.org/cover/11300619/

Latest version is at:
   https://git.kraxel.org/cgit/linux/log/?h=drm-virtio-no-wc

> Applied entire series on top of 5.5.6, but still the same problem.

Can you double-check?  Cherry-picking the shmem + virtio patches ...

  694c824491f1 drm/shmem: add support for per object caching flags.
  3f4d10f853fb drm/virtio: fix mmap page attributes

... into 5.6-rc3 fixes the corruption for me.

cheers,
  Gerd



Re: RFC: drm/virtio: Dummy virtio GPU

2020-02-25 Thread Gerd Hoffmann
On Mon, Feb 24, 2020 at 03:01:54PM -0800, Lepton Wu wrote:
> Hi,
> 
> I'd like to get comments on this before I polish it. This is a
> simple way to get something similar to vkms but it heavily reuses
> the code provided by virtio-gpu. Please feel free to give me any
> feedback or comments.

No.

First, what is wrong with vkms?

Second, if you really want something simple with the minimal set of drm
features possible you can write a rather small but still self-contained
driver when using all the drm helpers (shmem, simple display pipe) we
have these days.  Copy cirrus, strip it down: drop modesetting code,
drop blit-from-shmem-to-vram code, drop pci probing.  Maybe add module
options for max/default video mode.  Done.

What is the use case btw?

cheers,
  Gerd



Re: [PATCH 2/2] drm/virtio: Support virtgpu exported resources

2020-02-24 Thread Gerd Hoffmann
  Hi,

> +struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
> +  int flags)
> +{
[ ... ]
> +}
> +
> +struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
> + struct dma_buf *buf)
> +{
[ ... ]
> +}

More code duplication.

> diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
> index 0c85914d9369..9c428ef03060 100644
> --- a/include/uapi/linux/virtio_gpu.h
> +++ b/include/uapi/linux/virtio_gpu.h

API change should go to a separate patch.

> +/*
> + * VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID
> + */
> +#define VIRTIO_GPU_F_CROSS_DEVICE 2

Hmm, how about VIRTIO_GPU_F_RESOURCE_UUID ?

> @@ -87,6 +92,7 @@ enum virtio_gpu_ctrl_type {
>   VIRTIO_GPU_RESP_OK_CAPSET_INFO,
>   VIRTIO_GPU_RESP_OK_CAPSET,
>   VIRTIO_GPU_RESP_OK_EDID,
> + VIRTIO_GPU_RESP_OK_RESOURCE_ASSIGN_UUID,

The "assign" doesn't make sense in the reply.  I'd name that
VIRTIO_GPU_RESP_OK_RESOURCE_UUID or just VIRTIO_GPU_RESP_OK_UUID.

> +/* VIRTIO_GPU_RESP_OK_RESOURCE_ASSIGN_UUID */
> +struct virtio_gpu_resp_resource_assign_uuid {
> + struct virtio_gpu_ctrl_hdr hdr;
> + __u8 uuid[16];
> +};

Same here.

cheers,
  Gerd



Re: [PATCH 1/2] virtio: add dma-buf support for exported objects

2020-02-24 Thread Gerd Hoffmann
On Wed, Feb 19, 2020 at 05:06:36PM +0900, David Stevens wrote:
> This change adds a new flavor of dma-bufs that can be used by virtio
> drivers to share exported objects. A virtio dma-buf can be queried by
> virtio drivers to obtain the UUID which identifies the underlying
> exported object.

That duplicates a bunch of code from dma-buf.c in virtio_dma_buf.c.

How about dma_buf_{get,set}_uuid, similar to dma_buf_set_name?
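
i.e. something along these lines (a sketch by analogy with
dma_buf_set_name, exact naming and placement to be settled):

int dma_buf_set_uuid(struct dma_buf *dmabuf, const uuid_t *uuid);
int dma_buf_get_uuid(struct dma_buf *dmabuf, uuid_t *uuid);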

cheers,
  Gerd


