What happened to basic prime support of bochs driver?

2023-01-27 Thread lepton
Hi Gerd,

It seems there is no PRIME support for the bochs-drm driver in the
latest kernel. I've found an old CL of yours which adds basic PRIME
support to it:

https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1893205.html

Do you remember why it didn't go through in the end?

Thanks!


Re: Why we didn't use embedded gem object for virtio gpu when making ttm bo a gem bo subclass?

2021-08-15 Thread lepton
Hi Gerd,

Thanks for your reply. I was aware of that change, but I need a fix for
the 5.4 kernel as a temporary solution for now.
If the reason is just that you were about to move away from ttm, then I
guess a CL like http://crrev.com/c/3092457 should
work for 5.4; I just hope I'm not missing anything else.

Thanks!

On Sun, Aug 15, 2021 at 9:46 PM Gerd Hoffmann  wrote:
>
> On Fri, Aug 13, 2021 at 12:42:51PM -0700, lepton wrote:
> > Hi Gerd,
> >
> > We found a bug in the 5.4 kernel: virtgpu_gem_prime_mmap doesn't work
> > because it references vma_node in the gem_base object while the ttm code
> > initialized vma_node in the tbo.base object. I am wondering: in your
> > original series
> > https://patchwork.kernel.org/project/dri-devel/cover/20190805124310.3275-1-kra...@redhat.com/
> > (drm/ttm: make ttm bo a gem bo subclass), why did you switch to the
> > embedded gem object for most GPU drivers but skip virtio gpu? Is
> > there a specific reason?
>
> commit c66df701e783bc666593e6e665f13670760883ee
> Author: Gerd Hoffmann 
> Date:   Thu Aug 29 12:32:57 2019 +0200
>
> drm/virtio: switch from ttm to gem shmem helpers
>
> HTH,
>   Gerd
>


Why we didn't use embedded gem object for virtio gpu when making ttm bo a gem bo subclass?

2021-08-14 Thread lepton
Hi Gerd,

We found a bug in the 5.4 kernel: virtgpu_gem_prime_mmap doesn't work
because it references vma_node in the gem_base object while the ttm code
initialized vma_node in the tbo.base object. I am wondering: in your
original series
https://patchwork.kernel.org/project/dri-devel/cover/20190805124310.3275-1-kra...@redhat.com/
(drm/ttm: make ttm bo a gem bo subclass), why did you switch to the
embedded gem object for most GPU drivers but skip virtio gpu? Is
there a specific reason?

I am thinking of a CL like this (http://crrev.com/c/3092457) to fix
it and am not sure if I missed something.

Thanks for your help!


Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread lepton
On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  wrote:
>
> If we assign obj->filp, we believe that the created vgem bo is native and
> allow direct operations like mmap() assuming it behaves as backed by a
> shmemfs inode. When imported from a dmabuf, the obj->pages are
> not always meaningful and the shmemfs backing store misleading.
>
> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> and that rejects attempts to mmap an imported dmabuf,
>
> drm_gem_dumb_map_offset():
> if (obj->import_attach) return -EINVAL;
>
> So the only route by which we might accidentally allow mmapping of an
> imported buffer is via vgem_prime_mmap(), which checked for
> obj->filp assuming that it would be NULL.
>
> Well it would had it been updated to use the common
> drm_gem_dumb_map_offset() helper, instead it has
>
> vgem_gem_dumb_map():
> if (!obj->filp) return -EINVAL;
>
> falling foul of the same trap as above.
>
> Reported-by: Lepton Wu 
> Fixes: af33a9190d02 ("drm/vgem: Enable dmabuf import interfaces")
> Signed-off-by: Chris Wilson 
> Cc: Lepton Wu 
> Cc: Daniel Vetter 
> Cc: Christian König 
> Cc: Thomas Hellström (Intel) 
> Cc:  # v4.13+
> ---
>  drivers/gpu/drm/vgem/vgem_drv.c | 27 +--
>  1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index 909eba43664a..eb3b7cdac941 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -91,7 +91,7 @@ static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
> ret = 0;
> }
> mutex_unlock(&obj->pages_lock);
> -   if (ret) {
> +   if (ret && obj->base.filp) {
> struct page *page;
>
> page = shmem_read_mapping_page(
> @@ -157,7 +157,8 @@ static void vgem_postclose(struct drm_device *dev, struct 
> drm_file *file)
>  }
>
>  static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> -   unsigned long size)
> +struct file *shmem,
> +unsigned long size)
>  {
> struct drm_vgem_gem_object *obj;
> int ret;
Remove this `ret`; it's not used any more.
> @@ -166,11 +167,8 @@ static struct drm_vgem_gem_object 
> *__vgem_gem_create(struct drm_device *dev,
> if (!obj)
> return ERR_PTR(-ENOMEM);
>
> -   ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
> -   if (ret) {
> -   kfree(obj);
> -   return ERR_PTR(ret);
> -   }
> +   drm_gem_private_object_init(dev, &obj->base, size);
> +   obj->base.filp = shmem;
>
> mutex_init(&obj->pages_lock);
>
> @@ -189,11 +187,20 @@ static struct drm_gem_object *vgem_gem_create(struct 
> drm_device *dev,
>   unsigned long size)
>  {
> struct drm_vgem_gem_object *obj;
> +   struct file *shmem;
> int ret;
>
> -   obj = __vgem_gem_create(dev, size);
> -   if (IS_ERR(obj))
> +   size = roundup(size, PAGE_SIZE);
> +
> +   shmem = shmem_file_setup(DRIVER_NAME, size, VM_NORESERVE);
> +   if (IS_ERR(shmem))
> +   return ERR_CAST(shmem);
> +
> +   obj = __vgem_gem_create(dev, shmem, size);
> +   if (IS_ERR(obj)) {
> +   fput(shmem);
> return ERR_CAST(obj);
> +   }
>
> ret = drm_gem_handle_create(file, &obj->base, handle);
> if (ret) {
> @@ -363,7 +370,7 @@ static struct drm_gem_object 
> *vgem_prime_import_sg_table(struct drm_device *dev,
> struct drm_vgem_gem_object *obj;
> int npages;
>
> -   obj = __vgem_gem_create(dev, attach->dmabuf->size);
> +   obj = __vgem_gem_create(dev, NULL, attach->dmabuf->size);
> if (IS_ERR(obj))
> return ERR_CAST(obj);
>
> --
> 2.27.0
>
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread lepton
On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson  wrote:
>
> Quoting lepton (2020-07-07 18:05:21)
> > On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  
> > wrote:
> > >
> > > If we assign obj->filp, we believe that the created vgem bo is native and
> > > allow direct operations like mmap() assuming it behaves as backed by a
> > > shmemfs inode. When imported from a dmabuf, the obj->pages are
> > > not always meaningful and the shmemfs backing store misleading.
> > >
> > > Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> > > and that rejects attempts to mmap an imported dmabuf,
> > What do you mean by "regular mmap access" here?  It looks like vgem is
> > using vgem_gem_dumb_map as the .dumb_map_offset callback, so it doesn't call
> > drm_gem_dumb_map_offset.
>
> As I too found out, and so had to correct my story telling.
>
> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
> igt to see how it was being used.
Now it seems your fix is to disable "regular mmap" on imported dma-bufs
for vgem. I am not really a graphics guy, but then the API looks like
this: for a gem handle, user space has to guess which way to mmap it.
If user space guesses wrong, the mmap will fail. Is this the expected
way for people to handle GPU buffers?
> -Chris


Re: [PATCH 1/2] drm/vgem: Do not allocate backing shmemfs file for an import dmabuf object

2020-07-08 Thread lepton
On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson  wrote:
>
> If we assign obj->filp, we believe that the created vgem bo is native and
> allow direct operations like mmap() assuming it behaves as backed by a
> shmemfs inode. When imported from a dmabuf, the obj->pages are
> not always meaningful and the shmemfs backing store misleading.
>
> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> and that rejects attempts to mmap an imported dmabuf,
What do you mean by "regular mmap access" here?  It looks like vgem is
using vgem_gem_dumb_map as the .dumb_map_offset callback, so it doesn't call
drm_gem_dumb_map_offset.
>
> drm_gem_dumb_map_offset():
> if (obj->import_attach) return -EINVAL;
>
> So the only route by which we might accidentally allow mmapping of an
> imported buffer is via vgem_prime_mmap(), which checked for
> obj->filp assuming that it would be NULL.
>
> Well it would had it been updated to use the common
> drm_gem_dumb_map_offset() helper, instead it has
>
> vgem_gem_dumb_map():
> if (!obj->filp) return -EINVAL;
>
> falling foul of the same trap as above.
>
> Reported-by: Lepton Wu 
> Fixes: af33a9190d02 ("drm/vgem: Enable dmabuf import interfaces")
> Signed-off-by: Chris Wilson 
> Cc: Lepton Wu 
> Cc: Daniel Vetter 
> Cc: Christian König 
> Cc: Thomas Hellström (Intel) 
> Cc:  # v4.13+
> ---
>  drivers/gpu/drm/vgem/vgem_drv.c | 27 +--
>  1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index 909eba43664a..eb3b7cdac941 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -91,7 +91,7 @@ static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
> ret = 0;
> }
> mutex_unlock(&obj->pages_lock);
> -   if (ret) {
> +   if (ret && obj->base.filp) {
> struct page *page;
>
> page = shmem_read_mapping_page(
> @@ -157,7 +157,8 @@ static void vgem_postclose(struct drm_device *dev, struct 
> drm_file *file)
>  }
>
>  static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> -   unsigned long size)
> +struct file *shmem,
> +unsigned long size)
>  {
> struct drm_vgem_gem_object *obj;
> int ret;
> @@ -166,11 +167,8 @@ static struct drm_vgem_gem_object 
> *__vgem_gem_create(struct drm_device *dev,
> if (!obj)
> return ERR_PTR(-ENOMEM);
>
> -   ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
> -   if (ret) {
> -   kfree(obj);
> -   return ERR_PTR(ret);
> -   }
> +   drm_gem_private_object_init(dev, &obj->base, size);
> +   obj->base.filp = shmem;
>
> mutex_init(&obj->pages_lock);
>
> @@ -189,11 +187,20 @@ static struct drm_gem_object *vgem_gem_create(struct 
> drm_device *dev,
>   unsigned long size)
>  {
> struct drm_vgem_gem_object *obj;
> +   struct file *shmem;
> int ret;
>
> -   obj = __vgem_gem_create(dev, size);
> -   if (IS_ERR(obj))
> +   size = roundup(size, PAGE_SIZE);
> +
> +   shmem = shmem_file_setup(DRIVER_NAME, size, VM_NORESERVE);
> +   if (IS_ERR(shmem))
> +   return ERR_CAST(shmem);
> +
> +   obj = __vgem_gem_create(dev, shmem, size);
> +   if (IS_ERR(obj)) {
> +   fput(shmem);
> return ERR_CAST(obj);
> +   }
>
> ret = drm_gem_handle_create(file, &obj->base, handle);
> if (ret) {
> @@ -363,7 +370,7 @@ static struct drm_gem_object 
> *vgem_prime_import_sg_table(struct drm_device *dev,
> struct drm_vgem_gem_object *obj;
> int npages;
>
> -   obj = __vgem_gem_create(dev, attach->dmabuf->size);
> +   obj = __vgem_gem_create(dev, NULL, attach->dmabuf->size);
> if (IS_ERR(obj))
> return ERR_CAST(obj);
>
> --
> 2.27.0
>


[RFC] drm/vgem: Don't use get_page in fault handler.

2020-07-07 Thread Lepton Wu
For pages which are allocated in ttm with transparent huge pages,
tail pages have a reference count of zero. The current vgem code calls
get_page on them, which triggers a BUG when release_pages is later
called on those pages.

Below is minimal code to trigger this bug on a qemu VM which
enables virtio gpu (card1) and vgem (card0). BTW, since upstream
virtio gpu has switched to drm gem and moved away from ttm, we
have to use an old kernel like 5.4 to reproduce it. But I guess the
same thing could happen for a real GPU if it uses a similar code
path to allocate pages from ttm. I am not sure about two things: first, does
this need to be fixed at all? Will a real GPU hit this code path? Second,
supposing it does need to be fixed, should that happen on the ttm side or the
vgem side? If we remove
"huge_flags &= ~__GFP_COMP" from ttm_get_pages, then get_page in vgem works
fine. But that line is there to fix another bug:
https://bugs.freedesktop.org/show_bug.cgi?id=103138
It could also be "fixed" with this patch. But I am really not sure if this is
the way to go.

Here is the code to reproduce this bug:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>
#include <drm/drm_mode.h>

unsigned int WIDTH = 1024;
unsigned int HEIGHT = 513;
unsigned int size = WIDTH * HEIGHT * 4;

int work(int vfd, int dfd, int handle) {
        int ret;
        struct drm_prime_handle hf = {.handle = handle };
        ret = ioctl(dfd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &hf);
        fprintf(stderr, "fd is %d\n", hf.fd);
        hf.flags = DRM_CLOEXEC | DRM_RDWR;
        ret = ioctl(vfd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &hf);
        fprintf(stderr, "new handle is %d\n", hf.handle);
        struct drm_mode_map_dumb map = {.handle = hf.handle };
        ret = ioctl(vfd, DRM_IOCTL_MODE_MAP_DUMB, &map);
        fprintf(stderr, "need map at offset %lld\n", map.offset);
        unsigned char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, vfd, map.offset);
        memset(ptr, 2, size);
        munmap(ptr, size);
        return ret;
}

int main()
{
        int vfd = open("/dev/dri/card0", O_RDWR); // vgem
        int dfd = open("/dev/dri/card1", O_RDWR); // virtio gpu

        int ret;
        struct drm_mode_create_dumb ct = {};

        ct.height = HEIGHT;
        ct.width = WIDTH;
        ct.bpp = 32;
        ret = ioctl(dfd, DRM_IOCTL_MODE_CREATE_DUMB, &ct);
        work(vfd, dfd, ct.handle);
        fprintf(stderr, "done\n");
        return 0;
}

Signed-off-by: Lepton Wu 
---
 drivers/gpu/drm/vgem/vgem_drv.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index ec1a8ebb6f1b..be3d97e29804 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -87,9 +87,8 @@ static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
 
mutex_lock(&obj->pages_lock);
if (obj->pages) {
-   get_page(obj->pages[page_offset]);
-   vmf->page = obj->pages[page_offset];
-   ret = 0;
+   ret = vmf_insert_pfn(vmf->vma, vmf->address,
+page_to_pfn(obj->pages[page_offset]));
}
mutex_unlock(&obj->pages_lock);
if (ret) {
@@ -263,7 +262,6 @@ static struct drm_ioctl_desc vgem_ioctls[] = {
 
 static int vgem_mmap(struct file *filp, struct vm_area_struct *vma)
 {
-   unsigned long flags = vma->vm_flags;
int ret;
 
ret = drm_gem_mmap(filp, vma);
@@ -273,7 +271,6 @@ static int vgem_mmap(struct file *filp, struct vm_area_struct *vma)
/* Keep the WC mmaping set by drm_gem_mmap() but our pages
 * are ordinary and not special.
 */
-   vma->vm_flags = flags | VM_DONTEXPAND | VM_DONTDUMP;
return 0;
 }
 
-- 
2.27.0.212.ge8ba1cc988-goog



Re: RFC: drm/virtio: Dummy virtio GPU

2020-04-15 Thread lepton
On Fri, Mar 27, 2020 at 1:20 AM Gerd Hoffmann  wrote:
>
> > > Hmm, yes, I can see loopback virtio being useful for various cases.
> > > Testing being one.  A dummy virtio-gpu could be done too, or a more
> > > advanced version which exports the display as vnc.
> > So what's your suggestion on this? Changing this to the drivers/virtio dir
> > and then adding some user space API?  I am thinking about introducing a new
> > /dev/dummy-virtio; user space can then open it, use ioctls to create new
> > virtio hardware, and handle the virtio traffic with read/write to simulate
> > different virtio hardware.
>
> Yep, sounds useful.  You might want to check out the uinput driver which
> does something similar for input devices.  I wouldn't name this dummy;
> maybe uvirtio.
>
> cheers,
>   Gerd
>
Adding the mailing list back so other people can follow.


Re: RFC: drm/virtio: Dummy virtio GPU

2020-02-26 Thread lepton
On Tue, Feb 25, 2020 at 2:29 AM Gerd Hoffmann  wrote:
>
> On Mon, Feb 24, 2020 at 03:01:54PM -0800, Lepton Wu wrote:
> > Hi,
> >
> > I'd like to get comments on this before I polish it. This is a
> > simple way to get something similar to vkms, but it heavily reuses
> > the code provided by virtio-gpu. Please feel free to give me any
> > feedback or comments.
>
> No.
>
> First, what is wrong with vkms?
The vkms driver is 1.2k+ lines, and I don't think it works by itself
to provide things like mmap on a prime fd (I tried it months ago). Of
course, that could be fixed, but it would bring in more code. My "dummy
virtio" code, on the other hand, is only around 100 lines. What's more,
the "dummy virtio" device doesn't really depend on the drm subsystem,
so it's easier to back-port to old kernels.

>
>
> Second, if you really want something simple with the minimal set of drm
> features possible you can write a rather small but still self-contained
> driver when using all the drm helpers (shmem, simple display pipe) we
> have these days.  Copy cirrus, strip it down: drop modesetting code,
> drop blit-from-shmem-to-vram code, drop pci probing.  Maybe add module
> options for max/default video mode.  Done.
I need features like prime export/import, mmap on a prime fd, etc. And
I'd like the code to work across different kernel versions. So going
that way would actually add more maintenance cost in the long term,
since any change in the drm framework could require changes to it.
>
> What is the use case btw?
We have images that work well under qemu + virtio vga, and we'd like to
run these images on public cloud services like Google GCE directly. So
I had the idea that we could somehow run the virtio stack without VMM
support. That would actually help, and would also let the same image
run on other cloud services.
>
> cheers,
>   Gerd
>


[PATCH] RFC: drm/virtio: Dummy virtio GPU

2020-02-25 Thread Lepton Wu
The idea here is: if we run the VM headless, we don't really need to
communicate with the VMM, and we don't even need any VMM support
for virtio-gpu. Of course, only 2D works, but that's enough for some
use cases. And this looks simpler than vkms.

Signed-off-by: Lepton Wu 
---
 drivers/gpu/drm/virtio/Kconfig |   9 ++
 drivers/gpu/drm/virtio/Makefile|   3 +
 drivers/gpu/drm/virtio/virtgpu_dummy.c | 161 +
 3 files changed, 173 insertions(+)
 create mode 100644 drivers/gpu/drm/virtio/virtgpu_dummy.c

diff --git a/drivers/gpu/drm/virtio/Kconfig b/drivers/gpu/drm/virtio/Kconfig
index eff3047052d4..9c18aace38ed 100644
--- a/drivers/gpu/drm/virtio/Kconfig
+++ b/drivers/gpu/drm/virtio/Kconfig
@@ -9,3 +9,12 @@ config DRM_VIRTIO_GPU
   QEMU based VMMs (like KVM or Xen).
 
   If unsure say M.
+
+config DRM_VIRTIO_GPU_DUMMY
+   tristate "Virtio dummy GPU driver"
+   depends on DRM_VIRTIO_GPU
+   help
+  This adds a new virtio GPU device which handles the virtio ring buffers
+  inline, so it doesn't rely on a VMM to provide the virtio GPU device.
+  Currently it only handles VIRTIO_GPU_CMD_GET_DISPLAY_INFO, which is
+  enough for a dummy 2D VGA device.
diff --git a/drivers/gpu/drm/virtio/Makefile b/drivers/gpu/drm/virtio/Makefile
index 92aa2b3d349d..26d8fee1bc41 100644
--- a/drivers/gpu/drm/virtio/Makefile
+++ b/drivers/gpu/drm/virtio/Makefile
@@ -8,4 +8,7 @@ virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o \
virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \
virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o
 
+virtio-gpu-dummy-y := virtgpu_dummy.o
+
 obj-$(CONFIG_DRM_VIRTIO_GPU) += virtio-gpu.o
+obj-$(CONFIG_DRM_VIRTIO_GPU_DUMMY) += virtio-gpu-dummy.o
diff --git a/drivers/gpu/drm/virtio/virtgpu_dummy.c b/drivers/gpu/drm/virtio/virtgpu_dummy.c
new file mode 100644
index ..8c2eb6fea47c
--- /dev/null
+++ b/drivers/gpu/drm/virtio/virtgpu_dummy.c
@@ -0,0 +1,161 @@
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "virtgpu_drv.h"
+
+static int virtgpu_dummy_width = 1024;
+static int virtgpu_dummy_height = 768;
+
+MODULE_PARM_DESC(width, "Dummy VGA width");
+module_param_named(width, virtgpu_dummy_width, int, 0400);
+MODULE_PARM_DESC(height, "Dummy VGA height");
+module_param_named(height, virtgpu_dummy_height, int, 0400);
+
+static struct bus_type dummy_bus = {
+   .name = "",
+};
+
+static struct dummy_gpu {
+   struct device *root;
+   struct virtio_device vdev;
+   unsigned char status;
+} dummy;
+
+static u64 dummy_get_features(struct virtio_device *vdev)
+{
+   return 1ULL << VIRTIO_F_VERSION_1;
+}
+
+static int dummy_finalize_features(struct virtio_device *vdev)
+{
+   return 0;
+}
+
+static void dummy_get(struct virtio_device *vdev, unsigned int offset,
+ void *buf, unsigned len)
+{
+   static struct virtio_gpu_config config = {
+   .num_scanouts = 1,
+   };
+   BUG_ON(offset + len > sizeof(config));
+   memcpy(buf, (char *)&config + offset, len);
+}
+
+static u8 dummy_get_status(struct virtio_device *vdev)
+{
+   struct dummy_gpu*  gpu = container_of(vdev, struct dummy_gpu, vdev);
+   return gpu->status;
+}
+
+static void dummy_set_status(struct virtio_device *vdev, u8 status)
+{
+   struct dummy_gpu*  gpu = container_of(vdev, struct dummy_gpu, vdev);
+   BUG_ON(!status);
+   gpu->status = status;
+}
+
+void process_cmd(struct vring_desc *desc, int idx)
+{
+   // FIXME, use chain to get resp buffer addr
+   char *buf = __va(desc[idx].addr);
+   struct virtio_gpu_vbuffer *vbuf =
+   (struct virtio_gpu_vbuffer *)(buf - sizeof(*vbuf));
+   struct virtio_gpu_ctrl_hdr *cmd_p = (struct virtio_gpu_ctrl_hdr *)buf;
+   struct virtio_gpu_resp_display_info *resp;
+   BUG_ON(vbuf->buf != buf);
+   if (cmd_p->type != cpu_to_le32(VIRTIO_GPU_CMD_GET_DISPLAY_INFO))
+   return;
+   BUG_ON(vbuf->resp_size != sizeof(struct virtio_gpu_resp_display_info));
+   resp = (struct virtio_gpu_resp_display_info *)vbuf->resp_buf;
+   resp->pmodes[0].r.width = virtgpu_dummy_width;
+   resp->pmodes[0].r.height = virtgpu_dummy_height;
+   resp->pmodes[0].enabled = 1;
+}
+
+static bool dummy_notify(struct virtqueue *vq)
+{
+   struct vring *r = (struct vring *)(vq + 1);
+   int used, avail;
+   // FIXME, handle multiple avail and also fix for big endian.
+   used = r->used->idx & (r->num - 1);
+   avail = (r->avail->idx - 1) & (r->num - 1);
+   r->used->ring[used].id = r->avail->ring[avail];
+   r->used->idx++;
+   if (!strcmp(vq->name, "control"))
+   process_cmd(r->desc, r->avail->ring[avail]);
+   vq->ca

RFC: drm/virtio: Dummy virtio GPU

2020-02-25 Thread Lepton Wu
Hi,

I'd like to get comments on this before I polish it. This is a
simple way to get something similar to vkms, but it heavily reuses
the code provided by virtio-gpu. Please feel free to give me any
feedback or comments.

Thanks!




[PATCH] drm/ttm: let ttm_bo_wait timeout be configurable

2019-09-05 Thread Lepton Wu
When running dEQP against the virgl driver, it turns out the default
15-second timeout for ttm_bo_wait is not big enough for
GLES31.functional.ssbo.layout.random.nested_structs_arrays_instance_arrays.22.
Change it to a configurable value so we can tune it until virgl
performance improves.

Signed-off-by: Lepton Wu 
---
 drivers/gpu/drm/Kconfig  | 2 ++
 drivers/gpu/drm/ttm/Kconfig  | 7 +++
 drivers/gpu/drm/ttm/ttm_bo.c | 2 +-
 3 files changed, 10 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/ttm/Kconfig

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index bd943a71756c..432054012fa1 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -160,6 +160,8 @@ config DRM_TTM
  GPU memory types. Will be enabled automatically if a device driver
  uses it.
 
+source "drivers/gpu/drm/ttm/Kconfig"
+
 config DRM_GEM_CMA_HELPER
bool
depends on DRM
diff --git a/drivers/gpu/drm/ttm/Kconfig b/drivers/gpu/drm/ttm/Kconfig
new file mode 100644
index ..c7953271c59b
--- /dev/null
+++ b/drivers/gpu/drm/ttm/Kconfig
@@ -0,0 +1,7 @@
+config DRM_TTM_BO_WAIT_TIMEOUT
+   int "Default timeout for ttm bo wait (in seconds)"
+   depends on DRM_TTM
+   default 15
+   help
+ This option controls the default timeout (in seconds) used in
+ ttm_bo_wait
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 3f56647cdb35..fb6991811ede 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1709,7 +1709,7 @@ EXPORT_SYMBOL(ttm_bo_unmap_virtual);
 int ttm_bo_wait(struct ttm_buffer_object *bo,
bool interruptible, bool no_wait)
 {
-   long timeout = 15 * HZ;
+   long timeout = CONFIG_DRM_TTM_BO_WAIT_TIMEOUT * HZ;
 
if (no_wait) {
if (reservation_object_test_signaled_rcu(bo->resv, true))
-- 
2.23.0.187.g17f5b7556c-goog


Re: [PATCH] drm/virtio: add create_handle support.

2017-11-14 Thread lepton
Ping.

On Wed, Nov 8, 2017 at 10:42 AM, Lepton Wu <ytht@gmail.com> wrote:
> Add create_handle support to virtio fb. Without this, the screenshot tool
> in Chromium OS can't work.
>
> Signed-off-by: Lepton Wu <ytht@gmail.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_display.c | 12 
>  1 file changed, 12 insertions(+)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c 
> b/drivers/gpu/drm/virtio/virtgpu_display.c
> index b6d52055a11f..274b4206ca96 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_display.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_display.c
> @@ -71,7 +71,19 @@ virtio_gpu_framebuffer_surface_dirty(struct 
> drm_framebuffer *fb,
> return virtio_gpu_surface_dirty(virtio_gpu_fb, clips, num_clips);
>  }
>
> +static int
> +virtio_gpu_framebuffer_create_handle(struct drm_framebuffer *fb,
> +struct drm_file *file_priv,
> +unsigned int *handle)
> +{
> +   struct virtio_gpu_framebuffer *virtio_gpu_fb =
> +   to_virtio_gpu_framebuffer(fb);
> +
> +   return drm_gem_handle_create(file_priv, virtio_gpu_fb->obj, handle);
> +}
> +
>  static const struct drm_framebuffer_funcs virtio_gpu_fb_funcs = {
> +   .create_handle = virtio_gpu_framebuffer_create_handle,
> .destroy = virtio_gpu_user_framebuffer_destroy,
> .dirty = virtio_gpu_framebuffer_surface_dirty,
>  };
> --
> 2.15.0.403.gc27cc4dac6-goog
>


[PATCH] drm/virtio: add create_handle support.

2017-11-09 Thread Lepton Wu
Add create_handle support to virtio fb. Without this, the screenshot tool
in Chromium OS can't work.

Signed-off-by: Lepton Wu <ytht@gmail.com>
---
 drivers/gpu/drm/virtio/virtgpu_display.c | 12 
 1 file changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
index b6d52055a11f..274b4206ca96 100644
--- a/drivers/gpu/drm/virtio/virtgpu_display.c
+++ b/drivers/gpu/drm/virtio/virtgpu_display.c
@@ -71,7 +71,19 @@ virtio_gpu_framebuffer_surface_dirty(struct drm_framebuffer *fb,
return virtio_gpu_surface_dirty(virtio_gpu_fb, clips, num_clips);
 }
 
+static int
+virtio_gpu_framebuffer_create_handle(struct drm_framebuffer *fb,
+struct drm_file *file_priv,
+unsigned int *handle)
+{
+   struct virtio_gpu_framebuffer *virtio_gpu_fb =
+   to_virtio_gpu_framebuffer(fb);
+
+   return drm_gem_handle_create(file_priv, virtio_gpu_fb->obj, handle);
+}
+
 static const struct drm_framebuffer_funcs virtio_gpu_fb_funcs = {
+   .create_handle = virtio_gpu_framebuffer_create_handle,
.destroy = virtio_gpu_user_framebuffer_destroy,
.dirty = virtio_gpu_framebuffer_surface_dirty,
 };
-- 
2.15.0.403.gc27cc4dac6-goog



[PATCH] drm/cirrus drm/virtio: add create_handle support.

2017-11-08 Thread Lepton Wu
Add create_handle support to the cirrus and virtio fb drivers, which
are used in virtual machines. Without this, the screenshot tool in
Chromium OS can't work.

Signed-off-by: Lepton Wu <ytht@gmail.com>
---
 drivers/gpu/drm/cirrus/cirrus_main.c |  9 +
 drivers/gpu/drm/virtio/virtgpu_display.c | 12 
 2 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/cirrus/cirrus_main.c b/drivers/gpu/drm/cirrus/cirrus_main.c
index b5f528543956..26df1e8cd490 100644
--- a/drivers/gpu/drm/cirrus/cirrus_main.c
+++ b/drivers/gpu/drm/cirrus/cirrus_main.c
@@ -13,6 +13,14 @@
 
 #include "cirrus_drv.h"
 
+static int cirrus_create_handle(struct drm_framebuffer *fb,
+   struct drm_file* file_priv,
+   unsigned int* handle)
+{
+   struct cirrus_framebuffer *cirrus_fb = to_cirrus_framebuffer(fb);
+
+   return drm_gem_handle_create(file_priv, cirrus_fb->obj, handle);
+}
 
 static void cirrus_user_framebuffer_destroy(struct drm_framebuffer *fb)
 {
@@ -24,6 +32,7 @@ static void cirrus_user_framebuffer_destroy(struct drm_framebuffer *fb)
 }
 
 static const struct drm_framebuffer_funcs cirrus_fb_funcs = {
+   .create_handle = cirrus_create_handle,
.destroy = cirrus_user_framebuffer_destroy,
 };
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
index b6d52055a11f..274b4206ca96 100644
--- a/drivers/gpu/drm/virtio/virtgpu_display.c
+++ b/drivers/gpu/drm/virtio/virtgpu_display.c
@@ -71,7 +71,19 @@ virtio_gpu_framebuffer_surface_dirty(struct drm_framebuffer *fb,
return virtio_gpu_surface_dirty(virtio_gpu_fb, clips, num_clips);
 }
 
+static int
+virtio_gpu_framebuffer_create_handle(struct drm_framebuffer *fb,
+struct drm_file *file_priv,
+unsigned int *handle)
+{
+   struct virtio_gpu_framebuffer *virtio_gpu_fb =
+   to_virtio_gpu_framebuffer(fb);
+
+   return drm_gem_handle_create(file_priv, virtio_gpu_fb->obj, handle);
+}
+
 static const struct drm_framebuffer_funcs virtio_gpu_fb_funcs = {
+   .create_handle = virtio_gpu_framebuffer_create_handle,
.destroy = virtio_gpu_user_framebuffer_destroy,
.dirty = virtio_gpu_framebuffer_surface_dirty,
 };
-- 
2.15.0.403.gc27cc4dac6-goog
