Re: [PATCH v3 1/2] drm/virtio: Add window server support

2018-02-12 Thread Pekka Paalanen
On Mon, 12 Feb 2018 12:45:40 +0100
Gerd Hoffmann  wrote:

>   Hi,
> 
> > >(a) software rendering: client allocates shared memory buffer, renders
> > >into it, then passes a file handle for that shmem block together
> > >with some meta data (size, format, ...) to the wayland server.
> > > 
> > >(b) gpu rendering: client opens a render node, allocates a buffer,
> > >asks the gpu to render into it, exports the buffer as dma-buf
> > >(DRM_IOCTL_PRIME_HANDLE_TO_FD), passes this to the wayland server
> > >(again including meta data of course).
> > > 
> > > Is that correct?  
> > 
> > Both are correct descriptions of typical behaviors. But it isn't spec'ed
> > anywhere who has to do the buffer allocation.  
> 
> Well, according to Pekka's reply it is spec'ed that way, for the
> existing buffer types.  So for server allocated buffers you need
> (a) a wayland protocol extension and (b) support for the extension
> in the clients.

Correct. Or simply a libEGL that uses such a Wayland extension behind
everyone's back. I believe such things did at least exist, but are
probably not relevant for this discussion.

(If there is a standard library, like libEGL, loaded and used by both a
server and a client, that library can advertise custom private Wayland
protocol extensions and the client side can take advantage of them, all
without needing any code changes in either the server or the client.)

> We also need a solution for the keymap shmem block.  I guess the keymap
> doesn't change all that often, so maybe it is easiest to just copy it
> over (host proxy -> guest proxy) instead of trying to map the host shmem
> into the guest?

Yes, I believe that would be a perfectly valid solution for that
particular case.


Thanks,
pq



Re: [PATCH] headers: untangle kmemleak.h from mm.h

2018-02-12 Thread Randy Dunlap
On 02/12/2018 04:28 AM, Michael Ellerman wrote:
> Randy Dunlap  writes:
> 
>> From: Randy Dunlap 
>>
>> Currently <linux/slab.h> #includes <linux/kmemleak.h> for no obvious
>> reason. It looks like it's only a convenience, so remove kmemleak.h
>> from slab.h and add <linux/kmemleak.h> to any users of kmemleak_*
>> that don't already #include it.
>> Also remove <linux/kmemleak.h> from source files that do not use it.
>>
>> This is tested on i386 allmodconfig and x86_64 allmodconfig. It
>> would be good to run it through the 0day bot for other $ARCHes.
>> I have neither the horsepower nor the storage space for the other
>> $ARCHes.
>>
>> [slab.h is the second most used header file after module.h; kernel.h
>> is right there with slab.h. There could be some minor error in the
>> counting due to some #includes having comments after them and I
>> didn't combine all of those.]
>>
>> This is Lingchi patch #1 (death by a thousand cuts, applied to kernel
>> header files).
>>
>> Signed-off-by: Randy Dunlap 
> 
> I threw it at a random selection of configs and so far the only failures
> I'm seeing are:
> 
>   lib/test_firmware.c:134:2: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
>   lib/test_firmware.c:620:25: error: implicit declaration of function 'vzalloc' [-Werror=implicit-function-declaration]
>   lib/test_firmware.c:620:2: error: implicit declaration of function 'vzalloc' [-Werror=implicit-function-declaration]
>   security/integrity/digsig.c:146:2: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
> 
> Full results trickling in here, not all the failures there are caused by
> this patch, ie. some configs are broken in mainline:
> 
>   http://kisskb.ellerman.id.au/kisskb/head/13396/

That's very useful, thanks.

I'll send a few patches for those.

-- 
~Randy


Re: [PATCH] headers: untangle kmemleak.h from mm.h

2018-02-12 Thread Randy Dunlap
On 02/12/2018 04:28 AM, Michael Ellerman wrote:
> Randy Dunlap  writes:
> 
>> From: Randy Dunlap 
>>
>> Currently <linux/slab.h> #includes <linux/kmemleak.h> for no obvious
>> reason. It looks like it's only a convenience, so remove kmemleak.h
>> from slab.h and add <linux/kmemleak.h> to any users of kmemleak_*
>> that don't already #include it.
>> Also remove <linux/kmemleak.h> from source files that do not use it.
>>
>> This is tested on i386 allmodconfig and x86_64 allmodconfig. It
>> would be good to run it through the 0day bot for other $ARCHes.
>> I have neither the horsepower nor the storage space for the other
>> $ARCHes.
>>
>> [slab.h is the second most used header file after module.h; kernel.h
>> is right there with slab.h. There could be some minor error in the
>> counting due to some #includes having comments after them and I
>> didn't combine all of those.]
>>
>> This is Lingchi patch #1 (death by a thousand cuts, applied to kernel
>> header files).
>>
>> Signed-off-by: Randy Dunlap 
> 
> I threw it at a random selection of configs and so far the only failures
> I'm seeing are:
> 
>   lib/test_firmware.c:134:2: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
>   lib/test_firmware.c:620:25: error: implicit declaration of function 'vzalloc' [-Werror=implicit-function-declaration]
>   lib/test_firmware.c:620:2: error: implicit declaration of function 'vzalloc' [-Werror=implicit-function-declaration]
>   security/integrity/digsig.c:146:2: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
> 

Both of those source files need to #include <linux/vmalloc.h>.
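
i.e. the obvious fix in each of them (untested, but vfree()/vzalloc() are
declared in that header):

/* lib/test_firmware.c and security/integrity/digsig.c: vfree()/vzalloc()
 * are declared in <linux/vmalloc.h>, so include it directly. */
#include <linux/vmalloc.h>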

> Full results trickling in here, not all the failures there are caused by
> this patch, ie. some configs are broken in mainline:
> 
>   http://kisskb.ellerman.id.au/kisskb/head/13396/
> 
> cheers

:)

-- 
~Randy


Re: [PATCH v3 1/2] drm/virtio: Add window server support

2018-02-12 Thread Gerd Hoffmann
> > I was more thinking about a struct containing enough info to allow the
> > proxy on the host side find the buffer, something like:
> > 
> > struct {
> >enum type { stdvga, virtio-gpu, ... }
> >pcislot device;
> >union {
> >   int stdvga_pcibar_offset;
> >   int virtio_gpu_resource_id;
> >}
> > }
> > 
> > So when the guest proxy gets a message with a fd referencing a buffer it
> > would have to figure where the buffer is, rewrite the message into the
> > struct above for the host proxy.  The host proxy would rewrite the
> > message again for the server.
> 
> What I don't understand yet is how we can keep the buffer descriptions
> together with the protocol data that references them.
> 
> With SCM_RIGHTS, the FDs are placed in the ancillary data that "travels"
> together with the protocol data that references them.

Place the buffer description into the wayland extension protocol messages?

i.e. have some wl_virt_proxy protocol extension.  Then, for the stdvga case:

  (1) client sends wl_drm/create_prime_buffer request to the guest proxy
  (2) guest proxy rewrites this into wl_virt_proxy/create_buffer, or
  maybe a create_stdvga_buffer request, carrying all the information
  listed above, and sends it to the host proxy.
  (3) host proxy rewrites it again into a wl_shm_pool/create_buffer and
  forwards it to the server.

cheers,
  Gerd



Re: [PATCH v3 1/2] drm/virtio: Add window server support

2018-02-12 Thread Tomeu Vizoso

On 02/12/2018 03:27 PM, Gerd Hoffmann wrote:
> On Mon, Feb 12, 2018 at 03:00:24PM +0100, Tomeu Vizoso wrote:
> > On 02/12/2018 12:52 PM, Gerd Hoffmann wrote:
> > >    Hi,
> > > 
> > > > can we reach agreement on whether vsock should be involved in this?
> > > 
> > > I think the best approach would be to have guest proxy and host proxy
> > > use vsock for the wayland protocol.  Use a wayland protocol extension to
> > > reference the buffers in stdvga / ivshmem / virtio-gpu.  Only the two
> > > proxies need to understand the extension, the client <=> guest proxy and
> > > host proxy <=> server communication would be standard wayland protocol.
> > 
> > Thanks for the ideas. What I haven't understood yet is how you see the
> > actual passing of buffers via vsock. Are you thinking of using ancillary
> > data to pass FDs, or something else?
> 
> I was more thinking about a struct containing enough info to allow the
> proxy on the host side find the buffer, something like:
> 
>    struct {
>       enum type { stdvga, virtio-gpu, ... }
>       pcislot device;
>       union {
>          int stdvga_pcibar_offset;
>          int virtio_gpu_resource_id;
>       }
>    }
> 
> So when the guest proxy gets a message with a fd referencing a buffer it
> would have to figure where the buffer is, rewrite the message into the
> struct above for the host proxy.  The host proxy would rewrite the
> message again for the server.

What I don't understand yet is how we can keep the buffer descriptions
together with the protocol data that references them.

With SCM_RIGHTS, the FDs are placed in the ancillary data that "travels"
together with the protocol data that references them.


With the present series, the DRM_IOCTL_VIRTGPU_WINSRV_TX ioctl struct has 
a field for the protocol data and an array of FDs.


How are you proposing to pass instances of that struct from above along
with the protocol data that refers to them?
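
(For reference, this is roughly how SCM_RIGHTS ties an FD to the protocol
bytes on a plain Unix socket today; a minimal sketch, with 'sock', 'data',
'len' and 'fd' as placeholders:)

/* Minimal sketch: send protocol bytes plus one fd as SCM_RIGHTS ancillary
 * data over a Unix socket.  'sock', 'data', 'len' and 'fd' are placeholders. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_with_fd(int sock, const void *data, size_t len, int fd)
{
	struct iovec iov = { .iov_base = (void *)data, .iov_len = len };
	char cmsgbuf[CMSG_SPACE(sizeof(int))] = { 0 };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = cmsgbuf,
		.msg_controllen = sizeof(cmsgbuf),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}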


Thanks,

Tomeu


Re: [PATCH v3 1/2] drm/virtio: Add window server support

2018-02-12 Thread Gerd Hoffmann
On Mon, Feb 12, 2018 at 03:00:24PM +0100, Tomeu Vizoso wrote:
> On 02/12/2018 12:52 PM, Gerd Hoffmann wrote:
> >Hi,
> > 
> > > can we reach agreement on whether vsock should be involved in this?
> > 
> > I think the best approach would be to have guest proxy and host proxy
> > use vsock for the wayland protocol.  Use a wayland protocol extension to
> > reference the buffers in stdvga / ivshmem / virtio-gpu.  Only the two
> > proxies need to understand the extension, the client <=> guest proxy and
> > host proxy <=> server communication would be standard wayland protocol.
> 
> Thanks for the ideas. What I haven't understood yet is how you see the
> actual passing of buffers via vsock. Are you thinking of using ancillary
> data to pass FDs, or something else?

I was more thinking about a struct containing enough info to allow the
proxy on the host side find the buffer, something like:

   struct {
      enum type { stdvga, virtio-gpu, ... }
      pcislot device;
      union {
         int stdvga_pcibar_offset;
         int virtio_gpu_resource_id;
      }
   }

So when the guest proxy gets a message with a fd referencing a buffer it
would have to figure where the buffer is, rewrite the message into the
struct above for the host proxy.  The host proxy would rewrite the
message again for the server.
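
Spelled out as actual C -- purely illustrative, the names and field widths
are made up -- that sketch could look like:

#include <stdint.h>

enum virt_proxy_buf_type {
	VIRT_PROXY_BUF_STDVGA,
	VIRT_PROXY_BUF_VIRTIO_GPU,
};

/* Sketch of the buffer description the guest proxy would send in place
 * of an fd; the host proxy uses it to locate the backing storage. */
struct virt_proxy_buf_desc {
	enum virt_proxy_buf_type type;
	uint32_t pci_slot;                       /* which device */
	union {
		uint64_t stdvga_pcibar_offset;   /* offset into the vga PCI bar */
		uint32_t virtio_gpu_resource_id; /* virtio-gpu resource handle */
	};
};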

cheers,
  Gerd



Re: [PATCH v3 1/2] drm/virtio: Add window server support

2018-02-12 Thread Tomeu Vizoso

On 02/12/2018 12:52 PM, Gerd Hoffmann wrote:
>    Hi,
> 
> > can we reach agreement on whether vsock should be involved in this?
> 
> I think the best approach would be to have guest proxy and host proxy
> use vsock for the wayland protocol.  Use a wayland protocol extension to
> reference the buffers in stdvga / ivshmem / virtio-gpu.  Only the two
> proxies need to understand the extension, the client <=> guest proxy and
> host proxy <=> server communication would be standard wayland protocol.

Thanks for the ideas. What I haven't understood yet is how you see the
actual passing of buffers via vsock. Are you thinking of using ancillary
data to pass FDs, or something else?


Thanks,

Tomeu


Re: [PATCH] headers: untangle kmemleak.h from mm.h

2018-02-12 Thread Michael Ellerman
Randy Dunlap  writes:

> From: Randy Dunlap 
>
> Currently <linux/slab.h> #includes <linux/kmemleak.h> for no obvious
> reason. It looks like it's only a convenience, so remove kmemleak.h
> from slab.h and add <linux/kmemleak.h> to any users of kmemleak_*
> that don't already #include it.
> Also remove <linux/kmemleak.h> from source files that do not use it.
>
> This is tested on i386 allmodconfig and x86_64 allmodconfig. It
> would be good to run it through the 0day bot for other $ARCHes.
> I have neither the horsepower nor the storage space for the other
> $ARCHes.
>
> [slab.h is the second most used header file after module.h; kernel.h
> is right there with slab.h. There could be some minor error in the
> counting due to some #includes having comments after them and I
> didn't combine all of those.]
>
> This is Lingchi patch #1 (death by a thousand cuts, applied to kernel
> header files).
>
> Signed-off-by: Randy Dunlap 

I threw it at a random selection of configs and so far the only failures
I'm seeing are:

  lib/test_firmware.c:134:2: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
  lib/test_firmware.c:620:25: error: implicit declaration of function 'vzalloc' [-Werror=implicit-function-declaration]
  lib/test_firmware.c:620:2: error: implicit declaration of function 'vzalloc' [-Werror=implicit-function-declaration]
  security/integrity/digsig.c:146:2: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]

Full results trickling in here, not all the failures there are caused by
this patch, ie. some configs are broken in mainline:

  http://kisskb.ellerman.id.au/kisskb/head/13396/

cheers


Re: [PATCH v3 1/2] drm/virtio: Add window server support

2018-02-12 Thread Gerd Hoffmann
  Hi,

> can we reach agreement on whether vsock should be involved in this?

I think the best approach would be to have guest proxy and host proxy
use vsock for the wayland protocol.  Use a wayland protocol extension to
reference the buffers in stdvga / ivshmem / virtio-gpu.  Only the two
proxies need to understand the extension, the client <=> guest proxy and
host proxy <=> server communication would be standard wayland protocol.
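
(For illustration, the guest proxy's side of that vsock channel could be as
simple as the sketch below; the port number is an arbitrary placeholder:)

#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

/* Sketch: guest proxy connects to a host proxy listening on a vsock port.
 * VMADDR_CID_HOST addresses the host; the port value is a placeholder. */
static int connect_to_host_proxy(unsigned int port)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_HOST,
		.svm_port = port,
	};
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}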

cheers,
  Gerd



Re: [PATCH v3 1/2] drm/virtio: Add window server support

2018-02-12 Thread Gerd Hoffmann
  Hi,

> >(a) software rendering: client allocates shared memory buffer, renders
> >into it, then passes a file handle for that shmem block together
> >with some meta data (size, format, ...) to the wayland server.
> > 
> >(b) gpu rendering: client opens a render node, allocates a buffer,
> >asks the gpu to render into it, exports the buffer as dma-buf
> >(DRM_IOCTL_PRIME_HANDLE_TO_FD), passes this to the wayland server
> >(again including meta data of course).
> > 
> > Is that correct?
> 
> Both are correct descriptions of typical behaviors. But it isn't spec'ed
> anywhere who has to do the buffer allocation.

Well, according to Pekka's reply it is spec'ed that way, for the
existing buffer types.  So for server allocated buffers you need
(a) a wayland protocol extension and (b) support for the extension
in the clients.
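
(As an aside, the dma-buf export step in (b) above is just the standard
PRIME export; a minimal sketch, assuming an already-open drm fd and a GEM
handle:)

#include <stdint.h>
#include <xf86drm.h>

/* Sketch: export a GEM handle as a dma-buf fd that can be sent to the
 * compositor; 'drm_fd' and 'gem_handle' are assumed to exist already. */
static int export_buffer(int drm_fd, uint32_t gem_handle)
{
	int prime_fd;

	if (drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, &prime_fd) < 0)
		return -1;
	return prime_fd;
}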

> That's to say that if we cannot come up with a zero-copy solution for
> unmodified clients, we should at least support zero-copy for cooperative
> clients.

"cooperative clients" == "client which has support for the wayland
protocol extension", correct?

> > > Creation of shareable buffer by guest
> > > -------------------------------------
> > > 
> > > 1. Client requests virtio driver to create a buffer suitable for sharing
> > > with host (DRM_VIRTGPU_RESOURCE_CREATE)
> > 
> > client or guest proxy?
> 
> As per the above, the GUI toolkit could have been modified so the client
> directly creates a shareable buffer, and renders directly to it without any
> extra copies.
> 
> If clients cannot be modified, then it's the guest proxy what has to create
> the shareable buffer and keep it in sync with the client's non-shareable
> buffer at the right times, by intercepting wl_surface.commit messages and
> copying buffer contents.

Ok.

> > > 4. QEMU maps that buffer to the guest's address space
> > > (KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio driver
> > 
> > That part is problematic.  The host can't simply allocate something in
> > the physical address space, because most physical address space
> > management is done by the guest.  All pci bars are mapped by the guest
> > firmware for example (or by the guest OS in case of hotplug).
> 
> How can KVM_SET_USER_MEMORY_REGION ever be safely used then? I would have
> expected that callers of that ioctl have enough knowledge to be able to
> choose a physical address that won't conflict with the guest's kernel.

Depends on the kind of region.  Guest RAM is allocated and mapped by
qemu; the guest firmware can query qemu about RAM mappings using a special
interface, then create an e820 memory map for the guest os.  PCI device
bars are mapped according to the pci config space registers, which in
turn are initialized by the guest firmware, so it is basically in the
guest's hands where they show up.

> I see that the ivshmem device in QEMU registers the memory region in BAR 2
> of a PCI device instead. Would that be better in your opinion?

Yes.

> > > 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each
> > > resource, sends data + FDs to the compositor with SCM_RIGHTS
> > 
> > BTW: Is there a 1:1 relationship between buffers and shmem blocks?  Or
> > does the wayland protocol allow for offsets in buffer meta data, so you
> > can place multiple buffers in a single shmem block?
> 
> The latter:
> https://wayland.freedesktop.org/docs/html/apa.html#protocol-spec-wl_shm_pool

Ah, good, that makes it a lot easier.

So, yes, using ivshmem would be one option.  Tricky part here is the
buffer management though.  It's just a raw piece of memory.  The guest
proxy could mmap the pci bar and manage it.  But then it is again either
unmodified guest + copying the data, or modified client (which requests
buffers from guest proxy) for zero-copy.
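
(For illustration, the guest-proxy side of the ivshmem variant boils down
to mmapping the BAR via pci sysfs; a rough sketch, with the device path
and size as placeholders:)

#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/mman.h>

/* Sketch: map the shared-memory BAR of an ivshmem-like device into the
 * guest proxy.  The sysfs path and the size are placeholders. */
static void *map_shmem_bar(const char *resource_path, size_t size)
{
	int fd = open(resource_path, O_RDWR);
	void *mem;

	if (fd < 0)
		return NULL;
	mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);
	return mem == MAP_FAILED ? NULL : mem;
}

/* e.g. map_shmem_bar("/sys/bus/pci/devices/0000:00:05.0/resource2", bar_size) */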

Another idea would be extending stdvga.  Basically qemu would have to
use shmem as backing storage for vga memory instead of anonymous memory,
so it would be very similar to ivshmem on the host side.  But on the
guest side we have a drm driver for it (bochs-drm).  So clients can
allocate dumb drm buffers for software rendering, and the buffer would
already be backed by a host shmem segment.  Given that wayland already
supports drm buffers for 3d rendering, that could work without extending
the wayland protocol.  The client proxy would have to translate the drm
buffer into a pci bar offset and pass it to the host side.  The host
proxy could register the pci bar as wl_shm_pool, then just pass through
the offset to reference the individual buffers.
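
(The dumb-buffer allocation that relies on is the standard KMS ioctl; a
rough sketch, assuming an open drm fd for the bochs device:)

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>

/* Sketch: allocate a dumb buffer for software rendering.  With the
 * shmem-backed stdvga idea, its storage would already live in a host
 * shmem segment. */
static int create_dumb_buffer(int drm_fd, uint32_t width, uint32_t height,
			      uint32_t *handle, uint32_t *pitch, uint64_t *size)
{
	struct drm_mode_create_dumb creq;

	memset(&creq, 0, sizeof(creq));
	creq.width = width;
	creq.height = height;
	creq.bpp = 32;
	if (drmIoctl(drm_fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq) < 0)
		return -1;
	*handle = creq.handle;
	*pitch = creq.pitch;
	*size = creq.size;
	return 0;
}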

Drawback of both approaches would be that software rendering and gpu
rendering would use quite different code paths.

We also need a solution for the keymap shmem block.  I guess the keymap
doesn't change all that often, so maybe it is easiest to just copy it
over (host proxy -> guest proxy) instead of trying to map the host shmem
into the guest?

cheers,
  Gerd


Re: [PULL v2 1/1] virtio/s390: implement PM operations for virtio_ccw

2018-02-12 Thread Christian Borntraeger
Michael, Conny,
it seems that this patch did not make it into 4.16-rc1.



On 12/18/2017 05:21 PM, Cornelia Huck wrote:
> From: Christian Borntraeger 
> 
> Suspend/Resume to/from disk currently fails. Let us wire
> up the necessary callbacks. This is mostly just forwarding
> the requests to the virtio drivers. The only thing that
> has to be done in virtio_ccw itself is to re-set the
> virtio revision.
> 
> Suggested-by: Thomas Huth 
> Signed-off-by: Christian Borntraeger 
> Message-Id: <20171207141102.70190-2-borntrae...@de.ibm.com>
> Reviewed-by: David Hildenbrand 
> [CH: merged <20171218083706.223836-1-borntrae...@de.ibm.com> to fix
> !CONFIG_PM configs]
> Signed-off-by: Cornelia Huck 
> ---
>  drivers/s390/virtio/virtio_ccw.c | 29 +
>  1 file changed, 29 insertions(+)
> 
> diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
> index b18fe2014cf2..985184ebda45 100644
> --- a/drivers/s390/virtio/virtio_ccw.c
> +++ b/drivers/s390/virtio/virtio_ccw.c
> @@ -1300,6 +1300,9 @@ static int virtio_ccw_cio_notify(struct ccw_device *cdev, int event)
>   vcdev->device_lost = true;
>   rc = NOTIFY_DONE;
>   break;
> + case CIO_OPER:
> + rc = NOTIFY_OK;
> + break;
>   default:
>   rc = NOTIFY_DONE;
>   break;
> @@ -1312,6 +1315,27 @@ static struct ccw_device_id virtio_ids[] = {
>   {},
>  };
> 
> +#ifdef CONFIG_PM_SLEEP
> +static int virtio_ccw_freeze(struct ccw_device *cdev)
> +{
> + struct virtio_ccw_device *vcdev = dev_get_drvdata(&cdev->dev);
> +
> + return virtio_device_freeze(&vcdev->vdev);
> +}
> +
> +static int virtio_ccw_restore(struct ccw_device *cdev)
> +{
> + struct virtio_ccw_device *vcdev = dev_get_drvdata(&cdev->dev);
> + int ret;
> +
> + ret = virtio_ccw_set_transport_rev(vcdev);
> + if (ret)
> + return ret;
> +
> + return virtio_device_restore(&vcdev->vdev);
> +}
> +#endif
> +
>  static struct ccw_driver virtio_ccw_driver = {
>   .driver = {
>   .owner = THIS_MODULE,
> @@ -1324,6 +1348,11 @@ static struct ccw_driver virtio_ccw_driver = {
>   .set_online = virtio_ccw_online,
>   .notify = virtio_ccw_cio_notify,
>   .int_class = IRQIO_VIR,
> +#ifdef CONFIG_PM_SLEEP
> + .freeze = virtio_ccw_freeze,
> + .thaw = virtio_ccw_restore,
> + .restore = virtio_ccw_restore,
> +#endif
>  };
> 
>  static int __init pure_hex(char **cp, unsigned int *val, int min_digit,
> 
