Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2

2019-04-22 Thread Gurchetan Singh
On Wed, Apr 17, 2019 at 2:57 AM Gerd Hoffmann wrote:
>
> On Fri, Apr 12, 2019 at 04:34:20PM -0700, Chia-I Wu wrote:
> > Hi,
> >
> > I am still new to virgl, and missed the last round of discussion about
> > resource_create_v2.
> >
> > From the discussion below, semantically resource_create_v2 creates a host
> > resource object _without_ any storage; memory_create creates a host memory
> > object which provides the storage.  Is that correct?
>
> Right now all resource_create_* variants create a resource object with
> host storage.  memory_create creates guest storage, and
> resource_attach_memory binds things together.  Then you have to transfer
> the data.
>
> Hmm, maybe we need a flag indicating that host storage is not needed,
> for resources where we want to establish some kind of shared mapping later
> on.
>
> > Do we expect these new commands to be supported by OpenGL, which does not
> > separate resources and memories?
>
> Well, for opengl you need a 1:1 relationship between memory region and
> resource.
>
> > > Yes, even though it is not clear yet how we are going to handle
> > > host-allocated buffers in the vhost-user case ...
> >
> > This might be another dumb question, but is this only an issue for the
> > vhost-user(-gpu) case?  What mechanisms are used to map host dma-buf into
> > the guest address space?
>
> qemu can change the address space, that includes mmap()ing stuff there.
> An external vhost-user process can't do this, it can only read the
> address space layout, and read/write from/to guest memory.
>
> > But one needs to create the resource first to know which memory types can
> > be attached to it.  I think the metadata needs to be returned with
> > resource_create_v2.
>
> There is a resource_info reply for that.

The memory type should probably be in resource_create_v2, not in the
reply.  The metadata will be different based on the memory heap it's
allocated from.

Also, not all heaps need to be exposed to the guest kernel.  For
example, device local memory in Vulkan could be un-mappable.  In fact,
for resources that are not host visible we might be better off
sidestepping the kernel altogether and tracking allocation in guest
userspace.

Here is an example of memory types the guest kernel may be interested
in (based on i965):

Type 0 --> DEVICE_LOCAL_BIT | HOST_VISIBLE_BIT | HOST_COHERENT_BIT |
           HOST_CACHED_BIT | RENDERER_ALLOCATED (Vulkan)
Type 1 --> HOST_VISIBLE_BIT | HOST_COHERENT_BIT | EXTERNAL_ALLOCATED
           (gbm write-combine)
Type 2 --> HOST_VISIBLE_BIT | HOST_COHERENT_BIT | GUEST_ALLOCATED
           (guest-allocated memory, which I assume is also write-combine)
Type 3 --> HOST_VISIBLE_BIT | HOST_CACHED_BIT | EXTERNAL_ALLOCATED
           (gbm cached memory)



>
> > That should be good enough.  But by returning alignments, we can minimize
> > the gaps when attaching multiple resources, especially when the resources
> > are only used by GPU.
>
> We can add alignments to the resource_info reply.
>
> cheers,
>   Gerd
>


Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2

2019-04-19 Thread Chia-I Wu
On Wed, Apr 17, 2019 at 2:57 AM Gerd Hoffmann wrote:
>
> On Fri, Apr 12, 2019 at 04:34:20PM -0700, Chia-I Wu wrote:
> > Hi,
> >
> > I am still new to virgl, and missed the last round of discussion about
> > resource_create_v2.
> >
> > From the discussion below, semantically resource_create_v2 creates a host
> > resource object _without_ any storage; memory_create creates a host memory
> > object which provides the storage.  Is that correct?
>
> Right now all resource_create_* variants create a resource object with
> host storage.  memory_create creates guest storage, and
> resource_attach_memory binds things together.  Then you have to transfer
> the data.
In Gurchetan's Vulkan example, the host storage allocation happens in
(some variant of) memory_create, not in resource_create_v2.  Maybe
that's what got me confused.

>
> Hmm, maybe we need a flag indicating that host storage is not needed,
> for resources where we want to establish some kind of shared mapping later
> on.
This makes sense, to support both Vulkan and non-Vulkan models.

This differs from the current patch, but I think a full-fledged resource
should logically have three components:

 - a RESOURCE component that has no storage
 - a MEMORY component that provides the storage
 - a BACKING component that is for transfers

resource_attach_backing sets the BACKING component.  BACKING always
uses guest pages and supports only transfers into or out of MEMORY.

resource_attach_memory sets the MEMORY component.  MEMORY can use host
or guest pages, and must always support GPU operations.  When a MEMORY
is mappable in the guest, we can skip BACKING and achieve zero-copy.

resource_create_* can then get a flag to indicate whether only
RESOURCE is created or RESOURCE+MEMORY is created.
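
A minimal sketch of how that flag could look (the name and the call
shapes are made up for illustration, not taken from this patch):

/* Hypothetical flag for virtio_gpu_cmd_resource_create_v2.flags. */
#define VIRTIO_GPU_RESOURCE_FLAG_NO_MEMORY (1 << 0)

/* RESOURCE only; MEMORY is attached later via resource_attach_memory,
 * e.g. for host memory we want to map into the guest. */
create_v2(res_id, ..., VIRTIO_GPU_RESOURCE_FLAG_NO_MEMORY);

/* RESOURCE + MEMORY in one step (today's resource_create_* behavior). */
create_v2(res_id, ..., 0);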


>
> > Do we expect these new commands to be supported by OpenGL, which does not
> > separate resources and memories?
>
> Well, for opengl you need a 1:1 relationship between memory region and
> resource.
>
> > > Yes, even though it is not clear yet how we are going to handle
> > > host-allocated buffers in the vhost-user case ...
> >
> > This might be another dumb question, but is this only an issue for the
> > vhost-user(-gpu) case?  What mechanisms are used to map host dma-buf into
> > the guest address space?
>
> qemu can change the address space, that includes mmap()ing stuff there.
> An external vhost-user process can't do this, it can only read the
> address space layout, and read/write from/to guest memory.
I thought the vhost-user process could work with the host-allocated dmabuf
directly.  That is,

  qemu: injects the dmabuf pages into the guest address space
  vhost-user: works with the dmabuf directly
  guest: can read/write those pages

>
> > But one needs to create the resource first to know which memory types can
> > be attached to it.  I think the metadata needs to be returned with
> > resource_create_v2.
>
> There is a resource_info reply for that.
>
> > That should be good enough.  But by returning alignments, we can minimize
> > the gaps when attaching multiple resources, especially when the resources
> > are only used by GPU.
>
> We can add alignments to the resource_info reply.
>
> cheers,
>   Gerd
>


Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2

2019-04-19 Thread Chia-I Wu
Hi,

I am still new to virgl, and missed the last round of discussion about
resource_create_v2.

From the discussion below, semantically resource_create_v2 creates a host
resource object _without_ any storage; memory_create creates a host memory
object which provides the storage.  Is that correct?

And this version of memory_create is probably the most special one among
its potential variants, because it is the only(?) one that imports
pre-allocated guest pages.

Do we expect these new commands to be supported by OpenGL, which does not
separate resources and memories?

On Thu, Apr 11, 2019 at 10:49 PM Gerd Hoffmann wrote:

> On Thu, Apr 11, 2019 at 06:36:15PM -0700, Gurchetan Singh wrote:
> > On Wed, Apr 10, 2019 at 10:03 PM Gerd Hoffmann wrote:
> > >
> > > > > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > > > > +struct virtio_gpu_cmd_resource_create_v2 {
> > > > > +   struct virtio_gpu_ctrl_hdr hdr;
> > > > > +   __le32 resource_id;
> > > > > +   __le32 format;
> > > > > +   __le32 width;
> > > > > +   __le32 height;
> > > > > +   /* 3d only */
> > > > > +   __le32 target;
> > > > > +   __le32 bind;
> > > > > +   __le32 depth;
> > > > > +   __le32 array_size;
> > > > > +   __le32 last_level;
> > > > > +   __le32 nr_samples;
> > > > > +   __le32 flags;
> > > > > +};
> > > >
> > > >
> > > > I assume this is always backed by some host side allocation, without
> any
> > > > guest side pages associated with it?
> > >
> > > No.  It is not backed at all yet.  Workflow would be like this:
> > >
> > >   (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
> > >   (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
> > >   (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)
> >
> > Thanks for the clarification.
> >
> > >
> > > You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
> > > then go attach multiple resources to it.
> > >
> > > > If so, do we want the option for the guest to allocate?
> > >
> > > Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
> > > (initially guest allocated only, i.e. what virtio-gpu supports today,
> > > the plan is to add other allocation types later on).
> >
> > You want to cover Vulkan, host-allocated dma-bufs, and guest-allocated
> > dma-bufs with this, correct?  Let me know if it's a non-goal :-)
>
> Yes, even though it is not clear yet how we are going to handle
> host-allocated buffers in the vhost-user case ...

This might be another dumb question, but is this only an issue for the
vhost-user(-gpu) case?  What mechanisms are used to map host dma-buf into
the guest address space?



>
> > If so, we might want to distinguish between memory types (kind of like
> > memoryTypeIndex in Vulkan). [Assuming memory_id is like resource_id]
>
> For the host-allocated buffers we surely want that, yes.
> For guest-allocated memory regions it isn't useful I think ...
>
Guest-allocated memory regions can be just another memory type.

But one needs to create the resource first to know which memory types can
be attached to it.  I think the metadata needs to be returned with
resource_create_v2.

>
> > 1) Vulkan seems the most straightforward
> >
> > virtio_gpu_cmd_memory_create --> create kernel data structure,
> > vkAllocateMemory on the host or import guest memory into Vulkan,
> > depending on the memory type
> > virtio_gpu_cmd_resource_create_v2 -->  vkCreateImage +
> > vkGetImageMemoryRequirements on host
> > virtio_gpu_cmd_resource_attach_memory -->  vkBindImageMemory on host
>
> Yes.
>
> Note 1: virtio_gpu_cmd_memory_create + virtio_gpu_cmd_resource_create_v2
> ordering doesn't matter, so you can send virtio_gpu_cmd_resource_create_v2
> first to figure out stride and size, then adjust the memory size accordingly.
>
> Note 2: The old virtio_gpu_cmd_resource_create variants can be used
> too if you don't need the _v2 features.
>
> Note 3: If I understand things correctly it would be valid to create a
> memory pool (allocate one big chunk of memory) with vkAllocateMemory,
> then bind multiple images at different offsets to it.
>
> > 2) With a guest allocated dma-buf using some new allocation library,
> >
> > virtio_gpu_cmd_resource_create_v2 --> host returns metadata describing
> > optimal allocation
> > virtio_gpu_cmd_memory_create --> allocate guest memory pages since
> > it's guest memory type
> > virtio_gpu_cmd_resource_attach_memory --> associate guest pages with
> > resource in kernel, send iovecs to host for bookkeeping
>
> virtio_gpu_cmd_memory_create sends the iovecs.  Otherwise correct.
>
> > 3) With gbm it's a little trickier,
> >
> > virtio_gpu_cmd_resource_create_v2 --> gbm_bo_create_with_modifiers,
> > get metadata in return
>
> Only get metadata in return.
>
> > virtio_gpu_cmd_memory_create --> create kernel data structure, but
> > don't allocate pages, nothing on the host
>
> Memory allocation happens here.  Probably makes sense to have a
> virtio_gpu_cmd_memory_create_host command here, because the parameters
> we need are quite different from the guest-allocated case.

Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2

2019-04-17 Thread Gerd Hoffmann
On Fri, Apr 12, 2019 at 04:34:20PM -0700, Chia-I Wu wrote:
> Hi,
> 
> I am still new to virgl, and missed the last round of discussion about
> resource_create_v2.
> 
> From the discussion below, semantically resource_create_v2 creates a host
> resource object _without_ any storage; memory_create creates a host memory
> object which provides the storage.  Is that correct?

Right now all resource_create_* variants create a resource object with
host storage.  memory_create creates guest storage, and
resource_attach_memory binds things together.  Then you have to transfer
the data.

Hmm, maybe we need a flag indicating that host storage is not needed,
for resources where we want to establish some kind of shared mapping later
on.

> Do we expect these new commands to be supported by OpenGL, which does not
> separate resources and memories?

Well, for opengl you need a 1:1 relationship between memory region and
resource.

> > Yes, even though it is not clear yet how we are going to handle
> > host-allocated buffers in the vhost-user case ...
> 
> This might be another dumb question, but is this only an issue for the
> vhost-user(-gpu) case?  What mechanisms are used to map host dma-buf into
> the guest address space?

qemu can change the address space, that includes mmap()ing stuff there.
An external vhost-user process can't do this, it can only read the
address space layout, and read/write from/to guest memory.

> But one needs to create the resource first to know which memory types can
> be attached to it.  I think the metadata needs to be returned with
> resource_create_v2.

There is a resource_info reply for that.

> That should be good enough.  But by returning alignments, we can minimize
> the gaps when attaching multiple resources, especially when the resources
> are only used by GPU.

We can add alignments to the resource_info reply.
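
For reference, the reply with alignments added might look like this
(the alignment field is only the proposal from this thread, not part of
the posted patch):

/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO, extended */
struct virtio_gpu_resp_resource_info {
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 stride[4];
        __le32 size[4];
        __le32 alignment[4];   /* per-plane placement alignment, bytes */
};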

cheers,
  Gerd



Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2

2019-04-11 Thread Gerd Hoffmann
On Thu, Apr 11, 2019 at 06:36:15PM -0700, Gurchetan Singh wrote:
> On Wed, Apr 10, 2019 at 10:03 PM Gerd Hoffmann wrote:
> >
> > > > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > > > +struct virtio_gpu_cmd_resource_create_v2 {
> > > > +   struct virtio_gpu_ctrl_hdr hdr;
> > > > +   __le32 resource_id;
> > > > +   __le32 format;
> > > > +   __le32 width;
> > > > +   __le32 height;
> > > > +   /* 3d only */
> > > > +   __le32 target;
> > > > +   __le32 bind;
> > > > +   __le32 depth;
> > > > +   __le32 array_size;
> > > > +   __le32 last_level;
> > > > +   __le32 nr_samples;
> > > > +   __le32 flags;
> > > > +};
> > >
> > >
> > > I assume this is always backed by some host side allocation, without any
> > > guest side pages associated with it?
> >
> > No.  It is not backed at all yet.  Workflow would be like this:
> >
> >   (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
> >   (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
> >   (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)
> 
> Thanks for the clarification.
> 
> >
> > You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
> > then go attach multiple resources to it.
> >
> > > If so, do we want the option for the guest to allocate?
> >
> > Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
> > (initially guest allocated only, i.e. what virtio-gpu supports today,
> > the plan is to add other allocation types later on).
> 
> You want to cover Vulkan, host-allocated dma-bufs, and guest-allocated
> dma-bufs with this, correct?  Let me know if it's a non-goal :-)

Yes, even though it is not clear yet how we are going to handle
host-allocated buffers in the vhost-user case ...
 
> If so, we might want to distinguish between memory types (kind of like
> memoryTypeIndex in Vulkan). [Assuming memory_id is like resource_id]

For the host-allocated buffers we surely want that, yes.
For guest-allocated memory regions it isn't useful I think ...

> 1) Vulkan seems the most straightforward
> 
> virtio_gpu_cmd_memory_create --> create kernel data structure,
> vkAllocateMemory on the host or import guest memory into Vulkan,
> depending on the memory type
> virtio_gpu_cmd_resource_create_v2 -->  vkCreateImage +
> vkGetImageMemoryRequirements on host
> virtio_gpu_cmd_resource_attach_memory -->  vkBindImageMemory on host

Yes.

Note 1: virtio_gpu_cmd_memory_create + virtio_gpu_cmd_resource_create_v2
ordering doesn't matter, so you can send virtio_gpu_cmd_resource_create_v2
first to figure out stride and size, then adjust the memory size accordingly.

Note 2: The old virtio_gpu_cmd_resource_create variants can be used
too if you don't need the _v2 features.

Note 3: If I understand things correctly it would be valid to create a
memory pool (allocate one big chunk of memory) with vkAllocateMemory,
then bind multiple images at different offsets to it.
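
To make the mapping concrete, a rough host-side sketch in Vulkan terms
(illustrative only: a 2D RGBA image, no error handling, and a
memoryTypeIndex assumed to have been picked from reqs.memoryTypeBits):

#include <vulkan/vulkan.h>

static void create_v2_vulkan_example(VkDevice device,
                                     uint32_t width, uint32_t height,
                                     uint32_t memory_type)
{
        /* (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 -> vkCreateImage */
        VkImageCreateInfo info = {
                .sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
                .imageType     = VK_IMAGE_TYPE_2D,
                .format        = VK_FORMAT_B8G8R8A8_UNORM, /* cmd->format */
                .extent        = { width, height, 1 },
                .mipLevels     = 1,
                .arrayLayers   = 1,
                .samples       = VK_SAMPLE_COUNT_1_BIT,
                .tiling        = VK_IMAGE_TILING_OPTIMAL,
                .usage         = VK_IMAGE_USAGE_SAMPLED_BIT,
                .sharingMode   = VK_SHARING_MODE_EXCLUSIVE,
                .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
        };
        VkImage image;
        vkCreateImage(device, &info, NULL, &image);

        /* size/alignment for the resource_info reply; stride would need
         * vkGetImageSubresourceLayout on linear images */
        VkMemoryRequirements reqs;
        vkGetImageMemoryRequirements(device, image, &reqs);

        /* (2) VIRTIO_GPU_CMD_MEMORY_CREATE -> vkAllocateMemory */
        VkMemoryAllocateInfo alloc = {
                .sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
                .allocationSize  = reqs.size,
                .memoryTypeIndex = memory_type,
        };
        VkDeviceMemory memory;
        vkAllocateMemory(device, &alloc, NULL, &memory);

        /* (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH -> vkBindImageMemory */
        vkBindImageMemory(device, image, memory, 0 /* offset */);
}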

> 2) With a guest allocated dma-buf using some new allocation library,
> 
> virtio_gpu_cmd_resource_create_v2 --> host returns metadata describing
> optimal allocation
> virtio_gpu_cmd_memory_create --> allocate guest memory pages since
> it's guest memory type
> virtio_gpu_cmd_resource_attach_memory --> associate guest pages with
> resource in kernel, send iovecs to host for bookkeeping

virtio_gpu_cmd_memory_create sends the iovecs.  Otherwise correct.

> 3) With gbm it's a little trickier,
> 
> virtio_gpu_cmd_resource_create_v2 --> gbm_bo_create_with_modifiers,
> get metadata in return

Only get metadata in return.

> virtio_gpu_cmd_memory_create --> create kernel data structure, but
> don't allocate pages, nothing on the host

Memory allocation happens here.  Probably makes sense to have a
virtio_gpu_cmd_memory_create_host command here, because the parameters
we need are quite different from the guest-allocated case.

Maybe we even need a virtio_gpu_cmd_memory_create_host_for_resource
variant, given that gbm doesn't have raw memory buffers without any
format attached to it.
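
A possible strawman shape for those commands (every field below is a
guess for discussion; patch 2 defines neither):

struct virtio_gpu_cmd_memory_create_host {
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 memory_id;
        __le32 memory_type;   /* index into the host-exposed heap list */
        __le64 size;
};

/* _for_resource variant: size and layout come from the resource. */
struct virtio_gpu_cmd_memory_create_host_for_resource {
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 memory_id;
        __le32 memory_type;
        __le32 resource_id;
        __le32 padding;
};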

> > > > +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> > > > +struct virtio_gpu_resp_resource_info {
> > > > +   struct virtio_gpu_ctrl_hdr hdr;
> > > > +   __le32 stride[4];
> > > > +   __le32 size[4];
> > > > +};
> > >
> > > offsets[4] needed too
> >
> > That is in VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH ...
> 
> I assume the offsets aren't returned by
> VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH.

Yes, they are sent by the guest.

> How does the guest know which offsets in memory will be compatible
> for sharing with display, camera, etc.?

Is it good enough to align offsets to page boundaries?

> Also, do you want to cover the case where the resource is backed by
> three separate memory regions (VK_IMAGE_CREATE_DISJOINT_BIT)?

Good point.  I guess we should make memory_id in
virtio_gpu_cmd_resource_attach_memory an array then,
so you can specify a different memory region for each plane.
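
That could look something like this (hypothetical sketch; patch 2's
actual struct isn't quoted in this thread):

struct virtio_gpu_cmd_resource_attach_memory {
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 resource_id;
        __le32 padding;
        __le32 memory_id[4];   /* one memory region per plane */
        __le64 offset[4];      /* per-plane offset into that region */
};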

cheers,
  Gerd


Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2

2019-04-10 Thread Gerd Hoffmann
> > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > +struct virtio_gpu_cmd_resource_create_v2 {
> > +   struct virtio_gpu_ctrl_hdr hdr;
> > +   __le32 resource_id;
> > +   __le32 format;
> > +   __le32 width;
> > +   __le32 height;
> > +   /* 3d only */
> > +   __le32 target;
> > +   __le32 bind;
> > +   __le32 depth;
> > +   __le32 array_size;
> > +   __le32 last_level;
> > +   __le32 nr_samples;
> > +   __le32 flags;
> > +};
> 
> 
> I assume this is always backed by some host side allocation, without any
> guest side pages associated with it?

No.  It is not backed at all yet.  Workflow would be like this:

  (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
  (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
  (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)

You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
then go attach multiple resources to it.

> If so, do we want the option for the guest to allocate?

Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
(initially guest allocated only, i.e. what virtio-gpu supports today,
the plan is to add other allocation types later on).
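
Spelled out as a guest-side sequence (the wrapper functions below are
hypothetical; only the command names come from this series):

/* one resource, one memory region */
virtio_gpu_cmd_resource_create_v2(vgdev, res_id, format, width, height, ...);
virtio_gpu_cmd_memory_create(vgdev, mem_id, ents, nents);     /* patch 2 */
virtio_gpu_cmd_resource_memory_attach(vgdev, res_id, mem_id, 0);

/* pool: one memory region shared by several resources */
virtio_gpu_cmd_memory_create(vgdev, pool_id, ents, nents);
virtio_gpu_cmd_resource_memory_attach(vgdev, res_a, pool_id, 0);
virtio_gpu_cmd_resource_memory_attach(vgdev, res_b, pool_id, off_b);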

> > +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> > +struct virtio_gpu_resp_resource_info {
> > +   struct virtio_gpu_ctrl_hdr hdr;
> > +   __le32 stride[4];
> > +   __le32 size[4];
> > +};
> 
> offsets[4] needed too

That is in VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH ...

cheers,
  Gerd
