[virtio-dev] Re: [PATCH v1 08/12] drm/virtio: implement context init: stop using drv->context when creating fence

2021-09-14 Thread Gerd Hoffmann
On Wed, Sep 08, 2021 at 06:37:13PM -0700, Gurchetan Singh wrote:
> The plumbing is all here to do this.  Since we always use the
> default fence context when allocating a fence, this makes no
> functional difference.
> 
> We can't process just the largest fence id anymore, since it's
> associated with different timelines.  It's fine for fence_id
> 260 to signal before 259.  As such, process each fence_id
> individually.

I guess you need to also update virtio_gpu_fence_event_process()
then?  It currently has the strict ordering logic baked in ...
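
A minimal sketch of the difference, assuming a simplified pending-fence
list rather than the driver's actual structures (fence_entry and both
helpers below are illustrative, not virtio-gpu code):

/* Illustrative only -- not the driver's real bookkeeping. */
struct fence_entry {
	struct list_head node;
	u64 fence_id;
	struct dma_fence *fence;
};

/* Old model: one timeline, so every pending fence with an id at or
 * below the largest completed id can be signaled in one pass. */
static void signal_up_to(struct list_head *pending, u64 largest_id)
{
	struct fence_entry *e, *tmp;

	list_for_each_entry_safe(e, tmp, pending, node) {
		if (e->fence_id > largest_id)
			continue;
		dma_fence_signal(e->fence);
		list_del(&e->node);
	}
}

/* New model: fence ids sit on different per-context timelines, so id
 * 260 may legitimately complete before 259; match and signal exactly
 * the id the host reported instead of sweeping everything below it. */
static void signal_exact(struct list_head *pending, u64 fence_id)
{
	struct fence_entry *e, *tmp;

	list_for_each_entry_safe(e, tmp, pending, node) {
		if (e->fence_id != fence_id)
			continue;
		dma_fence_signal(e->fence);
		list_del(&e->node);
		break;
	}
}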

take care,
  Gerd





Re: [virtio-dev] [PATCH v1 09/12] drm/virtio: implement context init: allocate an array of fence contexts

2021-09-14 Thread Gurchetan Singh
On Tue, Sep 14, 2021 at 10:53 AM Chia-I Wu  wrote:

> On Mon, Sep 13, 2021 at 6:57 PM Gurchetan Singh
>  wrote:
> >
> >
> >
> >
> > On Mon, Sep 13, 2021 at 11:52 AM Chia-I Wu  wrote:
> >>
> >> .
> >>
> >> On Mon, Sep 13, 2021 at 10:48 AM Gurchetan Singh
> >>  wrote:
> >> >
> >> >
> >> >
> >> > On Fri, Sep 10, 2021 at 12:33 PM Chia-I Wu  wrote:
> >> >>
> >> >> On Wed, Sep 8, 2021 at 6:37 PM Gurchetan Singh
> >> >>  wrote:
> >> >> >
> >> >> > We don't want fences from different 3D contexts (virgl, gfxstream,
> >> >> > venus) to be on the same timeline.  With explicit context creation,
> >> >> > we can specify the number of rings each context wants.
> >> >> >
> >> >> > Execbuffer can specify which ring to use.
> >> >> >
> >> >> > Signed-off-by: Gurchetan Singh 
> >> >> > Acked-by: Lingfeng Yang 
> >> >> > ---
> >> >> >  drivers/gpu/drm/virtio/virtgpu_drv.h   |  3 +++
> >> >> >  drivers/gpu/drm/virtio/virtgpu_ioctl.c | 34 ++++++++++++++++++++++++++++++++--
> >> >> >  2 files changed, 35 insertions(+), 2 deletions(-)
> >> >> >
> >> >> > diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> >> >> > index a5142d60c2fa..cca9ab505deb 100644
> >> >> > --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> >> >> > +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> >> >> > @@ -56,6 +56,7 @@
> >> >> >  #define STATE_ERR 2
> >> >> >
> >> >> >  #define MAX_CAPSET_ID 63
> >> >> > +#define MAX_RINGS 64
> >> >> >
> >> >> >  struct virtio_gpu_object_params {
> >> >> > unsigned long size;
> >> >> > @@ -263,6 +264,8 @@ struct virtio_gpu_fpriv {
> >> >> > uint32_t ctx_id;
> >> >> > uint32_t context_init;
> >> >> > bool context_created;
> >> >> > +   uint32_t num_rings;
> >> >> > +   uint64_t base_fence_ctx;
> >> >> > struct mutex context_lock;
> >> >> >  };
> >> >> >
> >> >> > diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >> >> > index f51f3393a194..262f79210283 100644
> >> >> > --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >> >> > +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> >> >> > @@ -99,6 +99,11 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
> >> >> > int in_fence_fd = exbuf->fence_fd;
> >> >> > int out_fence_fd = -1;
> >> >> > void *buf;
> >> >> > +   uint64_t fence_ctx;
> >> >> > +   uint32_t ring_idx;
> >> >> > +
> >> >> > +   fence_ctx = vgdev->fence_drv.context;
> >> >> > +   ring_idx = 0;
> >> >> >
> >> >> > if (vgdev->has_virgl_3d == false)
> >> >> > return -ENOSYS;
> >> >> > @@ -106,6 +111,17 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
> >> >> > if ((exbuf->flags & ~VIRTGPU_EXECBUF_FLAGS))
> >> >> > return -EINVAL;
> >> >> >
> >> >> > +   if ((exbuf->flags & VIRTGPU_EXECBUF_RING_IDX)) {
> >> >> > +   if (exbuf->ring_idx >= vfpriv->num_rings)
> >> >> > +   return -EINVAL;
> >> >> > +
> >> >> > +   if (!vfpriv->base_fence_ctx)
> >> >> > +   return -EINVAL;
> >> >> > +
> >> >> > +   fence_ctx = vfpriv->base_fence_ctx;
> >> >> > +   ring_idx = exbuf->ring_idx;
> >> >> > +   }
> >> >> > +
> >> >> > exbuf->fence_fd = -1;
> >> >> >
> >> >> > virtio_gpu_create_context(dev, file);
> >> >> > @@ -173,7 +189,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
> >> >> > goto out_memdup;
> >> >> > }
> >> >> >
> >> >> > -   out_fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
> >> >> > +   out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
> >> >> > if(!out_fence) {
> >> >> > ret = -ENOMEM;
> >> >> > goto out_unresv;
> >> >> > @@ -691,7 +707,7 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
> >> >> > return -EINVAL;
> >> >> >
> >> >> > /* Number of unique parameters supported at this time. */
> >> >> > -   if (num_params > 1)
> >> >> > +   if (num_params > 2)
> >> >> > return -EINVAL;
> >> >> >
> >> >> > ctx_set_params = memdup_user(u64_to_user_ptr(args->ctx_set_params),
> >> >> > @@ -731,6 +747,20 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
> >> >> >
> >> >> > vfpriv->context_init |= value;
> >> >> > break;
> >> >> > +   case VIRTGPU_CONTEXT_PARAM_NUM_RINGS:
> >> >> > +   if (vfpriv->base_fence_ctx) {
> >> >> > +   ret = -EINVAL;
> >> >> > +   goto out_unlock;
> >> >> > +   }
> >> >> > +
> >> >> > +   if (value > MAX_RINGS) {
> >> >> > +   ret = -EINVAL;
> >> >> > +   goto out_unlock;
> >> >> > +   }
> >> >> > +
> >> >> > +   vfpriv->base_fence_ctx = dma_fence_context_alloc(value);
> >> >> With multiple fence contexts, we should do something about implicit fencing.
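
For context, a hypothetical userspace sketch of how this uAPI would be
driven, assuming the additions in this series
(VIRTGPU_CONTEXT_PARAM_NUM_RINGS, VIRTGPU_EXECBUF_RING_IDX, the
ring_idx field) land as posted; names and error handling are
illustrative only:

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include "drm/virtgpu_drm.h"

static int init_ctx_with_rings(int fd, uint64_t num_rings)
{
	struct drm_virtgpu_context_set_param param = {
		.param = VIRTGPU_CONTEXT_PARAM_NUM_RINGS,
		.value = num_rings,          /* must be <= MAX_RINGS (64) */
	};
	struct drm_virtgpu_context_init init = {
		.num_params = 1,
		.ctx_set_params = (uintptr_t)&param,
	};

	return drmIoctl(fd, DRM_IOCTL_VIRTGPU_CONTEXT_INIT, &init);
}

static int submit_on_ring(int fd, void *cmds, uint32_t size, uint32_t ring_idx)
{
	struct drm_virtgpu_execbuffer eb;

	memset(&eb, 0, sizeof(eb));
	/* Without VIRTGPU_EXECBUF_RING_IDX the kernel falls back to
	 * ring 0 on the global fence context. */
	eb.flags = VIRTGPU_EXECBUF_RING_IDX;
	eb.command = (uintptr_t)cmds;
	eb.size = size;
	eb.ring_idx = ring_idx;      /* rejected with -EINVAL if >= num_rings */

	return drmIoctl(fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &eb);
}

With this, fences from different rings end up on different dma_fence
timelines rather than the single per-device one, which is the point of
the series.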

[virtio-dev] Re: Enabling hypervisor agnosticism for VirtIO backends

2021-09-14 Thread Alex Bennée


Stefano Stabellini  writes:

> On Mon, 6 Sep 2021, AKASHI Takahiro wrote:
>> > the second is how many context switches are involved in a transaction.
>> > Of course with all things there is a trade off. Things involving the
>> > very tightest latency would probably opt for a bare metal backend which
>> > I think would imply hypervisor knowledge in the backend binary.
>> 
>> In the configuration phase of a virtio device, latency won't be a big issue.
>> In device operations (i.e. reads/writes to block devices), if we can
>> resolve the 'mmap' issue, as Oleksandr is proposing right now, the only
>> issue is how efficiently we can deliver notifications to the opposite
>> side. Right? And this is a very common problem whatever approach we take.
>> 
>> Anyhow, if we do care about latency in my approach, most of the
>> virtio-proxy-related code can be re-implemented as just a stub (or shim?)
>> library, since the protocols are defined as RPCs.
>> In this case, however, we would lose the benefit of providing a "single
>> binary" BE.
>> (I know this is an arguable requirement, though.)
>
> In my experience, latency, performance, and security are far more
> important than providing a single binary.
>
> In my opinion, we should optimize for the best performance and security,
> then be practical on the topic of hypervisor agnosticism. For instance,
> a shared source with a small hypervisor-specific component, with one
> implementation of the small component for each hypervisor, would provide
> a good enough hypervisor abstraction. It is good to be hypervisor
> agnostic, but I wouldn't go to extra lengths to have a single binary.

I agree it shouldn't be a primary goal although a single binary working
with helpers to bridge the gap would make a cool demo. The real aim of
agnosticism is to avoid having multiple implementations of the backend
itself for no other reason than a change in hypervisor.
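
To make "a small hypervisor-specific component" concrete, a strawman of
the interface it could reduce to might look like the following (all
names are hypothetical, not from any existing project):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-hypervisor ops table; the backend core codes only
 * against this, and each hypervisor ships one implementation. */
struct hyp_ops {
	/* Map/unmap a guest-physical range into the backend's address space. */
	void *(*map_guest)(void *ctx, uint64_t gpa, size_t len);
	void  (*unmap_guest)(void *ctx, void *va, size_t len);
	/* Deliver a used-buffer notification to the guest (e.g. inject an IRQ). */
	int   (*notify_guest)(void *ctx, uint32_t queue_idx);
	/* Block until the guest kicks a queue. */
	int   (*wait_kick)(void *ctx, uint32_t queue_idx);
};

/* One instance per hypervisor, selected at build or run time, e.g.:
 *   extern const struct hyp_ops kvm_ioeventfd_ops;
 *   extern const struct hyp_ops xen_grant_ops;
 */

Everything above the table (virtqueue handling, the device model
itself) would stay shared source.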

> I cannot picture a case where a BE binary needs to be moved between
> different hypervisors and a recompilation is impossible (BE, not FE).
> Instead, I can definitely imagine detailed requirements on IRQ latency
> having to be lower than 10us or bandwidth higher than 500 MB/sec.
>
> Instead of virtio-proxy, my suggestion is to work together on a common
> project and common source with others interested in the same problem.
>
> I would pick something like kvmtool as a basis. It doesn't have to be
> kvmtool, and kvmtool specifically is GPL-licensed, which is
> unfortunate because it would help if the license was BSD-style for ease
> of integration with Zephyr and other RTOSes.

This does imply making some choices, especially the implementation
language. However, I feel that C is really the lowest common denominator
here, and I get the sense that people would rather avoid it if they could,
given the potential security implications of a bug-prone backend. This
is what is prompting interest in Rust.

> As long as the project is open to working together on multiple
> hypervisors and deployment models then it is fine. For instance, the
> shared source could be based on OpenAMP kvmtool [1] (the original
> kvmtool likely prefers to stay small and narrow-focused on KVM). OpenAMP
> kvmtool was created to add support for hypervisor-less virtio but they
> are very open to hypervisors too. It could be a good place to add a Xen
> implementation, a KVM fatqueue implementation, a Jailhouse
> implementation, etc. -- work together toward the common goal of a single
> BE source (not binary) supporting multiple different deployment models.
>
>
> [1] https://github.com/OpenAMP/kvmtool


-- 
Alex Bennée
