Re: [Freedreno] [PATCH v2 10/10] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-31 Thread Rob Clark
On Thu, Mar 31, 2022 at 12:41 PM Dmitry Osipenko wrote:
>
> On 3/31/22 22:02, Rob Clark wrote:
> > On Thu, Mar 31, 2022 at 11:52 AM Dmitry Osipenko wrote:
> >>
> >> ...
> >>> +/*
> >>> + * Get the requested iova but don't pin it.  Fails if the requested iova is
> >>> + * not available.  Doesn't need a put because iovas are currently valid for
> >>> + * the life of the object.
> >>> + *
> >>> + * Setting an iova of zero will clear the vma.
> >>> + */
> >>> +int msm_gem_set_iova(struct drm_gem_object *obj,
> >>> +  struct msm_gem_address_space *aspace, uint64_t iova)
> >>> +{
> >>> + int ret = 0;
> >>
> >> nit: No need to initialize the ret
> >
> > actually, we do
>
> Indeed, sorry :)
>
> ...
> >>>  int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
> >>>   struct msm_gem_address_space *aspace, uint64_t *iova,
> >>>   u64 range_start, u64 range_end);
> >> nit: There is an odd mix of uint64_t and u64 (and the like) in the MSM code
> >> :) The uint64_t variant shouldn't be used by kernel code in general and
> >> checkpatch should warn about it.
> >
> > one of many things that I disagree with checkpatch about ;-)
> >
> > I prefer standard types to custom ones.  I _kinda_ get the argument in
> > case of uapi (but IMHO that doesn't apply to how drm uapi headers are
> > used)
>
> I'd understand if it was all either uint64_t or u64, but the mix.. hm.

yeah, fair, we could be a bit more consistent

BR,
-R
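
[Editorial note: for context on the checkpatch nit above, a minimal sketch of
the type families in play -- not from the patch.  All three are layout-identical
64-bit unsigned integers on every architecture the kernel supports; the
disagreement is purely about which header owns the name:]

    #include <linux/types.h>  /* defines both u64 and __u64 for kernel code */

    u64 kernel_val;     /* what checkpatch prefers inside drivers/          */
    __u64 uapi_val;     /* what include/uapi/ headers must use, since they
                         * cannot assume userspace has <stdint.h> included  */
    uint64_t libc_val;  /* C99 <stdint.h> type that checkpatch warns about
                         * in kernel code, though it is ABI-identical       */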


Re: [Freedreno] [PATCH v2 10/10] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-31 Thread Dmitry Osipenko
On 3/31/22 22:02, Rob Clark wrote:
> On Thu, Mar 31, 2022 at 11:52 AM Dmitry Osipenko wrote:
>>
>> ...
>>> +/*
>>> + * Get the requested iova but don't pin it.  Fails if the requested iova is
>>> + * not available.  Doesn't need a put because iovas are currently valid for
>>> + * the life of the object.
>>> + *
>>> + * Setting an iova of zero will clear the vma.
>>> + */
>>> +int msm_gem_set_iova(struct drm_gem_object *obj,
>>> +  struct msm_gem_address_space *aspace, uint64_t iova)
>>> +{
>>> + int ret = 0;
>>
>> nit: No need to initialize the ret
> 
> actually, we do

Indeed, sorry :)

...
>>>  int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
>>>   struct msm_gem_address_space *aspace, uint64_t *iova,
>>>   u64 range_start, u64 range_end);
>> nit: There is an odd mix of uint64_t and u64 (and the like) in the MSM code
>> :) The uint64_t variant shouldn't be used by kernel code in general and
>> checkpatch should warn about it.
> 
> one of many things that I disagree with checkpatch about ;-)
> 
> I prefer standard types to custom ones.  I _kinda_ get the argument in
> case of uapi (but IMHO that doesn't apply to how drm uapi headers are
> used)

I'd understand if it was all either uint64_t or u64, but the mix.. hm.


Re: [Freedreno] [PATCH v2 10/10] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-31 Thread Rob Clark
On Thu, Mar 31, 2022 at 11:52 AM Dmitry Osipenko wrote:
>
> ...
> > +/*
> > + * Get the requested iova but don't pin it.  Fails if the requested iova is
> > + * not available.  Doesn't need a put because iovas are currently valid for
> > + * the life of the object.
> > + *
> > + * Setting an iova of zero will clear the vma.
> > + */
> > +int msm_gem_set_iova(struct drm_gem_object *obj,
> > +  struct msm_gem_address_space *aspace, uint64_t iova)
> > +{
> > + int ret = 0;
>
> nit: No need to initialize the ret

actually, we do

> > + msm_gem_lock(obj);
> > + if (!iova) {
> > + ret = clear_iova(obj, aspace);
> > + } else {
> > + struct msm_gem_vma *vma;
> > + vma = get_vma_locked(obj, aspace, iova, iova + obj->size);
> > + if (IS_ERR(vma)) {
> > + ret = PTR_ERR(vma);
> > + } else if (GEM_WARN_ON(vma->iova != iova)) {
> > + clear_iova(obj, aspace);
> > + ret = -ENOSPC;
>
> The (vma->iova != iova) means that vma is already set, but to a
> different address. Is -ENOSPC really appropriate here? -EBUSY or -EINVAL
> looks more natural to me.

yeah, -EBUSY is better

> > + }
> > + }
> > + msm_gem_unlock(obj);
> > +
> > + return ret;
> > +}
> > +
> >  /*
> >   * Unpin a iova by updating the reference counts. The memory isn't actually
> >   * purged until something else (shrinker, mm_notifier, destroy, etc) decides
> > diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
> > index 38d66e1248b1..efa2e5c19f1e 100644
> > --- a/drivers/gpu/drm/msm/msm_gem.h
> > +++ b/drivers/gpu/drm/msm/msm_gem.h
> > @@ -38,6 +38,12 @@ struct msm_gem_address_space {
> >
> >   /* @faults: the number of GPU hangs associated with this address space */
> >   int faults;
> > +
> > + /** @va_start: lowest possible address to allocate */
> > + uint64_t va_start;
> > +
> > + /** @va_size: the size of the address space (in bytes) */
> > + uint64_t va_size;
> >  };
> >
> >  struct msm_gem_address_space *
> > @@ -144,6 +150,8 @@ struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
> >  struct msm_gem_address_space *aspace);
> >  int msm_gem_get_iova(struct drm_gem_object *obj,
> >   struct msm_gem_address_space *aspace, uint64_t *iova);
> > +int msm_gem_set_iova(struct drm_gem_object *obj,
> > + struct msm_gem_address_space *aspace, uint64_t iova);
> >  int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
> >   struct msm_gem_address_space *aspace, uint64_t *iova,
> >   u64 range_start, u64 range_end);
> nit: There is an odd mix of uint64_t and u64 (and the like) in the MSM code
> :) The uint64_t variant shouldn't be used by kernel code in general and
> checkpatch should warn about it.

one of many things that I disagree with checkpatch about ;-)

I prefer standard types to custom ones.  I _kinda_ get the argument in
case of uapi (but IMHO that doesn't apply to how drm uapi headers are
used)

BR,
-R
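
[Editorial note: a condensed restatement of the function under discussion --
not the patch itself -- spelling out why the `ret = 0` initialization is
needed: on the fully-successful non-zero-iova path nothing ever assigns ret.
The -EBUSY reflects the change agreed above:]

    int ret = 0;                    /* success path below never writes ret */

    msm_gem_lock(obj);
    if (!iova) {
            ret = clear_iova(obj, aspace);           /* always assigns ret */
    } else {
            struct msm_gem_vma *vma =
                    get_vma_locked(obj, aspace, iova, iova + obj->size);
            if (IS_ERR(vma)) {
                    ret = PTR_ERR(vma);              /* error: assigns ret */
            } else if (GEM_WARN_ON(vma->iova != iova)) {
                    clear_iova(obj, aspace);
                    ret = -EBUSY;   /* vma already at a different address  */
            }
            /* else: vma sits at the requested iova -> ret stays 0 */
    }
    msm_gem_unlock(obj);
    return ret;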


Re: [Freedreno] [PATCH v2 10/10] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-31 Thread Dmitry Osipenko
...
> +/*
> + * Get the requested iova but don't pin it.  Fails if the requested iova is
> + * not available.  Doesn't need a put because iovas are currently valid for
> + * the life of the object.
> + *
> + * Setting an iova of zero will clear the vma.
> + */
> +int msm_gem_set_iova(struct drm_gem_object *obj,
> +  struct msm_gem_address_space *aspace, uint64_t iova)
> +{
> + int ret = 0;

nit: No need to initialize the ret

> + msm_gem_lock(obj);
> + if (!iova) {
> + ret = clear_iova(obj, aspace);
> + } else {
> + struct msm_gem_vma *vma;
> + vma = get_vma_locked(obj, aspace, iova, iova + obj->size);
> + if (IS_ERR(vma)) {
> + ret = PTR_ERR(vma);
> + } else if (GEM_WARN_ON(vma->iova != iova)) {
> + clear_iova(obj, aspace);
> + ret = -ENOSPC;

The (vma->iova != iova) means that vma is already set, but to a
different address. Is -ENOSPC really appropriate here? -EBUSY or -EINVAL
looks more natural to me.

> + }
> + }
> + msm_gem_unlock(obj);
> +
> + return ret;
> +}
> +
>  /*
>   * Unpin a iova by updating the reference counts. The memory isn't actually
>   * purged until something else (shrinker, mm_notifier, destroy, etc) decides
> diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
> index 38d66e1248b1..efa2e5c19f1e 100644
> --- a/drivers/gpu/drm/msm/msm_gem.h
> +++ b/drivers/gpu/drm/msm/msm_gem.h
> @@ -38,6 +38,12 @@ struct msm_gem_address_space {
>  
>   /* @faults: the number of GPU hangs associated with this address space */
>   int faults;
> +
> + /** @va_start: lowest possible address to allocate */
> + uint64_t va_start;
> +
> + /** @va_size: the size of the address space (in bytes) */
> + uint64_t va_size;
>  };
>  
>  struct msm_gem_address_space *
> @@ -144,6 +150,8 @@ struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
>  struct msm_gem_address_space *aspace);
>  int msm_gem_get_iova(struct drm_gem_object *obj,
>   struct msm_gem_address_space *aspace, uint64_t *iova);
> +int msm_gem_set_iova(struct drm_gem_object *obj,
> + struct msm_gem_address_space *aspace, uint64_t iova);
>  int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
>   struct msm_gem_address_space *aspace, uint64_t *iova,
>   u64 range_start, u64 range_end);
nit: There is an odd mix of uint64_t and u64 (and the like) in the MSM code
:) The uint64_t variant shouldn't be used by kernel code in general and
checkpatch should warn about it.
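
[Editorial note: for reference on the error-code question raised above, the
conventional meanings of the three candidates, per general kernel convention
rather than anything driver-specific:

    -ENOSPC   no space left in the allocation range / on the device
    -EBUSY    the resource exists but is already in use or already set
    -EINVAL   the caller passed an invalid argument]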


[Freedreno] [PATCH v2 10/10] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-30 Thread Rob Clark
From: Rob Clark 

The motivation at this point is mainly the native userspace mesa driver in a
VM guest.  The one remaining synchronous "hotpath" is buffer allocation,
because the guest needs to wait to know the bo's iova before it can start
emitting cmdstream/state that references the new bo.  By allocating the
iova in guest userspace, we no longer need to wait for a response from
the host, but can just rely on the allocation request being processed
before the cmdstream submission.  Allocation failures (OoM, etc) would
just be treated as context-lost (ie. GL_GUILTY_CONTEXT_RESET), or
subsequent allocations (or readpix, etc) can raise GL_OUT_OF_MEMORY.

Signed-off-by: Rob Clark 
Reviewed-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++
 drivers/gpu/drm/msm/msm_drv.c   | 21 +++
 drivers/gpu/drm/msm/msm_gem.c   | 48 +
 drivers/gpu/drm/msm/msm_gem.h   |  8 +
 drivers/gpu/drm/msm/msm_gem_vma.c   |  2 ++
 include/uapi/drm/msm_drm.h  |  3 ++
 6 files changed, 92 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 6385ab06632f..4caae0229518 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -281,6 +281,16 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
case MSM_PARAM_SUSPENDS:
*value = gpu->suspend_count;
return 0;
+   case MSM_PARAM_VA_START:
+   if (ctx->aspace == gpu->aspace)
+   return -EINVAL;
+   *value = ctx->aspace->va_start;
+   return 0;
+   case MSM_PARAM_VA_SIZE:
+   if (ctx->aspace == gpu->aspace)
+   return -EINVAL;
+   *value = ctx->aspace->va_size;
+   return 0;
default:
DBG("%s: invalid param: %u", gpu->name, param);
return -EINVAL;
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index a5eed5738ac8..45523b4123eb 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -719,6 +719,23 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
return msm_gem_get_iova(obj, ctx->aspace, iova);
 }
 
+static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
+   struct drm_file *file, struct drm_gem_object *obj,
+   uint64_t iova)
+{
+   struct msm_drm_private *priv = dev->dev_private;
+   struct msm_file_private *ctx = file->driver_priv;
+
+   if (!priv->gpu)
+   return -EINVAL;
+
+   /* Only supported if per-process address space is supported: */
+   if (priv->gpu->aspace == ctx->aspace)
+   return -EOPNOTSUPP;
+
+   return msm_gem_set_iova(obj, ctx->aspace, iova);
+}
+
 static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
struct drm_file *file)
 {
@@ -733,6 +750,7 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
switch (args->info) {
case MSM_INFO_GET_OFFSET:
case MSM_INFO_GET_IOVA:
+   case MSM_INFO_SET_IOVA:
/* value returned as immediate, not pointer, so len==0: */
if (args->len)
return -EINVAL;
@@ -757,6 +775,9 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
case MSM_INFO_GET_IOVA:
ret = msm_ioctl_gem_info_iova(dev, file, obj, &args->value);
break;
+   case MSM_INFO_SET_IOVA:
+   ret = msm_ioctl_gem_info_set_iova(dev, file, obj, args->value);
+   break;
case MSM_INFO_SET_NAME:
/* length check should leave room for terminating null: */
if (args->len >= sizeof(msm_obj->name)) {
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index bf4af17e2f1e..83ded2b7154e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -525,6 +525,54 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
return ret;
 }
 
+static int clear_iova(struct drm_gem_object *obj,
+ struct msm_gem_address_space *aspace)
+{
+   struct msm_gem_vma *vma = lookup_vma(obj, aspace);
+
+   if (!vma)
+   return 0;
+
+   if (msm_gem_vma_inuse(vma))
+   return -EBUSY;
+
+   msm_gem_purge_vma(vma->aspace, vma);
+   msm_gem_close_vma(vma->aspace, vma);
+   del_vma(vma);
+
+   return 0;
+}
+
+/*
+ * Get the requested iova but don't pin it.  Fails if the requested iova is
+ * not available.  Doesn't need a put because iovas are currently valid for
+ * the life of the object.
+ *
+ * Setting an iova of zero will clear the vma.
+ */
+int msm_gem_set_iova(struct drm_gem_object *obj,
+struct msm_gem_address_space *aspace, uint64_t iova)
+{
+   int ret = 0;
+
+   msm_gem_lock(obj);
+   if (!iova) {
+           ret = clear_iova(obj, aspace);
+   } else {
+           struct msm_gem_vma *vma;
+           vma = get_vma_locked(obj, aspace, iova, iova + obj->size);
+           if (IS_ERR(vma)) {
+                   ret = PTR_ERR(vma);
+           } else if (GEM_WARN_ON(vma->iova != iova)) {
+                   clear_iova(obj, aspace);
+                   ret = -ENOSPC;
+           }
+   }
+   msm_gem_unlock(obj);
+
+   return ret;
+}
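
[Editorial note: the archived copy of this patch is truncated above; the
msm_gem_set_iova() body was restored from the verbatim quotes earlier in the
thread, while the remaining msm_gem.h/msm_gem_vma.c/msm_drm.h hunks from the
diffstat are not recoverable here.  To make the intended guest-side flow
concrete, a sketch of how userspace might drive the new interface.  It is
illustrative only: it assumes the MSM_PARAM_VA_START/MSM_PARAM_VA_SIZE params
and the MSM_INFO_SET_IOVA flag added by this series (from the updated
msm_drm.h uapi header), and alloc-side policy is left to the caller:]

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/msm_drm.h>       /* updated uapi header from this series */

    /* Query the per-process VA range once at device init: */
    static uint64_t get_param(int fd, uint32_t param)
    {
            struct drm_msm_param req = {
                    .pipe  = MSM_PIPE_3D0,
                    .param = param,        /* MSM_PARAM_VA_START/VA_SIZE   */
            };

            ioctl(fd, DRM_IOCTL_MSM_GET_PARAM, &req); /* err check omitted */
            return req.value;
    }

    /* Assign an iova chosen by a userspace allocator to a freshly
     * created BO, instead of waiting for the host to pick one: */
    static int set_iova(int fd, uint32_t handle, uint64_t iova)
    {
            struct drm_msm_gem_info req = {
                    .handle = handle,
                    .info   = MSM_INFO_SET_IOVA,
                    .value  = iova,        /* 0 would clear the vma        */
            };

            return ioctl(fd, DRM_IOCTL_MSM_GEM_INFO, &req);
    }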