Re: [PATCH 00/13] drm: Fix reservation locking for pin/unpin and console

2024-03-05 Thread Dmitry Osipenko
On 2/27/24 13:14, Thomas Zimmermann wrote:
> Dma-buf locking semantics require the caller of pin and unpin to hold
> the buffer's reservation lock. Fix DRM to adhere to the specs. This
> enables us to fix the locking in DRM's console emulation. Similar changes
> for vmap and mmap have been posted at [1] and [2].
> 
> Most DRM drivers and memory managers acquire the buffer object's
> reservation lock within their GEM pin and unpin callbacks. This
> violates dma-buf locking semantics. We get away with it because PRIME
> does not provide pin/unpin, but attach/detach, for which the locking
> semantics is correct.
> 
> Patches 1 to 8 rework DRM GEM code in various implementations to
> acquire the reservation lock when entering the pin and unpin callbacks.
> This prepares them for the next patch. Drivers that are not affected
> by these patches either don't acquire the reservation lock (amdgpu)
> or don't need preparation (loongson).
> 
> Patch 9 moves reservation locking from the GEM pin/unpin callbacks
> into drm_gem_pin() and drm_gem_unpin(). As PRIME uses these functions
> internally it still gets the reservation lock.
> 
> With the updated GEM callbacks, the rest of the patchset fixes the
> fbdev emulation's buffer locking. Fbdev emulation needs to keep its
> GEM buffer object in place while updating its content. This required
> implicit pinning, and apparently amdgpu didn't do this at all.
> 
> Patch 10 introduces drm_client_buffer_vmap_local() and _vunmap_local().
> The former function maps a GEM buffer into the kernel's address space
> with regular vmap operations, but keeps holding the reservation lock.
> The _vunmap_local() helper undoes the vmap and releases the lock. The
> updated GEM callbacks make this possible. Between the two calls, the
> fbdev emulation can update the buffer content without having the buffer
> moved or evicted. Update fbdev-generic to use the vmap_local helpers,
> which fixes amdgpu. The idea of adding a "local vmap" has previously been
> attempted at [3] in a different form.
> 
> Patch 11 adds implicit pinning to the DRM client's regular vmap
> helper so that long-term vmap'ed buffers won't be evicted. This only
> affects fbdev-dma, and since GEM DMA helpers don't require pinning,
> there are no practical changes.
> 
> Patches 12 and 13 remove implicit pinning from the vmap and vunmap
> operations in gem-vram and qxl. These pin operations are not supposed
> to be part of vmap code, but were required to keep the buffers in place
> for fbdev emulation. With the conversion of fbdev-generic to the
> vmap_local helpers, that code can finally be removed.
> 
> Tested with amdgpu, nouveau, radeon, simpledrm and vc4.
> 
> [1] https://patchwork.freedesktop.org/series/106371/
> [2] https://patchwork.freedesktop.org/series/116001/
> [3] https://patchwork.freedesktop.org/series/84732/
> 
> Thomas Zimmermann (13):
>   drm/gem-shmem: Acquire reservation lock in GEM pin/unpin callbacks
>   drm/gem-vram: Acquire reservation lock in GEM pin/unpin callbacks
>   drm/msm: Provide msm_gem_get_pages_locked()
>   drm/msm: Acquire reservation lock in GEM pin/unpin callback
>   drm/nouveau: Provide nouveau_bo_{pin,unpin}_locked()
>   drm/nouveau: Acquire reservation lock in GEM pin/unpin callbacks
>   drm/qxl: Provide qxl_bo_{pin,unpin}_locked()
>   drm/qxl: Acquire reservation lock in GEM pin/unpin callbacks
>   drm/gem: Acquire reservation lock in drm_gem_{pin/unpin}()
>   drm/fbdev-generic: Fix locking with drm_client_buffer_vmap_local()
>   drm/client: Pin vmap'ed GEM buffers
>   drm/gem-vram: Do not pin buffer objects for vmap
>   drm/qxl: Do not pin buffer objects for vmap

The patches look good. I gave them a run with fbtest on virtio-gpu; no
problems spotted.

Reviewed-by: Dmitry Osipenko 
Tested-by: Dmitry Osipenko  # virtio-gpu

-- 
Best regards,
Dmitry
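
To illustrate what patch 9 describes, here is a minimal sketch (not the
actual patch) of drm_gem_pin() taking the reservation lock around the
driver's pin callback:

	int drm_gem_pin(struct drm_gem_object *obj)
	{
		int ret = 0;

		dma_resv_lock(obj->resv, NULL);
		if (obj->funcs->pin)
			ret = obj->funcs->pin(obj);
		dma_resv_unlock(obj->resv);

		return ret;
	}

With the lock taken here, the per-driver pin/unpin callbacks reworked in
patches 1 to 8 only need to assert that the lock is held rather than
acquiring it themselves, and PRIME keeps correct dma-buf semantics since
it calls these functions internally.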



Re: [PATCH 00/13] drm: Fix reservation locking for pin/unpin and console

2024-03-01 Thread Dmitry Osipenko
On 2/28/24 11:19, Thomas Zimmermann wrote:
> Hi
> 
> On 27.02.24 19:14, Dmitry Osipenko wrote:
>> Hello,
>>
>> Thank you for the patches!
>>
>> On 2/27/24 13:14, Thomas Zimmermann wrote:
>>> Dma-buf locking semantics require the caller of pin and unpin to hold
>>> the buffer's reservation lock. Fix DRM to adhere to the specs. This
>>> enables us to fix the locking in DRM's console emulation. Similar changes
>>> for vmap and mmap have been posted at [1] and [2].
>>>
>>> Most DRM drivers and memory managers acquire the buffer object's
>>> reservation lock within their GEM pin and unpin callbacks. This
>>> violates dma-buf locking semantics. We get away with it because PRIME
>>> does not provide pin/unpin, but attach/detach, for which the locking
>>> semantics is correct.
>>>
>>> Patches 1 to 8 rework DRM GEM code in various implementations to
>>> acquire the reservation lock when entering the pin and unpin callbacks.
>>> This prepares them for the next patch. Drivers that are not affected
>>> by these patches either don't acquire the reservation lock (amdgpu)
>>> or don't need preparation (loongson).
>>>
>>> Patch 9 moves reservation locking from the GEM pin/unpin callbacks
>>> into drm_gem_pin() and drm_gem_unpin(). As PRIME uses these functions
>>> internally it still gets the reservation lock.
>>>
>>> With the updated GEM callbacks, the rest of the patchset fixes the
>>> fbdev emulation's buffer locking. Fbdev emulation needs to keep its
>>> GEM buffer object in place while updating its content. This required
>>> implicit pinning, and apparently amdgpu didn't do this at all.
>>>
>>> Patch 10 introduces drm_client_buffer_vmap_local() and _vunmap_local().
>>> The former function maps a GEM buffer into the kernel's address space
>>> with regular vmap operations, but keeps holding the reservation lock.
>>> The _vunmap_local() helper undoes the vmap and releases the lock. The
>>> updated GEM callbacks make this possible. Between the two calls, the
>>> fbdev emulation can update the buffer content without having the buffer
>>> moved or evicted. Update fbdev-generic to use the vmap_local helpers,
>>> which fixes amdgpu. The idea of adding a "local vmap" has previously been
>>> attempted at [3] in a different form.
>>>
>>> Patch 11 adds implicit pinning to the DRM client's regular vmap
>>> helper so that long-term vmap'ed buffers won't be evicted. This only
>>> affects fbdev-dma, and since GEM DMA helpers don't require pinning,
>>> there are no practical changes.
>>>
>>> Patches 12 and 13 remove implicit pinning from the vmap and vunmap
>>> operations in gem-vram and qxl. These pin operations are not supposed
>>> to be part of vmap code, but were required to keep the buffers in place
>>> for fbdev emulation. With the conversion of fbdev-generic to the
>>> vmap_local helpers, that code can finally be removed.
>> Isn't it common behaviour for all DRM drivers to implicitly pin a BO
>> while it's vmapped? I was sure it should be common /o\
> 
> That's what I originally thought as well, but the intention is for pin
> and vmap to be distinct operations. So far each driver has been
> different, as you probably know best from your vmap refactoring. :)
> 
>>
>> Why would you want to kmap a BO that isn't pinned?
> 
> Pinning places the buffer object for the GPU. As a side effect, the
> buffer is then kept in place, which enables vmap. So pinning only makes
> sense for buffer objects that never move (shmem, dma). That's what patch
> 11 is for.
> 
>>
>> Shouldn't TTM's vmap() be changed to do the pinning?
> 
> I don't think so. One problem is that pinning needs a memory area (vram,
> GTT, system ram, etc) specified, which vmap simply doesn't know about.
> That has been a problem for fbdev emulation at some point. Our fbdev
> code tried to pin as part of vmap, but chose the wrong area and suddenly
> the GPU could not see the buffer object any longer.  So the next best
> thing for vmap was to pin the buffer object wherever it is currently
> located. That is what gem-vram and qxl did so far. And of course, the
> fbdev code needs to unpin and vunmap the buffer object quickly, so that
> it can be relocated if the GPU needs it.  Hence, the vmap_local
> interface removes such short-term pinning in favor of holding the
> reservation lock.
> 
>>
>> I missed that TTM doesn't pin the BO on vmap() and am now surprised to
>> see it. It should be a rather serious problem requiring backporting of
>> the fixes, but I don't see Fixes tags on the patches (?)
> 
> No chance TBH. The old code has worked for years and backporting all
> this would require your vmap patches at a minimum.
> 
> Except maybe for amdgpu. It uses fbdev-generic, which requires pinning,
> but amdgpu doesn't pin. That looks fishy, but I'm not aware of any bug
> reports either. I guess a quick workaround could fix older amdgpu if
> necessary.

Thanks! I'll make another pass on the patches on Monday.

-- 
Best regards,
Dmitry
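
To illustrate the vmap_local flow described above, here is a sketch of
how the fbdev damage worker could use the new helpers (names per
patch 10; the flush details are made up for the example):

	struct iosys_map map;
	int ret;

	/* maps the buffer and keeps holding the reservation lock,
	 * so the BO can neither move nor be evicted */
	ret = drm_client_buffer_vmap_local(buffer, &map);
	if (ret)
		return ret;

	/* update the buffer content, e.g. flush the shadow fb */
	iosys_map_memcpy_to(&map, 0, shadow, size);

	/* undoes the vmap and releases the reservation lock */
	drm_client_buffer_vunmap_local(buffer);

The short lock-hold time replaces the short-term pinning that gem-vram
and qxl used to do, and unlike pinning it does not require choosing a
memory area.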



Re: [PATCH 00/13] drm: Fix reservation locking for pin/unpin and console

2024-02-27 Thread Dmitry Osipenko
Hello,

Thank you for the patches!

On 2/27/24 13:14, Thomas Zimmermann wrote:
> Dma-buf locking semantics require the caller of pin and unpin to hold
> the buffer's reservation lock. Fix DRM to adhere to the specs. This
> enables us to fix the locking in DRM's console emulation. Similar changes
> for vmap and mmap have been posted at [1] and [2].
> 
> Most DRM drivers and memory managers acquire the buffer object's
> reservation lock within their GEM pin and unpin callbacks. This
> violates dma-buf locking semantics. We get away with it because PRIME
> does not provide pin/unpin, but attach/detach, for which the locking
> semantics is correct.
> 
> Patches 1 to 8 rework DRM GEM code in various implementations to
> acquire the reservation lock when entering the pin and unpin callbacks.
> This prepares them for the next patch. Drivers that are not affected
> by these patches either don't acquire the reservation lock (amdgpu)
> or don't need preparation (loongson).
> 
> Patch 9 moves reservation locking from the GEM pin/unpin callbacks
> into drm_gem_pin() and drm_gem_unpin(). As PRIME uses these functions
> internally it still gets the reservation lock.
> 
> With the updated GEM callbacks, the rest of the patchset fixes the
> fbdev emulation's buffer locking. Fbdev emulation needs to keep its
> GEM buffer object in place while updating its content. This required
> implicit pinning, and apparently amdgpu didn't do this at all.
> 
> Patch 10 introduces drm_client_buffer_vmap_local() and _vunmap_local().
> The former function maps a GEM buffer into the kernel's address space
> with regular vmap operations, but keeps holding the reservation lock.
> The _vunmap_local() helper undoes the vmap and releases the lock. The
> updated GEM callbacks make this possible. Between the two calls, the
> fbdev emulation can update the buffer content without having the buffer
> moved or evicted. Update fbdev-generic to use the vmap_local helpers,
> which fixes amdgpu. The idea of adding a "local vmap" has previously been
> attempted at [3] in a different form.
> 
> Patch 11 adds implicit pinning to the DRM client's regular vmap
> helper so that long-term vmap'ed buffers won't be evicted. This only
> affects fbdev-dma, and since GEM DMA helpers don't require pinning,
> there are no practical changes.
> 
> Patches 12 and 13 remove implicit pinning from the vmap and vunmap
> operations in gem-vram and qxl. These pin operations are not supposed
> to be part of vmap code, but were required to keep the buffers in place
> for fbdev emulation. With the conversion of fbdev-generic to the
> vmap_local helpers, that code can finally be removed.

Isn't it common behaviour for all DRM drivers to implicitly pin a BO
while it's vmapped? I was sure it should be common /o\

Why would you want to kmap a BO that isn't pinned?

Shouldn't TTM's vmap() be changed to do the pinning?

I missed that TTM doesn't pin the BO on vmap() and am now surprised to
see it. It should be a rather serious problem requiring backporting of
the fixes, but I don't see Fixes tags on the patches (?)

-- 
Best regards,
Dmitry



Re: [PATCH v2] drm/msm/gem: Fix double resv lock aquire

2024-01-31 Thread Dmitry Osipenko
On 1/31/24 04:15, Rob Clark wrote:
> From: Rob Clark 
> 
> Since commit 56e5abba8c3e ("dma-buf: Add unlocked variant of vmapping
> functions"), the resv lock is already held in the prime vmap path, so
> don't try to grab it again.
> 
> v2: This applies to vunmap path as well
> 
> Fixes: 56e5abba8c3e ("dma-buf: Add unlocked variant of vmapping functions")
> Signed-off-by: Rob Clark 

Nit: the offending commit should be 79e2cf2e7a19 ("drm/gem: Take
reservation lock for vmap/vunmap operations")

> ---
>  drivers/gpu/drm/msm/msm_gem_prime.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
> index 5f68e31a3e4e..0915f3b68752 100644
> --- a/drivers/gpu/drm/msm/msm_gem_prime.c
> +++ b/drivers/gpu/drm/msm/msm_gem_prime.c
> @@ -26,7 +26,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
>  {
>   void *vaddr;
>  
> - vaddr = msm_gem_get_vaddr(obj);
> + vaddr = msm_gem_get_vaddr_locked(obj);
>   if (IS_ERR(vaddr))
>   return PTR_ERR(vaddr);
>   iosys_map_set_vaddr(map, vaddr);
> @@ -36,7 +36,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
>  
>  void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
>  {
> - msm_gem_put_vaddr(obj);
> + msm_gem_put_vaddr_locked(obj);
>  }
>  
>  struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,

Looks good otherwise

-- 
Best regards,
Dmitry
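
For context, the unlocked-variant pattern from the dma-buf commit works
roughly like this (a sketch, not the verbatim implementation): the
_unlocked wrapper takes the reservation lock itself before calling into
the driver, which is why code reached through it must use the _locked
accessors:

	int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
	{
		int ret;

		dma_resv_lock(dmabuf->resv, NULL);
		ret = dma_buf_vmap(dmabuf, map);	/* ends up in the driver's vmap callback */
		dma_resv_unlock(dmabuf->resv);

		return ret;
	}

Hence msm_gem_prime_vmap() already runs with the resv lock held and must
not take it again, which is exactly what the switch to
msm_gem_get_vaddr_locked() expresses.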



Re: [Freedreno] [PATCH -next 5/7] drm/virtio: Remove an unnecessary NULL value

2023-08-12 Thread Dmitry Osipenko
On 8/9/23 06:44, Ruan Jinjie wrote:
> The NULL initialization of the pointer assigned by kzalloc() first is
> not necessary, because if the kzalloc() failed, the pointer will be
> assigned NULL, otherwise it works as usual. so remove it.
> 
> Signed-off-by: Ruan Jinjie 
> ---
>  drivers/gpu/drm/virtio/virtgpu_submit.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
> index 3c00135ead45..82563dbec2ab 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_submit.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
> @@ -274,7 +274,7 @@ static int virtio_gpu_fence_event_create(struct drm_device *dev,
>struct virtio_gpu_fence *fence,
>u32 ring_idx)
>  {
> - struct virtio_gpu_fence_event *e = NULL;
> + struct virtio_gpu_fence_event *e;
>   int ret;
>  
>   e = kzalloc(sizeof(*e), GFP_KERNEL);

Reviewed-by: Dmitry Osipenko 

-- 
Best regards,
Dmitry



Re: [Freedreno] [PATCH] drm/msm: Fix potential invalid ptr free

2023-02-16 Thread Dmitry Osipenko
On 2/16/23 02:50, Rob Clark wrote:
> From: Rob Clark 
> 
> The error path cleanup expects that chain and syncobj are either NULL or
> valid pointers.  But post_deps was not allocated with __GFP_ZERO.
> 
> Fixes: ab723b7a992a ("drm/msm: Add syncobj support.")
> Signed-off-by: Rob Clark 
> ---
>  drivers/gpu/drm/msm/msm_gem_submit.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> index 6503220e5a4b..e4d13540300e 100644
> --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> @@ -640,8 +640,8 @@ static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev,
>   int ret = 0;
>   uint32_t i, j;
>  
> - post_deps = kmalloc_array(nr_syncobjs, sizeof(*post_deps),
> -   GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
> + post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps),
> + GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
>   if (!post_deps)
>   return ERR_PTR(-ENOMEM);
>  
> @@ -656,7 +656,6 @@ static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev,
>   }
>  
>   post_deps[i].point = syncobj_desc.point;
> - post_deps[i].chain = NULL;
>  
>   if (syncobj_desc.flags) {
>   ret = -EINVAL;

Good catch!

Reviewed-by: Dmitry Osipenko 

-- 
Best regards,
Dmitry
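
For context, the pattern the fix relies on, as a sketch with
illustrative names (not the msm code): a zeroed allocation lets the
error path treat every array entry uniformly, because each pointer it
inspects is guaranteed to be either NULL or valid, and kfree(NULL) is a
no-op.

	post_deps = kcalloc(n, sizeof(*post_deps), GFP_KERNEL);
	if (!post_deps)
		return ERR_PTR(-ENOMEM);

	for (i = 0; i < n; i++) {
		ret = init_entry(&post_deps[i]);	/* may fail halfway through */
		if (ret)
			goto err;
	}
	return post_deps;

err:
	for (j = 0; j < n; j++)
		kfree(post_deps[j].chain);	/* NULL for never-initialized entries */
	kfree(post_deps);
	return ERR_PTR(ret);

With kmalloc_array() the never-initialized entries would contain
garbage, and the cleanup loop would free invalid pointers.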



Re: [Freedreno] [PATCH v3 09/15] drm/gem: Add LRU/shrinker helper

2022-08-01 Thread Dmitry Osipenko
On 8/1/22 23:42, Rob Clark wrote:
> On Mon, Aug 1, 2022 at 1:26 PM Dmitry Osipenko
>  wrote:
>>
>> On 8/1/22 23:13, Dmitry Osipenko wrote:
>>> On 8/1/22 23:11, Dmitry Osipenko wrote:
>>>> On 8/1/22 23:00, Rob Clark wrote:
>>>>> On Mon, Aug 1, 2022 at 12:41 PM Dmitry Osipenko
>>>>>  wrote:
>>>>>>
>>>>>> On 7/26/22 20:50, Rob Clark wrote:
>>>>>>> +/**
>>>>>>> + * drm_gem_lru_remove - remove object from whatever LRU it is in
>>>>>>> + *
>>>>>>> + * If the object is currently in any LRU, remove it.
>>>>>>> + *
>>>>>>> + * @obj: The GEM object to remove from current LRU
>>>>>>> + */
>>>>>>> +void
>>>>>>> +drm_gem_lru_remove(struct drm_gem_object *obj)
>>>>>>> +{
>>>>>>> + struct drm_gem_lru *lru = obj->lru;
>>>>>>> +
>>>>>>> + if (!lru)
>>>>>>> + return;
>>>>>>> +
>>>>>>> + mutex_lock(lru->lock);
>>>>>>> + lru_remove(obj);
>>>>>>> + mutex_unlock(lru->lock);
>>>>>>> +}
>>>>>>> +EXPORT_SYMBOL(drm_gem_lru_remove);
>>>>>>
>>>>>> I made a preliminary port of the DRM-SHMEM shrinker on top of the
>>>>>> latest version of the dma-buf locking convention and your LRU patches. It
>>>>>> all works well, the only thing that is missing for the DRM-SHMEM
>>>>>> shrinker is the drm_gem_lru_remove_locked().
>>>>>>
>>>>>> What about adding a locked variant of drm_gem_lru_remove()?
>>>>>
>>>>> Sounds fine to me.. the only reason it didn't exist yet was because it
>>>>> wasn't needed yet..
>>>>
>>>> There is no use for drm_gem_lru_move_tail_locked() either; you're
>>>> not using it in the MSM driver. Hence I thought it might be good to add
>>>> drm_gem_lru_remove_locked(), or maybe
>>>> drm_gem_lru_move_tail_locked() should be dropped instead?
>>>>
>>>>> I can respin w/ an addition of a _locked() version, or you can add it
>>>>> on top in your patchset.  Either is fine by me
>>>>
>>>> Either option is fine by me too. If you keep the unused
>>>> drm_gem_lru_move_tail_locked(), then it will be nice to add
>>>> drm_gem_lru_remove_locked().
>>>>
>>>
>>> The drm_gem_lru_move_tail_locked() will be needed by the DRM-SHMEM
>>> shrinker, BTW.
>>
>> On the other hand, I see now that the DRM-SHMEM shrinker can use the
>> unlocked versions only. Hence neither drm_gem_lru_move_tail_locked() nor
>> drm_gem_lru_remove_locked() is needed.
> 
> drm_gem_lru_move_tail_locked() is used internally, but I guess it
> could be made static since there ended up not being external users
> (yet?)

Making it static will be good.

> I could see _move_tail_locked() being useful for a driver that wanted
> to bulk update a bunch of GEM objs, for ex. all the bo's associated
> with a submit/job.

At minimum we shouldn't expose unused kernel symbols. But if you're
planning to make use of this function later on, then it might be fine to
add it.

-- 
Best regards,
Dmitry
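
For reference, the _locked variant discussed here would just be the body
of drm_gem_lru_remove() without the lock/unlock pair; a sketch, assuming
the same internal lru_remove() helper:

	/* caller must already hold lru->lock */
	void
	drm_gem_lru_remove_locked(struct drm_gem_object *obj)
	{
		struct drm_gem_lru *lru = obj->lru;

		if (!lru)
			return;

		lockdep_assert_held_once(lru->lock);
		lru_remove(obj);
	}
	EXPORT_SYMBOL(drm_gem_lru_remove_locked);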


Re: [Freedreno] [PATCH v3 09/15] drm/gem: Add LRU/shrinker helper

2022-08-01 Thread Dmitry Osipenko
On 8/1/22 23:13, Dmitry Osipenko wrote:
> On 8/1/22 23:11, Dmitry Osipenko wrote:
>> On 8/1/22 23:00, Rob Clark wrote:
>>> On Mon, Aug 1, 2022 at 12:41 PM Dmitry Osipenko
>>>  wrote:
>>>>
>>>> On 7/26/22 20:50, Rob Clark wrote:
>>>>> +/**
>>>>> + * drm_gem_lru_remove - remove object from whatever LRU it is in
>>>>> + *
>>>>> + * If the object is currently in any LRU, remove it.
>>>>> + *
>>>>> + * @obj: The GEM object to remove from current LRU
>>>>> + */
>>>>> +void
>>>>> +drm_gem_lru_remove(struct drm_gem_object *obj)
>>>>> +{
>>>>> + struct drm_gem_lru *lru = obj->lru;
>>>>> +
>>>>> + if (!lru)
>>>>> + return;
>>>>> +
>>>>> + mutex_lock(lru->lock);
>>>>> + lru_remove(obj);
>>>>> + mutex_unlock(lru->lock);
>>>>> +}
>>>>> +EXPORT_SYMBOL(drm_gem_lru_remove);
>>>>
>>>> I made a preliminary port of the DRM-SHMEM shrinker on top of the
>>>> latest version of the dma-buf locking convention and your LRU patches. It
>>>> all works well, the only thing that is missing for the DRM-SHMEM
>>>> shrinker is the drm_gem_lru_remove_locked().
>>>>
>>>> What about adding a locked variant of drm_gem_lru_remove()?
>>>
>>> Sounds fine to me.. the only reason it didn't exist yet was because it
>>> wasn't needed yet..
>>
>> There is no use for drm_gem_lru_move_tail_locked() either; you're
>> not using it in the MSM driver. Hence I thought it might be good to add
>> drm_gem_lru_remove_locked(), or maybe
>> drm_gem_lru_move_tail_locked() should be dropped instead?
>>
>>> I can respin w/ an addition of a _locked() version, or you can add it
>>> on top in your patchset.  Either is fine by me
>>
>> Either option is fine by me too. If you keep the unused
>> drm_gem_lru_move_tail_locked(), then it will be nice to add
>> drm_gem_lru_remove_locked().
>>
> 
> The drm_gem_lru_move_tail_locked() will be needed by the DRM-SHMEM
> shrinker, BTW.

On the other hand, I see now that the DRM-SHMEM shrinker can use the
unlocked versions only. Hence neither drm_gem_lru_move_tail_locked() nor
drm_gem_lru_remove_locked() is needed.

-- 
Best regards,
Dmitry


Re: [Freedreno] [PATCH v3 09/15] drm/gem: Add LRU/shrinker helper

2022-08-01 Thread Dmitry Osipenko
On 8/1/22 23:11, Dmitry Osipenko wrote:
> On 8/1/22 23:00, Rob Clark wrote:
>> On Mon, Aug 1, 2022 at 12:41 PM Dmitry Osipenko
>>  wrote:
>>>
>>> On 7/26/22 20:50, Rob Clark wrote:
>>>> +/**
>>>> + * drm_gem_lru_remove - remove object from whatever LRU it is in
>>>> + *
>>>> + * If the object is currently in any LRU, remove it.
>>>> + *
>>>> + * @obj: The GEM object to remove from current LRU
>>>> + */
>>>> +void
>>>> +drm_gem_lru_remove(struct drm_gem_object *obj)
>>>> +{
>>>> + struct drm_gem_lru *lru = obj->lru;
>>>> +
>>>> + if (!lru)
>>>> + return;
>>>> +
>>>> + mutex_lock(lru->lock);
>>>> + lru_remove(obj);
>>>> + mutex_unlock(lru->lock);
>>>> +}
>>>> +EXPORT_SYMBOL(drm_gem_lru_remove);
>>>
>>> I made a preliminary port of the DRM-SHMEM shrinker on top of the
>>> latest version of the dma-buf locking convention and your LRU patches. It
>>> all works well, the only thing that is missing for the DRM-SHMEM
>>> shrinker is the drm_gem_lru_remove_locked().
>>>
>>> What about adding a locked variant of drm_gem_lru_remove()?
>>
>> Sounds fine to me.. the only reason it didn't exist yet was because it
>> wasn't needed yet..
> 
> There is no use for drm_gem_lru_move_tail_locked() either; you're
> not using it in the MSM driver. Hence I thought it might be good to add
> drm_gem_lru_remove_locked(), or maybe
> drm_gem_lru_move_tail_locked() should be dropped instead?
> 
>> I can respin w/ an addition of a _locked() version, or you can add it
>> on top in your patchset.  Either is fine by me
> 
> Either option is fine by me too. If you keep the unused
> drm_gem_lru_move_tail_locked(), then it will be nice to add
> drm_gem_lru_remove_locked().
> 

The drm_gem_lru_move_tail_locked() will be needed by the DRM-SHMEM
shrinker, BTW.

-- 
Best regards,
Dmitry


Re: [Freedreno] [PATCH v3 09/15] drm/gem: Add LRU/shrinker helper

2022-08-01 Thread Dmitry Osipenko
On 8/1/22 23:00, Rob Clark wrote:
> On Mon, Aug 1, 2022 at 12:41 PM Dmitry Osipenko
>  wrote:
>>
>> On 7/26/22 20:50, Rob Clark wrote:
>>> +/**
>>> + * drm_gem_lru_remove - remove object from whatever LRU it is in
>>> + *
>>> + * If the object is currently in any LRU, remove it.
>>> + *
>>> + * @obj: The GEM object to remove from current LRU
>>> + */
>>> +void
>>> +drm_gem_lru_remove(struct drm_gem_object *obj)
>>> +{
>>> + struct drm_gem_lru *lru = obj->lru;
>>> +
>>> + if (!lru)
>>> + return;
>>> +
>>> + mutex_lock(lru->lock);
>>> + lru_remove(obj);
>>> + mutex_unlock(lru->lock);
>>> +}
>>> +EXPORT_SYMBOL(drm_gem_lru_remove);
>>
>> I made a preliminary port of the DRM-SHMEM shrinker on top of the
>> latest version of the dma-buf locking convention and your LRU patches. It
>> all works well, the only thing that is missing for the DRM-SHMEM
>> shrinker is the drm_gem_lru_remove_locked().
>>
>> What about adding a locked variant of drm_gem_lru_remove()?
> 
> Sounds fine to me.. the only reason it didn't exist yet was because it
> wasn't needed yet..

There is no use for drm_gem_lru_move_tail_locked() either; you're
not using it in the MSM driver. Hence I thought it might be good to add
drm_gem_lru_remove_locked(), or maybe
drm_gem_lru_move_tail_locked() should be dropped instead?

> I can respin w/ an addition of a _locked() version, or you can add it
> on top in your patchset.  Either is fine by me

Either option is fine by me too. If you keep the unused
drm_gem_lru_move_tail_locked(), then it will be nice to add
drm_gem_lru_remove_locked().

-- 
Best regards,
Dmitry


Re: [Freedreno] [PATCH v3 09/15] drm/gem: Add LRU/shrinker helper

2022-08-01 Thread Dmitry Osipenko
On 7/26/22 20:50, Rob Clark wrote:
> +/**
> + * drm_gem_lru_remove - remove object from whatever LRU it is in
> + *
> + * If the object is currently in any LRU, remove it.
> + *
> + * @obj: The GEM object to remove from current LRU
> + */
> +void
> +drm_gem_lru_remove(struct drm_gem_object *obj)
> +{
> + struct drm_gem_lru *lru = obj->lru;
> +
> + if (!lru)
> + return;
> +
> + mutex_lock(lru->lock);
> + lru_remove(obj);
> + mutex_unlock(lru->lock);
> +}
> +EXPORT_SYMBOL(drm_gem_lru_remove);

I made a preliminary port of the DRM-SHMEM shrinker on top of the
latest version of the dma-buf locking convention and your LRU patches. It
all works well, the only thing that is missing for the DRM-SHMEM
shrinker is the drm_gem_lru_remove_locked().

What about adding a locked variant of drm_gem_lru_remove()?

-- 
Best regards,
Dmitry


Re: [Freedreno] [PATCH v3 09/15] drm/gem: Add LRU/shrinker helper

2022-07-29 Thread Dmitry Osipenko
On 7/29/22 18:40, Rob Clark wrote:
> On Fri, Jul 29, 2022 at 8:27 AM Dmitry Osipenko
>  wrote:
>>
>> On 7/26/22 20:50, Rob Clark wrote:
>>> +/**
>>> + * drm_gem_lru_move_tail_locked - move the object to the tail of the LRU
>>> + *
>>> + * If the object is already in this LRU it will be moved to the
>>> + * tail.  Otherwise it will be removed from whichever other LRU
>>> + * it is in (if any) and moved into this LRU.
>>> + *
>>> + * Call with LRU lock held.
>>> + *
>>> + * @lru: The LRU to move the object into.
>>> + * @obj: The GEM object to move into this LRU
>>> + */
>>> +void
>>> +drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj)
>>> +{
>>> + lockdep_assert_held_once(lru->lock);
>>> +
>>> + if (obj->lru)
>>> + lru_remove(obj);
>>
>> The obj->lru also needs to be locked if lru != obj->lru, doesn't it? And
>> then we should add lockdep_assert_held_once(obj->lru->lock).
>>
> 
> It is expected (mentioned in the comment on drm_gem_lru::lock) that all
> LRUs share the same lock.  Possibly that could be made more
> obvious?  Having per-lru locks wouldn't really work for accessing the
> single drm_gem_object::lru_node.

Right, my bad. I began updating the DRM-SHMEM shrinker patches on top
of the shrinker helper, but missed that the lock is shared when I was
looking at this patch again today.

Adding a comment to the code about the shared lock may help a tad, but
it's not really necessary. It was my fault that I forgot about it.

Thank you!

-- 
Best regards,
Dmitry
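
To make the shared-lock expectation concrete, here is a sketch of how a
driver would set up its LRUs with the drm_gem_lru_init() helper from
this series; the single mutex shared by all LRUs is the point:

	static struct mutex lru_lock;
	static struct drm_gem_lru pinned_lru, unpinned_lru;

	mutex_init(&lru_lock);
	drm_gem_lru_init(&pinned_lru, &lru_lock);	/* both LRUs share... */
	drm_gem_lru_init(&unpinned_lru, &lru_lock);	/* ...the same lock */

Because every LRU an object can be on uses the one lock, moving the
object between LRUs (and touching the single drm_gem_object::lru_node)
stays race-free.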


Re: [Freedreno] [PATCH v2 09/13] drm/gem: Add LRU/shrinker helper

2022-07-19 Thread Dmitry Osipenko
On 7/19/22 20:18, Rob Clark wrote:
> +void
> +drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj)
> +{
> + WARN_ON(!mutex_is_locked(lru->lock));

Nit: What about lockdep_assert_held_once(lru->lock)?

Otherwise, looks good! I'll use it for the DRM-SHMEM shrinker after
completing the work on the dma-buf locks.

Reviewed-by: Dmitry Osipenko 

-- 
Best regards,
Dmitry
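
In case the difference is unclear: WARN_ON(!mutex_is_locked(lock)) only
checks that somebody holds the mutex and is always compiled in, whereas
lockdep_assert_held_once() verifies that the current task is the holder
and compiles away entirely without CONFIG_LOCKDEP. The two forms side by
side:

	/* always built in, but can't tell who owns the mutex */
	WARN_ON(!mutex_is_locked(lru->lock));

	/* lockdep-only: checks that the *current* task holds the lock */
	lockdep_assert_held_once(lru->lock);

v3 of the patch indeed switched to the lockdep variant, as seen in the
hunks quoted in the 2022-08-01 messages above.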


Re: [Freedreno] [PATCH v3 10/10] drm/msm: Add a way for userspace to allocate GPU iova

2022-04-07 Thread Dmitry Osipenko
On 4/7/22 00:46, Rob Clark wrote:
> From: Rob Clark 
> 
> The motivation at this point is mainly a native userspace mesa driver in
> a VM guest.  The one remaining synchronous "hotpath" is buffer allocation,
> because the guest needs to wait to know the bo's iova before it can start
> emitting cmdstream/state that references the new bo.  By allocating the
> iova in the guest userspace, we no longer need to wait for a response
> from the host, but can just rely on the allocation request being
> processed before the cmdstream submission.  Allocation failures (OoM,
> etc) would just be treated as context-lost (ie. GL_GUILTY_CONTEXT_RESET)
> or subsequent allocations (or readpix, etc) can raise GL_OUT_OF_MEMORY.
> 
> v2: Fix inuse check
> v3: Change mismatched iova case to -EBUSY
> 
> Signed-off-by: Rob Clark 
> Reviewed-by: Dmitry Baryshkov 
> ---
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++
>  drivers/gpu/drm/msm/msm_drv.c   | 21 +++
>  drivers/gpu/drm/msm/msm_gem.c   | 48 +
>  drivers/gpu/drm/msm/msm_gem.h   |  8 +
>  drivers/gpu/drm/msm/msm_gem_vma.c   |  2 ++
>  include/uapi/drm/msm_drm.h  |  3 ++
>  6 files changed, 92 insertions(+)

Reviewed-by: Dmitry Osipenko 
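
For illustration, the userspace side of the new interface could look
roughly like this (a sketch; the drm_msm_gem_info field names and the
MSM_INFO_SET_IOVA constant are assumed from the uapi added in this
series):

	struct drm_msm_gem_info req = {
		.handle = bo_handle,
		.info   = MSM_INFO_SET_IOVA,	/* added by this patch */
		.value  = iova,			/* address allocated by the guest */
	};

	/* tell the kernel which iova the BO must get; cmdstream emitted
	 * afterwards can reference it without waiting for the host */
	ret = drmCommandWriteRead(fd, DRM_MSM_GEM_INFO, &req, sizeof(req));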


Re: [Freedreno] [PATCH v2 07/10] drm/msm/gem: Rework vma lookup and pin

2022-03-31 Thread Dmitry Osipenko
On 3/31/22 21:58, Rob Clark wrote:
> On Thu, Mar 31, 2022 at 11:27 AM Dmitry Osipenko
>  wrote:
>>
>> On 3/30/22 23:47, Rob Clark wrote:
>>> From: Rob Clark 
>>>
>>> Combines duplicate vma lookup in the get_and_pin path.
>>>
>>> Signed-off-by: Rob Clark 
>>> ---
>>>  drivers/gpu/drm/msm/msm_gem.c | 50 ++-
>>>  1 file changed, 26 insertions(+), 24 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
>>> index deafae6feaa8..218744a490a4 100644
>>> --- a/drivers/gpu/drm/msm/msm_gem.c
>>> +++ b/drivers/gpu/drm/msm/msm_gem.c
>>> @@ -376,39 +376,40 @@ put_iova_vmas(struct drm_gem_object *obj)
>>>   }
>>>  }
>>>
>>> -static int get_iova_locked(struct drm_gem_object *obj,
>>> - struct msm_gem_address_space *aspace, uint64_t *iova,
>>> +static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
>>> + struct msm_gem_address_space *aspace,
>>>   u64 range_start, u64 range_end)
>>>  {
>>>   struct msm_gem_vma *vma;
>>> - int ret = 0;
>>>
>>>   GEM_WARN_ON(!msm_gem_is_locked(obj));
>>>
>>>   vma = lookup_vma(obj, aspace);
>>>
>>>   if (!vma) {
>>> + int ret;
>>> +
>>>   vma = add_vma(obj, aspace);
>>>   if (IS_ERR(vma))
>>> - return PTR_ERR(vma);
>>> + return vma;
>>>
>>>   ret = msm_gem_init_vma(aspace, vma, obj->size,
>>>   range_start, range_end);
>>>   if (ret) {
>> You're allocation range_start -> range_end
>>
>>
>>>   del_vma(vma);
>>> - return ret;
>>> + return ERR_PTR(ret);
>>>   }
>>> + } else {
>>> + GEM_WARN_ON(vma->iova < range_start);
>>> + GEM_WARN_ON((vma->iova + obj->size) > range_end);
>>
>> and then comparing range_start -> range_start + obj->size, hence you're
>> assuming that range_end always equals range_start + obj->size during the
>> allocation.
>>
>> I'm not sure what is the idea here.. this looks inconsistent. I think
>> you wanted to write:
>>
>> GEM_WARN_ON(vma->iova < range_start);
>> GEM_WARN_ON(vma->iova + (vma->node.size << PAGE_SHIFT) > range_end);
>>
>> But is it really useful to check whether the new range is inside of the
>> old range? Shouldn't it always be an error to change the IOVA range
>> without reallocating the vma?
> 
> There are a few cases (for allocations for GMU) where the range is
> larger than the bo.. see a6xx_gmu_memory_alloc()

Ahh, I didn't read the code properly; I see now why you're using
obj->size. It's the range where you want to put the BO. Looks good then.

Reviewed-by: Dmitry Osipenko 


Re: [Freedreno] [PATCH v2 10/10] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-31 Thread Dmitry Osipenko
On 3/31/22 22:02, Rob Clark wrote:
> On Thu, Mar 31, 2022 at 11:52 AM Dmitry Osipenko
>  wrote:
>>
>> ...
>>> +/*
>>> + * Get the requested iova but don't pin it.  Fails if the requested iova is
>>> + * not available.  Doesn't need a put because iovas are currently valid for
>>> + * the life of the object.
>>> + *
>>> + * Setting an iova of zero will clear the vma.
>>> + */
>>> +int msm_gem_set_iova(struct drm_gem_object *obj,
>>> +  struct msm_gem_address_space *aspace, uint64_t iova)
>>> +{
>>> + int ret = 0;
>>
>> nit: No need to initialize the ret
> 
> actually, we do

Indeed, sorry :)

...
>>>  int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
>>>   struct msm_gem_address_space *aspace, uint64_t *iova,
>>>   u64 range_start, u64 range_end);
>> nit: There is an odd mix of uint64_t and u64 (and alike) in the MSM code
>> :) The uint64_t variant shouldn't be used by kernel code in general and
>> checkpatch should want about it.
> 
> one of many things that I disagree with checkpatch about ;-)
> 
> I prefer standard types to custom ones.  I _kinda_ get the argument in
> case of uapi (but IMHO that doesn't apply to how drm uapi headers are
> used)

I'd understand if it was all either uint64_t or u64, but the mix.. hm.


Re: [Freedreno] [PATCH v2 10/10] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-31 Thread Dmitry Osipenko


On 3/31/22 21:52, Dmitry Osipenko wrote:
> ...
>> +/*
>> + * Get the requested iova but don't pin it.  Fails if the requested iova is
>> + * not available.  Doesn't need a put because iovas are currently valid for
>> + * the life of the object.
>> + *
>> + * Setting an iova of zero will clear the vma.
>> + */
>> +int msm_gem_set_iova(struct drm_gem_object *obj,
>> + struct msm_gem_address_space *aspace, uint64_t iova)
>> +{
>> +int ret = 0;
> 
> nit: No need to initialize the ret
> 
>> +msm_gem_lock(obj);
>> +if (!iova) {
>> +ret = clear_iova(obj, aspace);
>> +} else {
>> +struct msm_gem_vma *vma;
>> +vma = get_vma_locked(obj, aspace, iova, iova + obj->size);
>> +if (IS_ERR(vma)) {
>> +ret = PTR_ERR(vma);
>> +} else if (GEM_WARN_ON(vma->iova != iova)) {
>> +clear_iova(obj, aspace);
>> +ret = -ENOSPC;
> 
> The (vma->iova != iova) means that vma is already set, but to a
> different address. Is -ENOSPC really appropriate here? -EBUSY or -EINVAL
> looks more natural to me.
> 
>> +}
>> +}
>> +msm_gem_unlock(obj);
>> +
>> +return ret;
>> +}
>> +
>>  /*
>>   * Unpin a iova by updating the reference counts. The memory isn't actually
>>   * purged until something else (shrinker, mm_notifier, destroy, etc) decides
>> diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
>> index 38d66e1248b1..efa2e5c19f1e 100644
>> --- a/drivers/gpu/drm/msm/msm_gem.h
>> +++ b/drivers/gpu/drm/msm/msm_gem.h
>> @@ -38,6 +38,12 @@ struct msm_gem_address_space {
>>  
>>  /* @faults: the number of GPU hangs associated with this address space */
>>  int faults;
>> +
>> +/** @va_start: lowest possible address to allocate */
>> +uint64_t va_start;
>> +
>> +/** @va_size: the size of the address space (in bytes) */
>> +uint64_t va_size;
>>  };
>>  
>>  struct msm_gem_address_space *
>> @@ -144,6 +150,8 @@ struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
>>  struct msm_gem_address_space *aspace);
>>  int msm_gem_get_iova(struct drm_gem_object *obj,
>>  struct msm_gem_address_space *aspace, uint64_t *iova);
>> +int msm_gem_set_iova(struct drm_gem_object *obj,
>> +struct msm_gem_address_space *aspace, uint64_t iova);
>>  int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
>>  struct msm_gem_address_space *aspace, uint64_t *iova,
>>  u64 range_start, u64 range_end);
> nit: There is an odd mix of uint64_t and u64 (and alike) in the MSM code
> :) The uint64_t variant shouldn't be used by kernel code in general and
> checkpatch should want about it.

s/want/warn/


Re: [Freedreno] [PATCH v2 10/10] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-31 Thread Dmitry Osipenko
...
> +/*
> + * Get the requested iova but don't pin it.  Fails if the requested iova is
> + * not available.  Doesn't need a put because iovas are currently valid for
> + * the life of the object.
> + *
> + * Setting an iova of zero will clear the vma.
> + */
> +int msm_gem_set_iova(struct drm_gem_object *obj,
> +  struct msm_gem_address_space *aspace, uint64_t iova)
> +{
> + int ret = 0;

nit: No need to initialize the ret

> + msm_gem_lock(obj);
> + if (!iova) {
> + ret = clear_iova(obj, aspace);
> + } else {
> + struct msm_gem_vma *vma;
> + vma = get_vma_locked(obj, aspace, iova, iova + obj->size);
> + if (IS_ERR(vma)) {
> + ret = PTR_ERR(vma);
> + } else if (GEM_WARN_ON(vma->iova != iova)) {
> + clear_iova(obj, aspace);
> + ret = -ENOSPC;

The (vma->iova != iova) means that vma is already set, but to a
different address. Is -ENOSPC really appropriate here? -EBUSY or -EINVAL
looks more natural to me.

> + }
> + }
> + msm_gem_unlock(obj);
> +
> + return ret;
> +}
> +
>  /*
>   * Unpin a iova by updating the reference counts. The memory isn't actually
>   * purged until something else (shrinker, mm_notifier, destroy, etc) decides
> diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
> index 38d66e1248b1..efa2e5c19f1e 100644
> --- a/drivers/gpu/drm/msm/msm_gem.h
> +++ b/drivers/gpu/drm/msm/msm_gem.h
> @@ -38,6 +38,12 @@ struct msm_gem_address_space {
>  
>   /* @faults: the number of GPU hangs associated with this address space */
>   int faults;
> +
> + /** @va_start: lowest possible address to allocate */
> + uint64_t va_start;
> +
> + /** @va_size: the size of the address space (in bytes) */
> + uint64_t va_size;
>  };
>  
>  struct msm_gem_address_space *
> @@ -144,6 +150,8 @@ struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
>  struct msm_gem_address_space *aspace);
>  int msm_gem_get_iova(struct drm_gem_object *obj,
>   struct msm_gem_address_space *aspace, uint64_t *iova);
> +int msm_gem_set_iova(struct drm_gem_object *obj,
> + struct msm_gem_address_space *aspace, uint64_t iova);
>  int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
>   struct msm_gem_address_space *aspace, uint64_t *iova,
>   u64 range_start, u64 range_end);
nit: There is an odd mix of uint64_t and u64 (and alike) in the MSM code
:) The uint64_t variant shouldn't be used by kernel code in general and
checkpatch should want about it.


Re: [Freedreno] [PATCH v2 07/10] drm/msm/gem: Rework vma lookup and pin

2022-03-31 Thread Dmitry Osipenko
On 3/31/22 21:27, Dmitry Osipenko wrote:
> On 3/30/22 23:47, Rob Clark wrote:
>> From: Rob Clark 
>>
>> Combines duplicate vma lookup in the get_and_pin path.
>>
>> Signed-off-by: Rob Clark 
>> ---
>>  drivers/gpu/drm/msm/msm_gem.c | 50 ++-
>>  1 file changed, 26 insertions(+), 24 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
>> index deafae6feaa8..218744a490a4 100644
>> --- a/drivers/gpu/drm/msm/msm_gem.c
>> +++ b/drivers/gpu/drm/msm/msm_gem.c
>> @@ -376,39 +376,40 @@ put_iova_vmas(struct drm_gem_object *obj)
>>  }
>>  }
>>  
>> -static int get_iova_locked(struct drm_gem_object *obj,
>> -struct msm_gem_address_space *aspace, uint64_t *iova,
>> +static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
>> +struct msm_gem_address_space *aspace,
>>  u64 range_start, u64 range_end)
>>  {
>>  struct msm_gem_vma *vma;
>> -int ret = 0;
>>  
>>  GEM_WARN_ON(!msm_gem_is_locked(obj));
>>  
>>  vma = lookup_vma(obj, aspace);
>>  
>>  if (!vma) {
>> +int ret;
>> +
>>  vma = add_vma(obj, aspace);
>>  if (IS_ERR(vma))
>> -return PTR_ERR(vma);
>> +return vma;
>>  
>>  ret = msm_gem_init_vma(aspace, vma, obj->size,
>>  range_start, range_end);
>>  if (ret) {
> You're allocation range_start -> range_end

*allocating

> 
>>  del_vma(vma);
>> -return ret;
>> +return ERR_PTR(ret);
>>  }
>> +} else {
>> +GEM_WARN_ON(vma->iova < range_start);
>> +GEM_WARN_ON((vma->iova + obj->size) > range_end);
> 
> and then comparing range_start -> range_start + obj->size, hence you're
> assuming that range_end always equals range_start + obj->size during the
> allocation.
> 
> I'm not sure what is the idea here.. this looks inconsistent. I think
> you wanted to write:
> 
>   GEM_WARN_ON(vma->iova < range_start);
>   GEM_WARN_ON(vma->iova + (vma->node.size << PAGE_SHIFT) > range_end);
> 
> But is it really useful to check whether the new range is inside of the
> old range? Shouldn't it always be an error to change the IOVA range
> without reallocating the vma?
> 
> I'd expect to see:
> 
>   GEM_WARN_ON(vma->iova != range_start);
>   GEM_WARN_ON(vma->iova + (vma->node.size << PAGE_SHIFT) != range_end);
> 
> and then error out if range mismatches.



Re: [Freedreno] [PATCH v2 07/10] drm/msm/gem: Rework vma lookup and pin

2022-03-31 Thread Dmitry Osipenko
On 3/30/22 23:47, Rob Clark wrote:
> From: Rob Clark 
> 
> Combines duplicate vma lookup in the get_and_pin path.
> 
> Signed-off-by: Rob Clark 
> ---
>  drivers/gpu/drm/msm/msm_gem.c | 50 ++-
>  1 file changed, 26 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index deafae6feaa8..218744a490a4 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -376,39 +376,40 @@ put_iova_vmas(struct drm_gem_object *obj)
>   }
>  }
>  
> -static int get_iova_locked(struct drm_gem_object *obj,
> - struct msm_gem_address_space *aspace, uint64_t *iova,
> +static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
> + struct msm_gem_address_space *aspace,
>   u64 range_start, u64 range_end)
>  {
>   struct msm_gem_vma *vma;
> - int ret = 0;
>  
>   GEM_WARN_ON(!msm_gem_is_locked(obj));
>  
>   vma = lookup_vma(obj, aspace);
>  
>   if (!vma) {
> + int ret;
> +
>   vma = add_vma(obj, aspace);
>   if (IS_ERR(vma))
> - return PTR_ERR(vma);
> + return vma;
>  
>   ret = msm_gem_init_vma(aspace, vma, obj->size,
>   range_start, range_end);
>   if (ret) {
You're allocation range_start -> range_end


>   del_vma(vma);
> - return ret;
> + return ERR_PTR(ret);
>   }
> + } else {
> + GEM_WARN_ON(vma->iova < range_start);
> + GEM_WARN_ON((vma->iova + obj->size) > range_end);

and then comparing range_start -> range_start + obj->size, hence you're
assuming that range_end always equals range_start + obj->size during the
allocation.

I'm not sure what is the idea here.. this looks inconsistent. I think
you wanted to write:

GEM_WARN_ON(vma->iova < range_start);
GEM_WARN_ON(vma->iova + (vma->node.size << PAGE_SHIFT) > range_end);

But is it really useful to check whether the new range is inside of the
old range? Shouldn't it always be an error to change the IOVA range
without reallocating the vma?

I'd expect to see:

GEM_WARN_ON(vma->iova != range_start);
GEM_WARN_ON(vma->iova + (vma->node.size << PAGE_SHIFT) != range_end);

and then error out if range mismatches.


Re: [Freedreno] [PATCH 9/9] drm/msm: Add a way for userspace to allocate GPU iova

2022-03-29 Thread Dmitry Osipenko


On 3/30/22 02:00, Rob Clark wrote:
> +static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
> + struct drm_file *file, struct drm_gem_object *obj,
> + uint64_t iova)
> +{
> + struct msm_drm_private *priv = dev->dev_private;
> + struct msm_file_private *ctx = file->driver_priv;
> +
> + if (!priv->gpu)
> + return -EINVAL;
> +
> + /* Only supported if per-process address space is supported: */
> + if (priv->gpu->aspace == ctx->aspace)
> + return -EINVAL;

nit: -EOPNOTSUPP ?


Re: [Freedreno] [PATCH 00/31] Introduce devm_pm_opp_* API

2021-01-03 Thread Dmitry Osipenko
On 01.01.2021 19:54, Yangtao Li wrote:
> Hi,
> 
> This patchset adds devm_pm_opp_set_clkname, devm_pm_opp_put_clkname,
> devm_pm_opp_set_regulators, devm_pm_opp_put_regulators,
> devm_pm_opp_set_supported_hw, devm_pm_opp_of_add_table and
> devm_pm_opp_register_notifier.
> 
> Yangtao Li (31):
>   opp: Add devres wrapper for dev_pm_opp_set_clkname and
> dev_pm_opp_put_clkname
>   opp: Add devres wrapper for dev_pm_opp_set_regulators and
> dev_pm_opp_put_regulators
>   opp: Add devres wrapper for dev_pm_opp_set_supported_hw
>   opp: Add devres wrapper for dev_pm_opp_of_add_table
>   opp: Add devres wrapper for dev_pm_opp_register_notifier
>   serial: qcom_geni_serial: fix potential mem leak in
> qcom_geni_serial_probe()
>   serial: qcom_geni_serial: convert to use devm_pm_opp_* API
>   spi: spi-qcom-qspi: fix potential mem leak in spi_geni_probe()
>   spi: spi-qcom-qspi: fix potential mem leak in spi_geni_probe()
>   qcom-geni-se: remove opp_table
>   mmc: sdhci-msm: fix potential mem leak in sdhci_msm_probe()
>   mmc: sdhci-msm: convert to use devm_pm_opp_* API
>   spi: spi-qcom-qspi: fix potential mem leak in qcom_qspi_probe()
>   spi: spi-qcom-qspi: convert to use devm_pm_opp_* API
>   drm/msm: fix potential mem leak
>   drm/msm: convert to use devm_pm_opp_* API and remove dp_ctrl_put
>   drm/lima: convert to use devm_pm_opp_* API
>   drm/lima: remove unneeded devm_devfreq_remove_device()
>   drm/panfrost: convert to use devm_pm_opp_* API
>   media: venus: fix error check in core_get_v4()
>   media: venus: convert to use devm_pm_opp_* API
>   memory: samsung: exynos5422-dmc: fix return error in
> exynos5_init_freq_table
>   memory: samsung: exynos5422-dmc: convert to use devm_pm_opp_* API
>   memory: tegra20: convert to use devm_pm_opp_* API
>   memory: tegra30: convert to use devm_pm_opp_* API
>   PM / devfreq: tegra30: convert to use devm_pm_opp_* API
>   PM / devfreq: rk3399_dmc: convert to use devm_pm_opp_* API
>   PM / devfreq: imx8m-ddrc: convert to use devm_pm_opp_* API
>   PM / devfreq: imx-bus: convert to use devm_pm_opp_* API
>   PM / devfreq: exynos: convert to use devm_pm_opp_* API
>   PM / devfreq: convert to devm_pm_opp_register_notifier and remove
> unused API
> 
>  drivers/devfreq/devfreq.c |  66 +--
>  drivers/devfreq/exynos-bus.c  |  42 +
>  drivers/devfreq/imx-bus.c |  14 +-
>  drivers/devfreq/imx8m-ddrc.c  |  15 +-
>  drivers/devfreq/rk3399_dmc.c  |  22 +--
>  drivers/devfreq/tegra30-devfreq.c |  21 +--
>  drivers/gpu/drm/lima/lima_devfreq.c   |  45 +
>  drivers/gpu/drm/lima/lima_devfreq.h   |   2 -
>  drivers/gpu/drm/msm/adreno/a5xx_gpu.c |   2 +-
>  drivers/gpu/drm/msm/adreno/a6xx_gmu.c |   2 +-
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c   |   2 +-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  31 ++--
>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h   |   2 -
>  drivers/gpu/drm/msm/dp/dp_ctrl.c  |  29 +--
>  drivers/gpu/drm/msm/dp/dp_ctrl.h  |   1 -
>  drivers/gpu/drm/msm/dp/dp_display.c   |   5 +-
>  drivers/gpu/drm/msm/dsi/dsi_host.c|  23 ++-
>  drivers/gpu/drm/panfrost/panfrost_devfreq.c   |  34 +---
>  drivers/gpu/drm/panfrost/panfrost_devfreq.h   |   1 -
>  .../media/platform/qcom/venus/pm_helpers.c|  22 +--
>  drivers/memory/samsung/exynos5422-dmc.c   |  13 +-
>  drivers/memory/tegra/tegra20-emc.c|  29 +--
>  drivers/memory/tegra/tegra30-emc.c|  29 +--
>  drivers/mmc/host/sdhci-msm.c  |  27 ++-
>  drivers/opp/core.c| 173 ++
>  drivers/opp/of.c  |  36 
>  drivers/spi/spi-geni-qcom.c   |  23 ++-
>  drivers/spi/spi-qcom-qspi.c   |  25 ++-
>  drivers/tty/serial/qcom_geni_serial.c |  31 ++--
>  include/linux/devfreq.h   |  23 ---
>  include/linux/pm_opp.h|  38 
>  include/linux/qcom-geni-se.h  |   2 -
>  32 files changed, 402 insertions(+), 428 deletions(-)
> 

Hello,

Could you please add a helper for dev_pm_opp_attach_genpd() and make
the cpufreq drivers use the helpers?

I'd also like to see a devm helper for
dev_pm_opp_register_set_opp_helper(), which should become useful for
Tegra drivers sometime soon.
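
For the flavor of what such devres helpers look like, here is a sketch
of a wrapper around dev_pm_opp_of_add_table() (illustrative; see the
series for the actual implementations):

	static void devm_pm_opp_of_table_release(void *data)
	{
		dev_pm_opp_of_remove_table(data);
	}

	int devm_pm_opp_of_add_table(struct device *dev)
	{
		int ret;

		ret = dev_pm_opp_of_add_table(dev);
		if (ret)
			return ret;

		/* remove the table automatically when the device is unbound */
		return devm_add_action_or_reset(dev, devm_pm_opp_of_table_release, dev);
	}

A devm_pm_opp_attach_genpd() helper would follow the same shape,
releasing via dev_pm_opp_detach_genpd().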


Re: [Freedreno] [PATCH 00/31] Introduce devm_pm_opp_* API

2021-01-03 Thread Dmitry Osipenko
On 03.01.2021 17:30, Frank Lee wrote:
> HI,
> 
> On Sun, Jan 3, 2021 at 8:52 PM Dmitry Osipenko  wrote:
>>
>> On 01.01.2021 19:54, Yangtao Li wrote:
>>> Hi,
>>>
>>> This patchset adds devm_pm_opp_set_clkname, devm_pm_opp_put_clkname,
>>> devm_pm_opp_set_regulators, devm_pm_opp_put_regulators,
>>> devm_pm_opp_set_supported_hw, devm_pm_opp_of_add_table and
>>> devm_pm_opp_register_notifier.
>>>
>>> Yangtao Li (31):
>>>   opp: Add devres wrapper for dev_pm_opp_set_clkname and
>>> dev_pm_opp_put_clkname
>>>   opp: Add devres wrapper for dev_pm_opp_set_regulators and
>>> dev_pm_opp_put_regulators
>>>   opp: Add devres wrapper for dev_pm_opp_set_supported_hw
>>>   opp: Add devres wrapper for dev_pm_opp_of_add_table
>>>   opp: Add devres wrapper for dev_pm_opp_register_notifier
>>>   serial: qcom_geni_serial: fix potential mem leak in
>>> qcom_geni_serial_probe()
>>>   serial: qcom_geni_serial: convert to use devm_pm_opp_* API
>>>   spi: spi-qcom-qspi: fix potential mem leak in spi_geni_probe()
>>>   spi: spi-qcom-qspi: fix potential mem leak in spi_geni_probe()
>>>   qcom-geni-se: remove opp_table
>>>   mmc: sdhci-msm: fix potential mem leak in sdhci_msm_probe()
>>>   mmc: sdhci-msm: convert to use devm_pm_opp_* API
>>>   spi: spi-qcom-qspi: fix potential mem leak in qcom_qspi_probe()
>>>   spi: spi-qcom-qspi: convert to use devm_pm_opp_* API
>>>   drm/msm: fix potential mem leak
>>>   drm/msm: convert to use devm_pm_opp_* API and remove dp_ctrl_put
>>>   drm/lima: convert to use devm_pm_opp_* API
>>>   drm/lima: remove unneeded devm_devfreq_remove_device()
>>>   drm/panfrost: convert to use devm_pm_opp_* API
>>>   media: venus: fix error check in core_get_v4()
>>>   media: venus: convert to use devm_pm_opp_* API
>>>   memory: samsung: exynos5422-dmc: fix return error in
>>> exynos5_init_freq_table
>>>   memory: samsung: exynos5422-dmc: convert to use devm_pm_opp_* API
>>>   memory: tegra20: convert to use devm_pm_opp_* API
>>>   memory: tegra30: convert to use devm_pm_opp_* API
>>>   PM / devfreq: tegra30: convert to use devm_pm_opp_* API
>>>   PM / devfreq: rk3399_dmc: convert to use devm_pm_opp_* API
>>>   PM / devfreq: imx8m-ddrc: convert to use devm_pm_opp_* API
>>>   PM / devfreq: imx-bus: convert to use devm_pm_opp_* API
>>>   PM / devfreq: exynos: convert to use devm_pm_opp_* API
>>>   PM / devfreq: convert to devm_pm_opp_register_notifier and remove
>>> unused API
>>>
>>>  drivers/devfreq/devfreq.c |  66 +--
>>>  drivers/devfreq/exynos-bus.c  |  42 +
>>>  drivers/devfreq/imx-bus.c |  14 +-
>>>  drivers/devfreq/imx8m-ddrc.c  |  15 +-
>>>  drivers/devfreq/rk3399_dmc.c  |  22 +--
>>>  drivers/devfreq/tegra30-devfreq.c |  21 +--
>>>  drivers/gpu/drm/lima/lima_devfreq.c   |  45 +
>>>  drivers/gpu/drm/lima/lima_devfreq.h   |   2 -
>>>  drivers/gpu/drm/msm/adreno/a5xx_gpu.c |   2 +-
>>>  drivers/gpu/drm/msm/adreno/a6xx_gmu.c |   2 +-
>>>  drivers/gpu/drm/msm/adreno/adreno_gpu.c   |   2 +-
>>>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c   |  31 ++--
>>>  drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h   |   2 -
>>>  drivers/gpu/drm/msm/dp/dp_ctrl.c  |  29 +--
>>>  drivers/gpu/drm/msm/dp/dp_ctrl.h  |   1 -
>>>  drivers/gpu/drm/msm/dp/dp_display.c   |   5 +-
>>>  drivers/gpu/drm/msm/dsi/dsi_host.c|  23 ++-
>>>  drivers/gpu/drm/panfrost/panfrost_devfreq.c   |  34 +---
>>>  drivers/gpu/drm/panfrost/panfrost_devfreq.h   |   1 -
>>>  .../media/platform/qcom/venus/pm_helpers.c|  22 +--
>>>  drivers/memory/samsung/exynos5422-dmc.c   |  13 +-
>>>  drivers/memory/tegra/tegra20-emc.c|  29 +--
>>>  drivers/memory/tegra/tegra30-emc.c|  29 +--
>>>  drivers/mmc/host/sdhci-msm.c  |  27 ++-
>>>  drivers/opp/core.c| 173 ++
>>>  drivers/opp/of.c  |  36 
>>>  drivers/spi/spi-geni-qcom.c   |  23 ++-
>>>  drivers/spi/spi-qcom-qspi.c   |  25 ++-
>>>  drivers/tty/serial/qcom_geni_serial.c |  31 ++--
>>>  include/linux/devfreq.h   |  23 ---
>>>  include/l