On 12.02.20 at 07:23, Pan, Xinhui wrote:
On 2020-02-11 at 23:43, Christian König <ckoenig.leichtzumer...@gmail.com> wrote:
When non-imported BOs are resurrected for delayed delete we replace
the dma_resv object to allow for easy reclaiming of the resources.
v2: move that to ttm_bo_individualize_resv
v3: add a comment to explain what's going on
Signed-off-by: Christian König <christian.koe...@amd.com>
Reviewed-by: xinhui pan <xinhui....@amd.com>
---
drivers/gpu/drm/ttm/ttm_bo.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index bfc42a9e4fb4..8174603d390f 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -393,6 +393,18 @@ static int ttm_bo_individualize_resv(struct ttm_buffer_object *bo)
r = dma_resv_copy_fences(&bo->base._resv, bo->base.resv);
dma_resv_unlock(&bo->base._resv);
+ if (r)
+ return r;
+
+ if (bo->type != ttm_bo_type_sg) {
+ /* This works because the BO is about to be destroyed and nobody
+ references it anymore. The only tricky case is the trylock on
+ * the resv object while holding the lru_lock.
+ */
+ spin_lock(&ttm_bo_glob.lru_lock);
+ bo->base.resv = &bo->base._resv;
+ spin_unlock(&ttm_bo_glob.lru_lock);
+ }
How about something like this?
The basic idea is to do the BO cleanup work in bo_release first and avoid
any race with eviction.
While a BO is dying, eviction also just does the BO cleanup work.
If the BO is busy, neither bo_release nor eviction can do the cleanup work
on it. For the bo_release case, we just add the BO back to the LRU list.
So we can clean it up both in the workqueue and in the shrinker, the way
it was done before.
@@ -405,8 +405,9 @@ static int ttm_bo_individualize_resv(struct ttm_buffer_object *bo)
if (bo->type != ttm_bo_type_sg) {
spin_lock(&ttm_bo_glob.lru_lock);
- bo->base.resv = &bo->base._resv;
+ ttm_bo_del_from_lru(bo);
spin_unlock(&ttm_bo_glob.lru_lock);
+ bo->base.resv = &bo->base._resv;
}
return r;
@@ -606,10 +607,9 @@ static void ttm_bo_release(struct kref *kref)
* shrinkers, now that they are queued for
* destruction.
*/
- if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT) {
+ if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT)
bo->mem.placement &= ~TTM_PL_FLAG_NO_EVICT;
- ttm_bo_move_to_lru_tail(bo, NULL);
- }
+ ttm_bo_add_mem_to_lru(bo, &bo->mem);
kref_init(&bo->kref);
list_add_tail(&bo->ddestroy, &bdev->ddestroy);
Yeah, thought about that as well. But this has the major drawback that
the deleted BO moves to the end of the LRU, which is something we don't
want.
I think the real solution to this problem is to go a completely
different way and remove the delayed delete feature from TTM altogether.
Instead this should be part of some DRM domain handler component.
In other words it should not matter if a BO is evicted, moved or freed.
Whenever a piece of memory becomes available again we keep around a
fence which marks the end of using this piece of memory.
When somebody then asks for new memory, we work through the LRU and test
whether using a certain piece of memory makes sense or not. If we find
that a BO needs to be evicted for this, we return a reference to the BO
in question to the upper level handling.
If we find that we can do the allocation, but only with recently freed up
memory, we gather the fences and say you can only use the newly allocated
memory after waiting for those.
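
To make that a bit more concrete, a very rough sketch of such an
allocation walk could look like the following. Every name in here is made
up purely for illustration; nothing like this exists in TTM or DRM today:

struct domain_range {
	struct list_head lru;
	u64 size;
	struct ttm_buffer_object *bo;	/* NULL once the range is free */
	struct dma_fence *last_use;	/* valid when bo == NULL */
};

/*
 * Try to find @size bytes in the domain. On success the fences the caller
 * must wait for before using the memory are returned in @fences (which
 * must have room for one entry per range on the LRU). If a BO is in the
 * way it is returned in @victim and the caller decides how to evict it,
 * then retries.
 */
static int domain_find_space(struct list_head *lru, u64 size,
			     struct dma_fence **fences,
			     unsigned int *num_fences,
			     struct ttm_buffer_object **victim)
{
	struct domain_range *range;

	*num_fences = 0;
	*victim = NULL;

	list_for_each_entry(range, lru, lru) {
		if (range->bo) {
			/* Memory still in use, hand the BO back to the
			 * upper level. Real code would grab a reference
			 * here.
			 */
			*victim = range->bo;
			return -EBUSY;
		}

		/* Recently freed memory, usable once its fence signals. */
		fences[(*num_fences)++] = dma_fence_get(range->last_use);

		if (range->size >= size)
			return 0;
		size -= range->size;
	}

	return -ENOSPC;
}
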
HEY! Wait a second! Did I just outline what a potential replacement for
TTM would look like?
Cheers,
Christian.
thanks
xinhui
return r;
}
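
Since the quoting above chops the hunk up a bit, this is roughly how
ttm_bo_individualize_resv ends up looking with the patch applied; just a
sketch stitched together from the quoted context lines, with the
early-out and the trylock at the top of the function elided:

static int ttm_bo_individualize_resv(struct ttm_buffer_object *bo)
{
	int r;

	/* ... early return when bo->base.resv already points at
	 * &bo->base._resv and the trylock of &bo->base._resv elided ...
	 */

	r = dma_resv_copy_fences(&bo->base._resv, bo->base.resv);
	dma_resv_unlock(&bo->base._resv);
	if (r)
		return r;

	if (bo->type != ttm_bo_type_sg) {
		/* This works because the BO is about to be destroyed and
		 * nobody references it anymore. The only tricky case is the
		 * trylock on the resv object while holding the lru_lock.
		 */
		spin_lock(&ttm_bo_glob.lru_lock);
		bo->base.resv = &bo->base._resv;
		spin_unlock(&ttm_bo_glob.lru_lock);
	}

	return r;
}
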
@@ -724,7 +736,7 @@ static bool ttm_bo_evict_swapout_allowable(struct ttm_buffer_object *bo,
if (bo->base.resv == ctx->resv) {
dma_resv_assert_held(bo->base.resv);
- if (ctx->flags & TTM_OPT_FLAG_ALLOW_RES_EVICT || bo->deleted)
+ if (ctx->flags & TTM_OPT_FLAG_ALLOW_RES_EVICT)
ret = true;
*locked = false;
if (busy)
--
2.17.1
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx