Re: [Intel-gfx] [PATCH 10/19] drm/i915/execlists: Assert there are no simple cycles in the dependencies

2018-01-03 Thread Michał Winiarski
On Tue, Jan 02, 2018 at 03:12:26PM +, Chris Wilson wrote:
> The dependency chain must be an acyclic graph. This is checked by the
> swfence, but for sanity, also do a simple check that we do not corrupt
> our list iteration in execlists_schedule() by a shallow dependency
> cycle.
> 
> Signed-off-by: Chris Wilson 

Reviewed-by: Michał Winiarski 

-Michał

> ---
>  drivers/gpu/drm/i915/intel_lrc.c | 11 ---
>  1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 007aec9d95c9..8c9d6cef2482 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1006,7 +1006,8 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
>   stack.signaler = &request->priotree;
>   list_add(&stack.dfs_link, &dfs);
>  
> - /* Recursively bump all dependent priorities to match the new request.
> + /*
> +  * Recursively bump all dependent priorities to match the new request.
>*
>* A naive approach would be to use recursion:
>* static void update_priorities(struct i915_priotree *pt, prio) {
> @@ -1026,12 +1027,15 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
>   list_for_each_entry_safe(dep, p, &dfs, dfs_link) {
>   struct i915_priotree *pt = dep->signaler;
>  
> - /* Within an engine, there can be no cycle, but we may
> + /*
> +  * Within an engine, there can be no cycle, but we may
>* refer to the same dependency chain multiple times
>* (redundant dependencies are not eliminated) and across
>* engines.
>*/
>   list_for_each_entry(p, &pt->signalers_list, signal_link) {
> + GEM_BUG_ON(p == dep); /* no cycles! */
> +
>   if (i915_gem_request_completed(priotree_to_request(p->signaler)))
>   continue;
>  
> @@ -1043,7 +1047,8 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
>   list_safe_reset_next(dep, p, dfs_link);
>   }
>  
> - /* If we didn't need to bump any existing priorities, and we haven't
> + /*
> +  * If we didn't need to bump any existing priorities, and we haven't
>* yet submitted this request (i.e. there is no potential race with
>* execlists_submit_request()), we can set our own priority and skip
>* acquiring the engine locks.
> -- 
> 2.15.1
> 
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH 10/19] drm/i915/execlists: Assert there are no simple cycles in the dependencies

2018-01-02 Thread Chris Wilson
The dependency chain must be an acyclic graph. This is checked by the
swfence, but for sanity, also do a simple check that we do not corrupt
our list iteration in execlists_schedule() by a shallow dependency
cycle.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/intel_lrc.c | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 007aec9d95c9..8c9d6cef2482 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1006,7 +1006,8 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
stack.signaler = &request->priotree;
list_add(&stack.dfs_link, &dfs);
 
-   /* Recursively bump all dependent priorities to match the new request.
+   /*
+* Recursively bump all dependent priorities to match the new request.
 *
 * A naive approach would be to use recursion:
 * static void update_priorities(struct i915_priotree *pt, prio) {
@@ -1026,12 +1027,15 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
list_for_each_entry_safe(dep, p, &dfs, dfs_link) {
struct i915_priotree *pt = dep->signaler;
 
-   /* Within an engine, there can be no cycle, but we may
+   /*
+* Within an engine, there can be no cycle, but we may
 * refer to the same dependency chain multiple times
 * (redundant dependencies are not eliminated) and across
 * engines.
 */
list_for_each_entry(p, &pt->signalers_list, signal_link) {
+   GEM_BUG_ON(p == dep); /* no cycles! */
+
                        if (i915_gem_request_completed(priotree_to_request(p->signaler)))
continue;
 
@@ -1043,7 +1047,8 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
list_safe_reset_next(dep, p, dfs_link);
}
 
-   /* If we didn't need to bump any existing priorities, and we haven't
+   /*
+* If we didn't need to bump any existing priorities, and we haven't
 * yet submitted this request (i.e. there is no potential race with
 * execlists_submit_request()), we can set our own priority and skip
 * acquiring the engine locks.
-- 
2.15.1
