Re: [PATCH 1/1] drm/amdgpu: disable gpu_sched load balancer for vcn jobs

2020-03-17 Thread Luben Tuikov
On 2020-03-12 06:56, Nirmoy wrote:
> 
> On 3/12/20 9:50 AM, Christian König wrote:
>> Am 11.03.20 um 21:55 schrieb Nirmoy:
>>>
>>> On 3/11/20 9:35 PM, Andrey Grodzovsky wrote:

 On 3/11/20 4:32 PM, Nirmoy wrote:
>
> On 3/11/20 9:02 PM, Andrey Grodzovsky wrote:
>>
>> On 3/11/20 4:00 PM, Andrey Grodzovsky wrote:
>>>
>>> On 3/11/20 4:00 PM, Nirmoy Das wrote:
 [SNIP]
 @@ -1257,6 +1258,9 @@ static int amdgpu_cs_submit(struct 
 amdgpu_cs_parser *p,
   priority = job->base.s_priority;
  drm_sched_entity_push_job(&job->base, entity);
   +    if (ring->funcs->no_gpu_sched_loadbalance)
 + amdgpu_ctx_disable_gpu_sched_load_balance(entity);
 +
>>>
>>>
>>> Why does this need to be done each time a job is submitted and not 
>>> once in drm_sched_entity_init (same for amdgpu_job_submit below)?
>>>
>>> Andrey
>>
>>
>> My bad - not in drm_sched_entity_init but in relevant amdgpu code.
>
>
> Hi Andrey,
>
> Do you mean drm_sched_job_init() or after creating VCN entities?
>
>
> Nirmoy


 I guess after creating the VCN entities (has to be amdgpu specific 
 code) - I just don't get why it needs to be done each time a job is 
 submitted, I mean - since you set .no_gpu_sched_loadbalance = true 
 anyway this is always true, so shouldn't you just initialize the 
 VCN entity with a schedulers list consisting of one scheduler and 
 that's it?
>>>
>>>
>>> Assumption: If I understand correctly we shouldn't be doing load 
>>> balancing among VCN jobs in the same context. Christian, James and Leo 
>>> can clarify if I am wrong.
>>>
>>> But we can still load balance VCN jobs among multiple contexts. 
>>> That load-balance decision happens in drm_sched_entity_init(). If we 
>>> initialize the VCN entity with one scheduler then
>>>
>>> all entities, irrespective of context, get that one scheduler, which 
>>> means we are not utilizing the extra VCN instances.
>>
>> Andrey has a very good point here. So far we only looked at this from 
>> the hardware requirement side that we can't change the ring after the 
>> first submission any more.
>>
>> But it is certainly valuable to keep the extra overhead out of the hot 
>> path during command submission.
> 
> 
> 
>>
>>> Ideally we should be calling 
>>> amdgpu_ctx_disable_gpu_sched_load_balance() only once, after the 1st 
>>> call of drm_sched_entity_init() for a VCN job. I am not sure how to do 
>>> that efficiently.
>>>
>>> Another option might be to copy the logic of 
>>> drm_sched_entity_get_free_sched() and choose a suitable VCN sched 
>>> at/after VCN entity creation.
>>
>> Yes, but we should not copy the logic but rather refactor it :)
>>
>> Basically we need a drm_sched_pick_best() function which gets an array 
>> of drm_gpu_scheduler structures and returns the one with the least 
>> load on it.
>>
>> This function can then be used by VCN to pick one instance before 
>> initializing the entity as well as a replacement for 
>> drm_sched_entity_get_free_sched() to change the scheduler for load 
>> balancing.
> 
> 
> This sounds like an optimal solution here.
> 
> Thanks Andrey and Christian. I will resend with suggested changes.

Note that this isn't an optimal solution. drm_sched_pick_best()
and drm_sched_entity_get_free_sched() (these names are too long) are similar
in what they do: they pick a scheduler, which is still centralized
decision making.

An optimal solution would be for each execution unit to pick work
when work is available, which is a decentralized decision model.

Not sure how an array would be used, as the proposition here is
laid out--would that be an O(n) search through the array?

In any case, centralized decision making introduces a bottleneck. Decentralized
solutions are available for scheduling with O(1) time complexity.

Regards,
Luben


> 
> 
>>
>> Regards,
>> Christian.
>>
>>>
>>>
>>> Regards,
>>>
>>> Nirmoy
>>>
>>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> 

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 1/1] drm/amdgpu: disable gpu_sched load balancer for vcn jobs

2020-03-12 Thread Christian König

Am 11.03.20 um 21:55 schrieb Nirmoy:


On 3/11/20 9:35 PM, Andrey Grodzovsky wrote:


On 3/11/20 4:32 PM, Nirmoy wrote:


On 3/11/20 9:02 PM, Andrey Grodzovsky wrote:


On 3/11/20 4:00 PM, Andrey Grodzovsky wrote:


On 3/11/20 4:00 PM, Nirmoy Das wrote:

[SNIP]
@@ -1257,6 +1258,9 @@ static int amdgpu_cs_submit(struct 
amdgpu_cs_parser *p,

  priority = job->base.s_priority;
  drm_sched_entity_push_job(&job->base, entity);
  +    if (ring->funcs->no_gpu_sched_loadbalance)
+ amdgpu_ctx_disable_gpu_sched_load_balance(entity);
+



Why does this need to be done each time a job is submitted and not 
once in drm_sched_entity_init (same for amdgpu_job_submit below)?


Andrey



My bad - not in drm_sched_entity_init but in relevant amdgpu code.



Hi Andrey,

Do you mean drm_sched_job_init() or after creating VCN entities?


Nirmoy



I guess after creating the VCN entities (has to be amdgpu specific 
code) - I just don't get why it needs to be done each time a job is 
submitted, I mean - since you set .no_gpu_sched_loadbalance = true 
anyway this is always true, so shouldn't you just initialize the 
VCN entity with a schedulers list consisting of one scheduler and 
that's it?



Assumption: If I understand correctly we shouldn't be doing load 
balancing among VCN jobs in the same context. Christian, James and Leo 
can clarify if I am wrong.


But we can still load balance VCN jobs among multiple contexts. 
That load-balance decision happens in drm_sched_entity_init(). If we 
initialize the VCN entity with one scheduler then


all entities, irrespective of context, get that one scheduler, which 
means we are not utilizing the extra VCN instances.


Andrey has a very good point here. So far we only looked at this from 
the hardware requirement side that we can't change the ring after the 
first submission any more.


But it is certainly valuable to keep the extra overhead out of the hot 
path during command submission.


Ideally we should be calling 
amdgpu_ctx_disable_gpu_sched_load_balance() only once, after the 1st call 
of drm_sched_entity_init() for a VCN job. I am not sure how to do that 
efficiently.


Another option might be to copy the logic of 
drm_sched_entity_get_free_sched() and choose a suitable VCN sched 
at/after VCN entity creation.


Yes, but we should not copy the logic but rather refactor it :)

Basically we need a drm_sched_pick_best() function which gets an array 
of drm_gpu_scheduler structures and returns the one with the least load 
on it.


This function can then be used by VCN to pick one instance before 
initializing the entity as well as a replacement for 
drm_sched_entity_get_free_sched() to change the scheduler for load 
balancing.


Regards,
Christian.




Regards,

Nirmoy



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 1/1] drm/amdgpu: disable gpu_sched load balancer for vcn jobs

2020-03-11 Thread Nirmoy


On 3/11/20 9:35 PM, Andrey Grodzovsky wrote:


On 3/11/20 4:32 PM, Nirmoy wrote:


On 3/11/20 9:02 PM, Andrey Grodzovsky wrote:


On 3/11/20 4:00 PM, Andrey Grodzovsky wrote:


On 3/11/20 4:00 PM, Nirmoy Das wrote:

VCN HW doesn't support dynamic load balancing across multiple
instances for a context. This patch modifies the entity's
sched_list to a sched_list consisting of only one drm scheduler.

Signed-off-by: Nirmoy Das 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 ++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 13 +++++++++++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h  |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  3 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
  drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c    |  2 ++
  drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c    |  2 ++
  drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c    |  2 ++
  8 files changed, 28 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c

index 8304d0c87899..db0eef19c636 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1203,6 +1203,7 @@ static int amdgpu_cs_submit(struct 
amdgpu_cs_parser *p,

  union drm_amdgpu_cs *cs)
  {
  struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+    struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
  struct drm_sched_entity *entity = p->entity;
  enum drm_sched_priority priority;
  struct amdgpu_bo_list_entry *e;
@@ -1257,6 +1258,9 @@ static int amdgpu_cs_submit(struct 
amdgpu_cs_parser *p,

  priority = job->base.s_priority;
  drm_sched_entity_push_job(&job->base, entity);
  +    if (ring->funcs->no_gpu_sched_loadbalance)
+ amdgpu_ctx_disable_gpu_sched_load_balance(entity);
+



Why does this need to be done each time a job is submitted and not once 
in drm_sched_entity_init (same for amdgpu_job_submit below)?


Andrey



My bad - not in drm_sched_entity_init but in relevant amdgpu code.



Hi Andrey,

Do you mean drm_sched_job_init() or after creating VCN entities?


Nirmoy



I guess after creating the VCN entities (has to be amdgpu specific 
code) - I just don't get why it needs to be done each time a job is 
submitted, I mean - since you set .no_gpu_sched_loadbalance = true 
anyway this is always true, so shouldn't you just initialize the 
VCN entity with a schedulers list consisting of one scheduler and 
that's it?



Assumption: If I understand correctly we shouldn't be doing load balancing 
among VCN jobs in the same context. Christian, James and Leo can clarify 
if I am wrong.


But we can still load balance VCN jobs among multiple contexts. 
That load-balance decision happens in drm_sched_entity_init(). If we 
initialize the VCN entity with one scheduler then


all entities, irrespective of context, get that one scheduler, which means 
we are not utilizing the extra VCN instances.



Ideally we should be calling amdgpu_ctx_disable_gpu_sched_load_balance() 
only once, after the 1st call of drm_sched_entity_init() for a VCN job. I am 
not sure how to do that efficiently.


Another option might be to copy the logic of 
drm_sched_entity_get_free_sched() and choose a suitable VCN sched at/after 
VCN entity creation.



Regards,

Nirmoy




Andrey






Andrey






  amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
    ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c

index fa575bdc03c8..1127e8f77721 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -559,6 +559,19 @@ void amdgpu_ctx_priority_override(struct 
amdgpu_ctx *ctx,

  }
  }
  +/**
+ * amdgpu_ctx_disable_gpu_sched_load_balance - disable 
gpu_sched's load balancer

+ * @entity: entity on which load balancer will be disabled
+ */
+void amdgpu_ctx_disable_gpu_sched_load_balance(struct 
drm_sched_entity *entity)

+{
+    struct drm_gpu_scheduler **scheds = &entity->rq->sched;
+
+    /* disable gpu_sched's job load balancer by assigning only one */
+    /* drm scheduler to the entity */
+    drm_sched_entity_modify_sched(entity, scheds, 1);
+}
+
  int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx,
 struct drm_sched_entity *entity)
  {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h

index de490f183af2..3a2f900b8000 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
@@ -90,5 +90,6 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr 
*mgr);

    void amdgpu_ctx_init_sched(struct amdgpu_device *adev);
  +void amdgpu_ctx_disable_gpu_sched_load_balance(struct 
drm_sched_entity *entity);

    #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c

index 4981e443a884..64dad7ba74da 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -140,6 +140,7 @@ 

Re: [PATCH 1/1] drm/amdgpu: disable gpu_sched load balancer for vcn jobs

2020-03-11 Thread Andrey Grodzovsky


On 3/11/20 4:32 PM, Nirmoy wrote:


On 3/11/20 9:02 PM, Andrey Grodzovsky wrote:


On 3/11/20 4:00 PM, Andrey Grodzovsky wrote:


On 3/11/20 4:00 PM, Nirmoy Das wrote:

VCN HW doesn't support dynamic load balancing across multiple
instances for a context. This patch modifies the entity's
sched_list to a sched_list consisting of only one drm scheduler.

Signed-off-by: Nirmoy Das 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 ++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 13 +++++++++++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h  |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  3 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
  drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c    |  2 ++
  drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c    |  2 ++
  drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c    |  2 ++
  8 files changed, 28 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c

index 8304d0c87899..db0eef19c636 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1203,6 +1203,7 @@ static int amdgpu_cs_submit(struct 
amdgpu_cs_parser *p,

  union drm_amdgpu_cs *cs)
  {
  struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+    struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
  struct drm_sched_entity *entity = p->entity;
  enum drm_sched_priority priority;
  struct amdgpu_bo_list_entry *e;
@@ -1257,6 +1258,9 @@ static int amdgpu_cs_submit(struct 
amdgpu_cs_parser *p,

  priority = job->base.s_priority;
  drm_sched_entity_push_job(&job->base, entity);
  +    if (ring->funcs->no_gpu_sched_loadbalance)
+    amdgpu_ctx_disable_gpu_sched_load_balance(entity);
+



Why does this need to be done each time a job is submitted and not once 
in drm_sched_entity_init (same for amdgpu_job_submit below)?


Andrey



My bad - not in drm_sched_entity_init but in relevant amdgpu code.



Hi Andrey,

Do you mean drm_sched_job_init() or after creating VCN entities?


Nirmoy



I guess after creating the VCN entities (has to be amdgpu specific code) 
- I just don't get why it needs to be done each time a job is submitted, I 
mean - since you set .no_gpu_sched_loadbalance = true anyway this is 
always true, so shouldn't you just initialize the VCN entity with a 
schedulers list consisting of one scheduler and that's it?


Andrey






Andrey






  amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
    ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c

index fa575bdc03c8..1127e8f77721 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -559,6 +559,19 @@ void amdgpu_ctx_priority_override(struct 
amdgpu_ctx *ctx,

  }
  }
  +/**
+ * amdgpu_ctx_disable_gpu_sched_load_balance - disable gpu_sched's 
load balancer

+ * @entity: entity on which load balancer will be disabled
+ */
+void amdgpu_ctx_disable_gpu_sched_load_balance(struct 
drm_sched_entity *entity)

+{
+    struct drm_gpu_scheduler **scheds = &entity->rq->sched;
+
+    /* disable gpu_sched's job load balancer by assigning only one */
+    /* drm scheduler to the entity */
+    drm_sched_entity_modify_sched(entity, scheds, 1);
+}
+
  int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx,
 struct drm_sched_entity *entity)
  {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h

index de490f183af2..3a2f900b8000 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
@@ -90,5 +90,6 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr 
*mgr);

    void amdgpu_ctx_init_sched(struct amdgpu_device *adev);
  +void amdgpu_ctx_disable_gpu_sched_load_balance(struct 
drm_sched_entity *entity);

    #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c

index 4981e443a884..64dad7ba74da 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -140,6 +140,7 @@ void amdgpu_job_free(struct amdgpu_job *job)
  int amdgpu_job_submit(struct amdgpu_job *job, struct 
drm_sched_entity *entity,

    void *owner, struct dma_fence **f)
  {
+    struct amdgpu_ring *ring = to_amdgpu_ring(entity->rq->sched);
  enum drm_sched_priority priority;
  int r;
  @@ -154,6 +155,8 @@ int amdgpu_job_submit(struct amdgpu_job *job, 
struct drm_sched_entity *entity,

  amdgpu_job_free_resources(job);
  priority = job->base.s_priority;
  drm_sched_entity_push_job(&job->base, entity);
+    if (ring->funcs->no_gpu_sched_loadbalance)
+    amdgpu_ctx_disable_gpu_sched_load_balance(entity);
    return 0;
  }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h

index 448c76cbf3ed..f78fe1a6912b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ 

Re: [PATCH 1/1] drm/amdgpu: disable gpu_sched load balancer for vcn jobs

2020-03-11 Thread Nirmoy


On 3/11/20 9:14 PM, James Zhu wrote:


On 2020-03-11 4:00 p.m., Nirmoy Das wrote:

VCN HW doesn't support dynamic load balancing across multiple
instances for a context. This patch modifies the entity's
sched_list to a sched_list consisting of only one drm scheduler.

Signed-off-by: Nirmoy Das 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 ++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 13 +++++++++++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h  |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  3 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
  drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c    |  2 ++
  drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c    |  2 ++
  drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c    |  2 ++
  8 files changed, 28 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c

index 8304d0c87899..db0eef19c636 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1203,6 +1203,7 @@ static int amdgpu_cs_submit(struct 
amdgpu_cs_parser *p,

  union drm_amdgpu_cs *cs)
  {
  struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+    struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
  struct drm_sched_entity *entity = p->entity;
  enum drm_sched_priority priority;
  struct amdgpu_bo_list_entry *e;
@@ -1257,6 +1258,9 @@ static int amdgpu_cs_submit(struct 
amdgpu_cs_parser *p,

  priority = job->base.s_priority;
  drm_sched_entity_push_job(&job->base, entity);
  +    if (ring->funcs->no_gpu_sched_loadbalance)
+    amdgpu_ctx_disable_gpu_sched_load_balance(entity);


Does this mean that only VCN IP instance 0 dec/enc will be scheduled?


No, not really. drm_sched_job_init() gets called before 
amdgpu_ctx_disable_gpu_sched_load_balance().


The 1st drm_sched_job_init() call will choose the least-loaded VCN instance, 
and that VCN instance will stay for the whole life of the context.



Regards,

Nirmoy



Best Regards!

James


+
  amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
    ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c

index fa575bdc03c8..1127e8f77721 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -559,6 +559,19 @@ void amdgpu_ctx_priority_override(struct 
amdgpu_ctx *ctx,

  }
  }
  +/**
+ * amdgpu_ctx_disable_gpu_sched_load_balance - disable gpu_sched's 
load balancer

+ * @entity: entity on which load balancer will be disabled
+ */
+void amdgpu_ctx_disable_gpu_sched_load_balance(struct 
drm_sched_entity *entity)

+{
+    struct drm_gpu_scheduler **scheds = &entity->rq->sched;
+
+    /* disable gpu_sched's job load balancer by assigning only one */
+    /* drm scheduler to the entity */
+    drm_sched_entity_modify_sched(entity, scheds, 1);
+}
+
  int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx,
 struct drm_sched_entity *entity)
  {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h

index de490f183af2..3a2f900b8000 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
@@ -90,5 +90,6 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr);
    void amdgpu_ctx_init_sched(struct amdgpu_device *adev);
  +void amdgpu_ctx_disable_gpu_sched_load_balance(struct 
drm_sched_entity *entity);

    #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c

index 4981e443a884..64dad7ba74da 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -140,6 +140,7 @@ void amdgpu_job_free(struct amdgpu_job *job)
  int amdgpu_job_submit(struct amdgpu_job *job, struct 
drm_sched_entity *entity,

    void *owner, struct dma_fence **f)
  {
+    struct amdgpu_ring *ring = to_amdgpu_ring(entity->rq->sched);
  enum drm_sched_priority priority;
  int r;
  @@ -154,6 +155,8 @@ int amdgpu_job_submit(struct amdgpu_job *job, 
struct drm_sched_entity *entity,

  amdgpu_job_free_resources(job);
  priority = job->base.s_priority;
  drm_sched_entity_push_job(&job->base, entity);
+    if (ring->funcs->no_gpu_sched_loadbalance)
+    amdgpu_ctx_disable_gpu_sched_load_balance(entity);
    return 0;
  }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h

index 448c76cbf3ed..f78fe1a6912b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -115,6 +115,7 @@ struct amdgpu_ring_funcs {
  u32    nop;
  bool    support_64bit_ptrs;
  bool    no_user_fence;
+    bool    no_gpu_sched_loadbalance;
  unsigned    vmhub;
  unsigned    extra_dw;
  diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c 
b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c

index 

Re: [PATCH 1/1] drm/amdgpu: disable gpu_sched load balancer for vcn jobs

2020-03-11 Thread Nirmoy


On 3/11/20 9:02 PM, Andrey Grodzovsky wrote:


On 3/11/20 4:00 PM, Andrey Grodzovsky wrote:


On 3/11/20 4:00 PM, Nirmoy Das wrote:

VCN HW doesn't support dynamic load balancing across multiple
instances for a context. This patch modifies the entity's
sched_list to a sched_list consisting of only one drm scheduler.

Signed-off-by: Nirmoy Das 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 ++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 13 +++++++++++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h  |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  3 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
  drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c    |  2 ++
  drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c    |  2 ++
  drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c    |  2 ++
  8 files changed, 28 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c

index 8304d0c87899..db0eef19c636 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1203,6 +1203,7 @@ static int amdgpu_cs_submit(struct 
amdgpu_cs_parser *p,

  union drm_amdgpu_cs *cs)
  {
  struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+    struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
  struct drm_sched_entity *entity = p->entity;
  enum drm_sched_priority priority;
  struct amdgpu_bo_list_entry *e;
@@ -1257,6 +1258,9 @@ static int amdgpu_cs_submit(struct 
amdgpu_cs_parser *p,

  priority = job->base.s_priority;
  drm_sched_entity_push_job(&job->base, entity);
  +    if (ring->funcs->no_gpu_sched_loadbalance)
+    amdgpu_ctx_disable_gpu_sched_load_balance(entity);
+



Why does this need to be done each time a job is submitted and not once 
in drm_sched_entity_init (same for amdgpu_job_submit below)?


Andrey



My bad - not in drm_sched_entity_init but in relevant amdgpu code.



Hi Andrey,

Do you mean drm_sched_job_init() or after creating VCN entities?


Nirmoy



Andrey






  amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
    ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c

index fa575bdc03c8..1127e8f77721 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -559,6 +559,19 @@ void amdgpu_ctx_priority_override(struct 
amdgpu_ctx *ctx,

  }
  }
  +/**
+ * amdgpu_ctx_disable_gpu_sched_load_balance - disable gpu_sched's 
load balancer

+ * @entity: entity on which load balancer will be disabled
+ */
+void amdgpu_ctx_disable_gpu_sched_load_balance(struct 
drm_sched_entity *entity)

+{
+    struct drm_gpu_scheduler **scheds = &entity->rq->sched;
+
+    /* disable gpu_sched's job load balancer by assigning only one */
+    /* drm scheduler to the entity */
+    drm_sched_entity_modify_sched(entity, scheds, 1);
+}
+
  int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx,
 struct drm_sched_entity *entity)
  {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h

index de490f183af2..3a2f900b8000 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
@@ -90,5 +90,6 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr);
    void amdgpu_ctx_init_sched(struct amdgpu_device *adev);
  +void amdgpu_ctx_disable_gpu_sched_load_balance(struct 
drm_sched_entity *entity);

    #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c

index 4981e443a884..64dad7ba74da 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -140,6 +140,7 @@ void amdgpu_job_free(struct amdgpu_job *job)
  int amdgpu_job_submit(struct amdgpu_job *job, struct 
drm_sched_entity *entity,

    void *owner, struct dma_fence **f)
  {
+    struct amdgpu_ring *ring = to_amdgpu_ring(entity->rq->sched);
  enum drm_sched_priority priority;
  int r;
  @@ -154,6 +155,8 @@ int amdgpu_job_submit(struct amdgpu_job *job, 
struct drm_sched_entity *entity,

  amdgpu_job_free_resources(job);
  priority = job->base.s_priority;
  drm_sched_entity_push_job(&job->base, entity);
+    if (ring->funcs->no_gpu_sched_loadbalance)
+    amdgpu_ctx_disable_gpu_sched_load_balance(entity);
    return 0;
  }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h

index 448c76cbf3ed..f78fe1a6912b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -115,6 +115,7 @@ struct amdgpu_ring_funcs {
  u32    nop;
  bool    support_64bit_ptrs;
  bool    no_user_fence;
+    bool    no_gpu_sched_loadbalance;
  unsigned    vmhub;
  unsigned    extra_dw;
  diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c 
b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c

index 

Re: [PATCH 1/1] drm/amdgpu: disable gpu_sched load balancer for vcn jobs

2020-03-11 Thread James Zhu



On 2020-03-11 4:00 p.m., Nirmoy Das wrote:

VCN HW doesn't support dynamic load balancing across multiple
instances for a context. This patch modifies the entity's
sched_list to a sched_list consisting of only one drm scheduler.

Signed-off-by: Nirmoy Das 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 ++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 13 +++++++++++++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h  |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  3 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
  drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c    |  2 ++
  drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c    |  2 ++
  drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c    |  2 ++
  8 files changed, 28 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 8304d0c87899..db0eef19c636 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1203,6 +1203,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
union drm_amdgpu_cs *cs)
  {
struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+   struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
struct drm_sched_entity *entity = p->entity;
enum drm_sched_priority priority;
struct amdgpu_bo_list_entry *e;
@@ -1257,6 +1258,9 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
	priority = job->base.s_priority;
	drm_sched_entity_push_job(&job->base, entity);

+	if (ring->funcs->no_gpu_sched_loadbalance)
+		amdgpu_ctx_disable_gpu_sched_load_balance(entity);


Does this mean that only VCN IP instance 0 dec/enc will be scheduled?

Best Regards!

James


+
	amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);

	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index fa575bdc03c8..1127e8f77721 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -559,6 +559,19 @@ void amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx,
}
  }
  
+/**

+ * amdgpu_ctx_disable_gpu_sched_load_balance - disable gpu_sched's load 
balancer
+ * @entity: entity on which load balancer will be disabled
+ */
+void amdgpu_ctx_disable_gpu_sched_load_balance(struct drm_sched_entity *entity)
+{
+   struct drm_gpu_scheduler **scheds = &entity->rq->sched;
+
+   /* disable gpu_sched's job load balancer by assigning only one */
+   /* drm scheduler to the entity */
+   drm_sched_entity_modify_sched(entity, scheds, 1);
+}
+
  int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx,
   struct drm_sched_entity *entity)
  {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
index de490f183af2..3a2f900b8000 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
@@ -90,5 +90,6 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr);
  
  void amdgpu_ctx_init_sched(struct amdgpu_device *adev);
  
+void amdgpu_ctx_disable_gpu_sched_load_balance(struct drm_sched_entity *entity);
  
  #endif

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 4981e443a884..64dad7ba74da 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -140,6 +140,7 @@ void amdgpu_job_free(struct amdgpu_job *job)
  int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
  void *owner, struct dma_fence **f)
  {
+   struct amdgpu_ring *ring = to_amdgpu_ring(entity->rq->sched);
enum drm_sched_priority priority;
int r;
  
@@ -154,6 +155,8 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,

amdgpu_job_free_resources(job);
priority = job->base.s_priority;
	drm_sched_entity_push_job(&job->base, entity);
+   if (ring->funcs->no_gpu_sched_loadbalance)
+   amdgpu_ctx_disable_gpu_sched_load_balance(entity);
  
  	return 0;

  }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 448c76cbf3ed..f78fe1a6912b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -115,6 +115,7 @@ struct amdgpu_ring_funcs {
u32 nop;
boolsupport_64bit_ptrs;
boolno_user_fence;
+   boolno_gpu_sched_loadbalance;
unsignedvmhub;
unsignedextra_dw;
  
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c

index 71f61afdc655..749ccdb5fbfb 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -1871,6 +1871,7 @@ static const struct amdgpu_ring_funcs 

[PATCH 1/1] drm/amdgpu: disable gpu_sched load balancer for vcn jobs

2020-03-11 Thread Nirmoy Das
VCN HW doesn't support dynamic load balance on multiple
instances for a context. This patch modifies the entity's
sched_list to a sched_list consisting of only one drm scheduler.

Signed-off-by: Nirmoy Das 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 ++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 13 +++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h  |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  3 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
 drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c    |  2 ++
 drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c    |  2 ++
 drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c    |  2 ++
 8 files changed, 28 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 8304d0c87899..db0eef19c636 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1203,6 +1203,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
union drm_amdgpu_cs *cs)
 {
struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+   struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
struct drm_sched_entity *entity = p->entity;
enum drm_sched_priority priority;
struct amdgpu_bo_list_entry *e;
@@ -1257,6 +1258,9 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
priority = job->base.s_priority;
	drm_sched_entity_push_job(&job->base, entity);
 
+   if (ring->funcs->no_gpu_sched_loadbalance)
+   amdgpu_ctx_disable_gpu_sched_load_balance(entity);
+
	amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
 
	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index fa575bdc03c8..1127e8f77721 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -559,6 +559,19 @@ void amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx,
}
 }
 
+/**
+ * amdgpu_ctx_disable_gpu_sched_load_balance - disable gpu_sched's load balancer
+ * @entity: entity on which load balancer will be disabled
+ */
+void amdgpu_ctx_disable_gpu_sched_load_balance(struct drm_sched_entity *entity)
+{
+	struct drm_gpu_scheduler **scheds = &entity->rq->sched;
+
+   /* disable gpu_sched's job load balancer by assigning only one */
+   /* drm scheduler to the entity */
+   drm_sched_entity_modify_sched(entity, scheds, 1);
+}
+
 int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx,
   struct drm_sched_entity *entity)
 {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
index de490f183af2..3a2f900b8000 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
@@ -90,5 +90,6 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr);
 
 void amdgpu_ctx_init_sched(struct amdgpu_device *adev);
 
+void amdgpu_ctx_disable_gpu_sched_load_balance(struct drm_sched_entity *entity);
 
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 4981e443a884..64dad7ba74da 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -140,6 +140,7 @@ void amdgpu_job_free(struct amdgpu_job *job)
 int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
  void *owner, struct dma_fence **f)
 {
+   struct amdgpu_ring *ring = to_amdgpu_ring(entity->rq->sched);
enum drm_sched_priority priority;
int r;
 
@@ -154,6 +155,8 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
amdgpu_job_free_resources(job);
priority = job->base.s_priority;
	drm_sched_entity_push_job(&job->base, entity);
+   if (ring->funcs->no_gpu_sched_loadbalance)
+   amdgpu_ctx_disable_gpu_sched_load_balance(entity);
 
return 0;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 448c76cbf3ed..f78fe1a6912b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -115,6 +115,7 @@ struct amdgpu_ring_funcs {
u32 nop;
	bool		support_64bit_ptrs;
	bool		no_user_fence;
+	bool		no_gpu_sched_loadbalance;
	unsigned	vmhub;
	unsigned	extra_dw;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index 71f61afdc655..749ccdb5fbfb 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -1871,6 +1871,7 @@ static const struct amdgpu_ring_funcs vcn_v1_0_dec_ring_vm_funcs = {
.align_mask = 0xf,
.support_64bit_ptrs = false,
.no_user_fence = true,
+	.no_gpu_sched_loadbalance = true,

[PATCH 1/1] drm/amdgpu: disable gpu_sched load balancer for vcn jobs

2020-03-11 Thread Nirmoy Das
VCN HW doesn't support dynamic load balance on multiple
instances for a context. This patch modifies the entity's
sched_list to a sched_list consisting of only one drm scheduler.

Signed-off-by: Nirmoy Das 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  4 ++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  | 14 ++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h  |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  3 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 +
 drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c    |  2 ++
 drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c    |  2 ++
 drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c    |  2 ++
 8 files changed, 29 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 8304d0c87899..db0eef19c636 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1203,6 +1203,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
union drm_amdgpu_cs *cs)
 {
struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+   struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched);
struct drm_sched_entity *entity = p->entity;
enum drm_sched_priority priority;
struct amdgpu_bo_list_entry *e;
@@ -1257,6 +1258,9 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
priority = job->base.s_priority;
	drm_sched_entity_push_job(&job->base, entity);
 
+   if (ring->funcs->no_gpu_sched_loadbalance)
+   amdgpu_ctx_disable_gpu_sched_load_balance(entity);
+
	amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
 
	ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index fa575bdc03c8..d699207d6266 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -559,6 +559,20 @@ void amdgpu_ctx_priority_override(struct amdgpu_ctx *ctx,
}
 }
 
+/**
+ * amdgpu_ctx_disable_gpu_sched_load_balance - disable gpu_sched's load balancer
+ * @entity: entity on which load balancer will be disabled
+ */
+void amdgpu_ctx_disable_gpu_sched_load_balance(struct drm_sched_entity *entity)
+{
+	struct drm_gpu_scheduler **scheds = &entity->rq->sched;
+
+   /* disable gpu_sched's job load balancer by assigning only one */
+   /* drm scheduler to the entity */
+   drm_sched_entity_modify_sched(entity, scheds, 1);
+
+}
+
 int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx,
   struct drm_sched_entity *entity)
 {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
index de490f183af2..3a2f900b8000 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
@@ -90,5 +90,6 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr);
 
 void amdgpu_ctx_init_sched(struct amdgpu_device *adev);
 
+void amdgpu_ctx_disable_gpu_sched_load_balance(struct drm_sched_entity *entity);
 
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 4981e443a884..64dad7ba74da 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -140,6 +140,7 @@ void amdgpu_job_free(struct amdgpu_job *job)
 int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
  void *owner, struct dma_fence **f)
 {
+   struct amdgpu_ring *ring = to_amdgpu_ring(entity->rq->sched);
enum drm_sched_priority priority;
int r;
 
@@ -154,6 +155,8 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
amdgpu_job_free_resources(job);
priority = job->base.s_priority;
	drm_sched_entity_push_job(&job->base, entity);
+   if (ring->funcs->no_gpu_sched_loadbalance)
+   amdgpu_ctx_disable_gpu_sched_load_balance(entity);
 
return 0;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 448c76cbf3ed..f78fe1a6912b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -115,6 +115,7 @@ struct amdgpu_ring_funcs {
u32 nop;
	bool		support_64bit_ptrs;
	bool		no_user_fence;
+	bool		no_gpu_sched_loadbalance;
	unsigned	vmhub;
	unsigned	extra_dw;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index 71f61afdc655..749ccdb5fbfb 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -1871,6 +1871,7 @@ static const struct amdgpu_ring_funcs vcn_v1_0_dec_ring_vm_funcs = {
.align_mask = 0xf,
.support_64bit_ptrs = false,
.no_user_fence = true,
+	.no_gpu_sched_loadbalance = true,