On 2022-09-27 19:06, Sharma, Shashank wrote:
> On 9/27/2022 6:33 PM, Michel Dänzer wrote:
>> On 2022-09-27 13:47, Sharma, Shashank wrote:
>>> On 9/27/2022 12:03 PM, Lazar, Lijo wrote:
>>>> On 9/27/2022 3:10 AM, Shashank Sharma wrote:
>>>>> This patch switches the GPU workload profile based on the
>>>>> workload hint information saved in the workload context.
>>>>> The workload profile is reset to NONE when the job is done.
>>>>>
>>>>> Signed-off-by: Alex Deucher <alexander.deuc...@amd.com>
>>>>> Signed-off-by: Shashank Sharma <shashank.sha...@amd.com>
>>>>> ---
>>>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c           |  2 ++
>>>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx_workload.c |  4 ----
>>>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c          | 15 +++++++++++++++
>>>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h          |  3 +++
>>>>>  4 files changed, 20 insertions(+), 4 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>>> index b7bae833c804..de906a42144f 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>>> @@ -237,6 +237,8 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
>>>>>          goto free_all_kdata;
>>>>>      }
>>>>> +    p->job->workload_mode = p->ctx->workload_mode;
>>>>> +
>>>>>      if (p->uf_entry.tv.bo)
>>>>>          p->job->uf_addr = uf_offset;
>>>>>      kvfree(chunk_array);
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx_workload.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx_workload.c
>>>>> index a11cf29bc388..625114804121 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx_workload.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx_workload.c
>>>>> @@ -55,15 +55,11 @@ int amdgpu_set_workload_profile(struct amdgpu_device *adev,
>>>>>      mutex_lock(&adev->pm.smu_workload_lock);
>>>>> -    if (adev->pm.workload_mode == hint)
>>>>> -        goto unlock;
>>>>> -
>>>>
>>>> What is the expectation when a GFX job + VCN job
>>>> run together (or in general two jobs running in separate
>>>> schedulers) and each prefers a different workload type? FW will
>>>> switch as requested.
>>>
>>> Well, I guess the last switched mode will take over. Do note that,
>>> like most of the PM features, the real benefit of power profiles can
>>> be seen with consistent and similar workloads running for some time
>>> (like gaming, video playback etc.).
>>
>> Not sure how that's supposed to work on a general purpose system,
>> where there are always expected to be multiple processes (one of
>> which being the display server) using the GPU for different
>> workloads.
>>
>> Even in special cases there may be multiple different kinds of
>> workloads constantly being used at the same time, e.g. a fullscreen
>> game with live streaming / recording using VCN.
>>
> It looks like we can accommodate that now; see the recent discussion
> with Felix in patch 5, where we see that "amdgpu_dpm_switch_power_profile
> enables and disables individual profiles. Disabling the 3D profile
> doesn't disable the compute profile at the same time."
>
> So I think we won't be overwriting, but would be enabling/disabling
> individual profiles for compute/3D/MM etc. Of course I will have to
> update the patch series accordingly.
>
>> Have you guys considered letting the display server (DRM master)
>> choose the profile instead?
>>
> This seems to be very good input. In case of a further conflict in
> decision making, we might as well add this intelligence to the DRM
> master. Would you mind explaining a bit more how you think it should
> be done?
I don't have any specific ideas offhand; it was just an idea that
happened to come to my mind, not sure it's a good one at all.

Anyway, I think one important thing is that the same circumstances
consistently result in the same profile being chosen. If it depends on
luck / timing, that's a troubleshooting nightmare.


-- 
Earthling Michel Dänzer            |                  https://redhat.com
Libre software enthusiast          |         Mesa and Xwayland developer