RE: [PATCH] WIP: drm/dp_mst: Add support for dumping topology ref histories from debugfs

2022-04-24 Thread Lin, Wayne
[AMD Official Use Only - General]

Thank you Lyude!

I have a tentative patch set to deal with this and would like to test it on
some platforms first. If the results are positive, I will send it out for
review immediately.

Thanks for following up on this : )

Regards,
Wayne Lin

> -----Original Message-----
> From: Lyude Paul 
> Sent: Thursday, April 21, 2022 7:01 AM
> To: Lin, Wayne 
> Cc: dri-de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org
> Subject: Re: [PATCH] WIP: drm/dp_mst: Add support for dumping topology ref 
> histories from debugfs
>
> Hey! Figured I'd check if there have been any status updates here since it's
> been a while, just to make sure I haven't dropped this issue from my radar.
> No problem if you're busy :)
>
> On Wed, 2022-03-16 at 10:46 +, Lin, Wayne wrote:
> > [Public]
> >
> > > -----Original Message-----
> > > From: Lyude Paul 
> > > Sent: Wednesday, March 16, 2022 8:48 AM
> > > To: Lin, Wayne 
> > > Cc: dri-de...@lists.freedesktop.org; amd-gfx@lists.freedesktop.org
> > > Subject: Re: [PATCH] WIP: drm/dp_mst: Add support for dumping topology ref
> > > histories from debugfs
> > >
> > > (Adding this back to the dri-devel mailing list since I didn't notice it
> > > got dropped from there)
> > >
> > > Hm, some comments on this issue down below. Sorry for the delayed
> > > response; I was going to try this right after I finished the MST legacy
> > > removal, but that's ending up taking longer than I hoped.
> > >
> > > On Tue, 2022-03-08 at 01:46 +, Lin, Wayne wrote:
> > > > [AMD Official Use Only]
> > > >
> > > > Oops...
> > > > I didn't notice that I replied to the wrong mail previously : P
> > > > Sorry for the confusion; replying to the correct thread now.
> > > >
> > > > Good day!
> > > > I probably know why you can't reproduce the issue there. Please refer
> > > > to the attached patch file, which makes it easier for me to explain this.
> > > >
> > > > In dm_dp_mst_get_modes(), we will create a new dc_sink by calling
> > > > dc_link_add_remote_sink() and add that dc_sink into the array
> > > > dc_link->remote_sinks[]. However, if we find that aconnector->dc_sink
> > > > is not null, then we won't create a new dc_sink for that aconnector
> > > > and won't add the sink to the array dc_link->remote_sinks[].
> > > >
> > > > When the issue that I mentioned before occurs, we won't be able to
> > > > release the dc_sink for the aconnector when we unplug the SST monitor.
> > > > Hence, we can't create a new dc_sink for the stream sink that is later
> > > > connected to that port, which leads to two issues:
> > > > 1. Unplug the monitor and plug another monitor "into the same port"
> > > > => Since we reuse the same dc_sink to reflect the capability of the
> > > > newly connected stream sink, we might send out an inappropriate stream
> > > > to the new sink.
> > > > 2. Because we can't release the dc_sink in the array
> > > > dc_link->remote_sinks[], when we plug/unplug an SST monitor into more
> > > > than 4 different MST ports, we will violate the check
> > > > "dc_link->sink_count >= MAX_SINKS_PER_LINK" in
> > > > link_add_remote_sink_helper().
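
For reference, a minimal standalone C model of the failure mode described
above - not amdgpu code (fake_link and its add_remote_sink() are made-up
stand-ins); it only shows how never releasing a remote sink exhausts the
MAX_SINKS_PER_LINK budget:

#include <stdio.h>

#define MAX_SINKS_PER_LINK 4

struct fake_link {
	int sink_count;		/* stands in for dc_link->sink_count */
};

/* Mirrors the "dc_link->sink_count >= MAX_SINKS_PER_LINK" criteria
 * checked in link_add_remote_sink_helper(). */
static int add_remote_sink(struct fake_link *link)
{
	if (link->sink_count >= MAX_SINKS_PER_LINK)
		return -1;	/* helper refuses: the array is full */
	link->sink_count++;	/* leaked: never decremented on unplug */
	return 0;
}

int main(void)
{
	struct fake_link link = { 0 };
	int port;

	/* Plug an SST monitor into 5 different MST ports without the
	 * matching release on unplug: the 5th add is rejected. */
	for (port = 1; port <= 5; port++)
		printf("port %d: %s\n", port,
		       add_remote_sink(&link) ? "rejected" : "added");
	return 0;
}
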
> > >
> > > Ok, a lot of this seems to be happening in amdgpu, which certainly
> > > explains why I had so much trouble following along with this originally
> > > (also, having learned a bit more about how DC works definitely has
> > > helped a bit). I already see a bunch of issues, though, with how this
> > > code is structured that would definitely break things, though note I
> > > haven't sat down and tried this on a real machine yet - will try tomorrow.
> > >
> > > So - it seems like dc_sink == a bunch of hotplugging state for a dm
> > > connector, which actually tells me for one that this is definitely the
> > > wrong spot for this code. get_modes() really should just handle filling
> > > out DRM modes and pretty much nothing else, because callers from DRM
> > > aren't going to expect side-effects like this when get_modes() is
> > > called - which could potentially lead to all sorts of weirdness down
> > > the line if the DRM call paths that use this ever change. i915 and
> > > nouveau have good examples of what these functions should typically
> > > look like, and amdgpu actually seems to mostly follow this advice for
> > > its other get_modes() callback.
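
For reference, a rough sketch of that side-effect-free shape - not
i915/nouveau/amdgpu source; struct example_connector and its cached edid
field are hypothetical - where get_modes() only consumes state gathered at
detect() time:

#include <drm/drm_connector.h>
#include <drm/drm_edid.h>

/* Hypothetical driver connector; only the EDID cached during
 * detect()/hotplug handling matters here. */
struct example_connector {
	struct drm_connector base;
	struct edid *edid;
};

static int example_get_modes(struct drm_connector *connector)
{
	struct example_connector *ec =
		container_of(connector, struct example_connector, base);

	/* Report modes from the cached EDID; no sink creation, no
	 * hotplug bookkeeping, no other side effects. */
	drm_connector_update_edid_property(connector, ec->edid);
	return drm_add_edid_modes(connector, ec->edid);
}
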
> > >
> > > Note there's also another problem here: the assumption "no EDID ==
> > > disconnected" isn't correct. It's totally possible to run into EDID-less
> > > displays if the display is some ancient pre-historic relic, or if the
> > > ROM (or EEPROM? can't remember what type of chip computer displays tend
> > > to use...) chip in the monitor containing the EDID has died. Note that
> > > the second situation is surprisingly more common than one might think! I
> > > ran into a 140Hz 4k ASUS display where this happened, and I 
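
On the EDID-less case, a hedged sketch - not from any driver, and
example_read_edid() is a hypothetical helper - of a get_modes() that treats
a missing EDID as "connected, capabilities unknown" by falling back to
drm_add_modes_noedid() instead of reporting nothing:

#include <drm/drm_connector.h>
#include <drm/drm_edid.h>

static int example_get_modes_no_edid(struct drm_connector *connector)
{
	/* example_read_edid() is hypothetical: however the driver
	 * fetches its EDID, it may come back NULL on a dead EEPROM. */
	struct edid *edid = example_read_edid(connector);

	if (!edid)
		/* No EDID != no display: expose a conservative
		 * fallback mode instead of treating it as unplugged. */
		return drm_add_modes_noedid(connector, 1024, 768);

	drm_connector_update_edid_property(connector, edid);
	return drm_add_edid_modes(connector, edid);
}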

[PATCH] drm/amd/pm: fix the compile warning

2022-04-24 Thread Evan Quan
Fix the compile warning below:
drivers/gpu/drm/amd/amdgpu/../pm/legacy-dpm/kv_dpm.c:1641 kv_get_acp_boot_level()
warn: always true condition '(table->entries[i]->clk >= 0) => (0-u32max >= 0)'
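
(Not part of the patch: a tiny standalone C example of why the checker flags
this - clk has an unsigned type (u32), so the ">= 0" comparison can never be
false.)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t clk = 0;	/* same unsigned type as the table's clk */

	/* An unsigned value is never negative, so this condition is
	 * always true - exactly what the robot's warning says. */
	if (clk >= 0)
		printf("always true for any u32 value\n");
	return 0;
}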

Reported-by: kernel test robot 
CC: Alex Deucher 
Signed-off-by: Evan Quan 
Change-Id: If4985252017023d6711b4d7eb1192a397baff813
---
 drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
index 8b23cc9f098a..cab948118d4b 100644
--- a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+++ b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
@@ -1623,6 +1623,7 @@ static int kv_update_samu_dpm(struct amdgpu_device *adev, bool gate)
 
 static u8 kv_get_acp_boot_level(struct amdgpu_device *adev)
 {
+#if 0
 	u8 i;
 	struct amdgpu_clock_voltage_dependency_table *table =
 		&adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table;
@@ -1636,6 +1637,8 @@ static u8 kv_get_acp_boot_level(struct amdgpu_device *adev)
 		i = table->count - 1;
 
 	return i;
+#endif
+	return 0;
 }
 
 static void kv_update_acp_boot_level(struct amdgpu_device *adev)
-- 
2.29.0



RE: [PATCH] drm/amd/display: fix if == else warning

2022-04-24 Thread Liu, Zhan
[AMD Official Use Only - General]

> -----Original Message-----
> From: Guo Zhengkui 
> Sent: 2022/April/24, Sunday 5:06 AM
> To: Wentland, Harry ; Li, Sun peng (Leo)
> ; Siqueira, Rodrigo ;
> Deucher, Alexander ; Koenig, Christian
> ; Pan, Xinhui ; David Airlie
> ; Daniel Vetter ; Liu, Charlene
> ; Lei, Jun ; Guo Zhengkui
> ; Liu, Zhan ; José Expósito
> ; open list:AMD DISPLAY CORE <amd-g...@lists.freedesktop.org>; open
> list:DRM DRIVERS <dri-de...@lists.freedesktop.org>; open list 
> Cc: zhengkui_...@outlook.com
> Subject: [PATCH] drm/amd/display: fix if == else warning
>
> Fix the following coccicheck warning:
>
> drivers/gpu/drm/amd/display/dc/dcn201/dcn201_hwseq.c:98:8-10:
> WARNING: possible condition with no effect (if == else)
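
(Not from the patch: a tiny standalone C example of the pattern coccicheck
flags - classify() is a made-up stand-in for gpu_addr_to_uma(); the else-if
assigns the same value as the final else, so the branch has no effect.)

#include <stdbool.h>
#include <stdio.h>

static bool classify(unsigned long long addr)
{
	bool is_in_uma;

	if (addr == 0)		/* redundant: same result as the else */
		is_in_uma = false;
	else
		is_in_uma = false;
	return is_in_uma;
}

int main(void)
{
	/* Both calls return false; the first branch changes nothing. */
	printf("%d %d\n", classify(0), classify(42));
	return 0;
}
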
>
> Signed-off-by: Guo Zhengkui 

Thanks a lot for fixing this warning.

Reviewed-by: Zhan Liu 

> ---
>  drivers/gpu/drm/amd/display/dc/dcn201/dcn201_hwseq.c | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_hwseq.c
> index fe22530242d2..05b3fba9ccce 100644
> --- a/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_hwseq.c
> +++ b/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_hwseq.c
> @@ -95,8 +95,6 @@ static void gpu_addr_to_uma(struct dce_hwseq *hwseq,
> 	} else if (hwseq->fb_offset.quad_part <= addr->quad_part &&
> 			addr->quad_part <= hwseq->uma_top.quad_part) {
> 		is_in_uma = true;
> -	} else if (addr->quad_part == 0) {
> -		is_in_uma = false;
> 	} else {
> 		is_in_uma = false;
> 	}
> --
> 2.20.1