Re: [PATCH] drm/amd/display: fix IPX enablement

2024-03-22 Thread Kazlauskas, Nicholas
[Public]

Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


From: Mahfooz, Hamza 
Sent: Friday, March 22, 2024 2:56 PM
To: amd-gfx@lists.freedesktop.org 
Cc: Kazlauskas, Nicholas ; Li, Roman 
; Li, Sun peng (Leo) ; Wentland, Harry 
; Deucher, Alexander ; 
Limonciello, Mario ; Siqueira, Rodrigo 
; Mahfooz, Hamza ; Broadworth, 
Mark 
Subject: [PATCH] drm/amd/display: fix IPX enablement

We need to re-enable idle power optimizations after entering PSR, since
we get kicked out of idle power optimizations before entering PSR
(entering PSR requires us to write to DCN registers, which isn't allowed
while we are in IPS).

Fixes: bfe4f0b0e717 ("drm/amd/display: Add more checks for exiting idle in DC")
Tested-by: Mark Broadworth 
Signed-off-by: Hamza Mahfooz 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c | 8 +++++---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h | 2 +-
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
index a48a79e84e82..bfa090432ce2 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
@@ -141,9 +141,8 @@ bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream)
  * amdgpu_dm_psr_enable() - enable psr f/w
  * @stream: stream state
  *
- * Return: true if success
  */
-bool amdgpu_dm_psr_enable(struct dc_stream_state *stream)
+void amdgpu_dm_psr_enable(struct dc_stream_state *stream)
 {
 struct dc_link *link = stream->link;
 unsigned int vsync_rate_hz = 0;
@@ -190,7 +189,10 @@ bool amdgpu_dm_psr_enable(struct dc_stream_state *stream)
 if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1)
 power_opt |= psr_power_opt_z10_static_screen;

-	return dc_link_set_psr_allow_active(link, &psr_enable, false, false, &power_opt);
+	dc_link_set_psr_allow_active(link, &psr_enable, false, false, &power_opt);
+
+   if (link->ctx->dc->caps.ips_support)
+   dc_allow_idle_optimizations(link->ctx->dc, true);
 }

 /*
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
index 6806b3c9c84b..1fdfd183c0d9 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
@@ -32,7 +32,7 @@
 #define AMDGPU_DM_PSR_ENTRY_DELAY 5

 void amdgpu_dm_set_psr_caps(struct dc_link *link);
-bool amdgpu_dm_psr_enable(struct dc_stream_state *stream);
+void amdgpu_dm_psr_enable(struct dc_stream_state *stream);
 bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream);
 bool amdgpu_dm_psr_disable(struct dc_stream_state *stream);
 bool amdgpu_dm_psr_disable_all(struct amdgpu_display_manager *dm);
--
2.44.0



Re: [PATCH] drm/amdgpu/display: flush the HDP when setting up DMCUB firmware

2022-04-28 Thread Kazlauskas, Nicholas
[Public]

This bug previously existed, and we have a solution in place for it.

The solution we picked was to force a stall through reading back the memory. 
You'll see this implemented in dmub_srv.c and the dmub_cmd.h header - through 
use of a volatile read over the region written. We do this for both the initial 
allocation for the cache windows and on every command submission to ensure 
DMCUB doesn't wake up before the writes are in VRAM.

The issue on dGPU is the latency through the HDP path, but on APU the issue is 
out of order writes. We saw this problem on both DCN30/DCN21 when DMCUB was 
first introduced.

The writes we do happen within dmub_hw_init and on every command execution, but 
this patch adds the flush before HW init. I think the only issue this 
potentially fixes is the initial writeout in the SW PSP code to VRAM, but they 
already have flushes in place for that. At the very least, the signature 
validation would cause firmware to fail to load if the writes hadn't landed.

So from a correctness perspective I don't think this patch causes any issue, 
but from a performance perspective this probably adds at least 100us to boot, 
if not more. My recommendation is to leave things as-is for now.

Regards,
Nicholas Kazlauskas

From: amd-gfx  on behalf of Alex Deucher 

Sent: Thursday, April 28, 2022 6:13 PM
To: amd-gfx@lists.freedesktop.org 
Cc: Deucher, Alexander 
Subject: [PATCH] drm/amdgpu/display: flush the HDP when setting up DMCUB 
firmware

When data is written to VRAM via the PCI BAR, the data goes
through a block called HDP which has a write queue and a
read cache.  When the driver writes to VRAM, it needs to flush
the HDP write queue to make sure all the data written has
actually hit VRAM.

When we write the DMCUB firmware to vram, we never flushed the
HDP.  In theory this could cause DMCUB errors if we try and
start the DMCUB firmware without making sure the data has hit
memory.

This doesn't fix any known issues, but is the right thing to do.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index a6c3e1d74124..5c1fd3a91cd5 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1133,6 +1133,10 @@ static int dm_dmub_hw_init(struct amdgpu_device *adev)
 break;
 }

+   /* flush HDP */
+   mb();
+   amdgpu_device_flush_hdp(adev, NULL);
+
 status = dmub_srv_hw_init(dmub_srv, &hw_params);
 if (status != DMUB_STATUS_OK) {
 DRM_ERROR("Error initializing DMUB HW: %d\n", status);
--
2.35.1



RE: [RFC PATCH] drm/amd/display: dont ignore alpha property

2022-03-28 Thread Kazlauskas, Nicholas
[AMD Official Use Only]

> -Original Message-
> From: Melissa Wen 
> Sent: Friday, March 25, 2022 4:45 PM
> To: amd-gfx@lists.freedesktop.org; Wentland, Harry
> ; Deucher, Alexander
> ; Siqueira, Rodrigo
> ; Kazlauskas, Nicholas
> ; Gutierrez, Agustin
> ; Liu, Zhan 
> Cc: dri-de...@lists.freedesktop.org; Simon Ser 
> Subject: [RFC PATCH] drm/amd/display: dont ignore alpha property
> Importance: High
>
> Hi all,
>
> I'm examining the IGT kms_plane_alpha_blend test, specifically the
> alpha-7efc. It fails on AMD and Intel gen8 hw, but passes on Intel
> gen11. At first, I thought it was a rounding issue. In fact, it may be
> the problem for different results between intel hw generations.
>
> However, I changed the test locally to compare CRCs for all alpha values
> in the range before the test fails. Interestingly, it fails for all
> values when running on AMD, even when comparing planes with zero alpha
> (fully transparent). Moreover, I see the same CRC values regardless of
> the value set in the alpha property.
>
> To ensure that the blending mode is as expected, I explicitly set the
> Pre-multiplied blending mode in the test. Then I tried using different
> framebuffer data and alpha values. I've tried obvious comparisons too,
> such as fully opaque and fully transparent.
>
> As far as I could verify and understand, the value set for the ALPHA
> property is totally ignored by AMD drivers. I'm not sure if this is a
> matter of how we interpret the meaning of the premultiplied blend mode
> or the driver's assumptions about the data in these blend modes.
> For example, I made a change in the test as here:
> https://paste.debian.net/1235620/
> That basically means same framebuffer, but different alpha values for
> each plane. And the result was successful (but I expected it to fail).
>

The intent was that we don't enable global plane alpha along with anything that 
requires per pixel alpha.

The HW does have bits to specify a mode that's intended to work like this, but 
I don't think we've ever fully supported it in software.

I wouldn't necessarily expect that the blending result is correct, but maybe 
the IGT test result says otherwise.

> Besides that, I see that other subtests in kms_plane_alpha_blend are
> skipped, use "None" pixel blend mode, or are not changing the
> IGT_PLANE_ALPHA property. So, this alpha-7efc seems to be the only one
> in the subset that is checking changes on alpha property under a
> Pre-multiplied blend mode, and it is failing.
>
> I see some inputs in this issue:
> https://gitlab.freedesktop.org/drm/amd/-/issues/1769.
> But them, I guessed there are different interpretations for handling
> plane alpha in the pre-multiplied blend mode. Tbh, I'm not clear, but
> there's always a chance of different interpretations, and I don't have
> a third driver with CRC capabilities for further comparisons.
>
> I made some experiments on blnd_cfg values, changing alpha_mode vs
> global_gain and global_alpha. I think the expected behaviour for the
> Pre-multiplied blend mode is achieved by applying this RFC patch (for
> Cezanne).
>
> Does it seems reasonable? Can anyone help me with more inputs to guide
> me the right direction or point out what I misunderstood about these
> concepts?
>
> Thanks,
>
> Signed-off-by: Melissa Wen 
> ---
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 2 +-
>  drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 4 ++++
>  2 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 6633df7682ce..821ffafa441e 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -5438,7 +5438,7 @@ fill_blending_from_plane_state(const struct
> drm_plane_state *plane_state,
>
>   if (plane_state->alpha < 0x) {
>   *global_alpha = true;
> - *global_alpha_value = plane_state->alpha >> 8;
> + *global_alpha_value = plane_state->alpha;

Isn't the original behavior here correct? The value into DC should only be an 
8-bit value but we have 16-bit precision from the DRM property. This is 
truncating the bits that we don't support.

>   }
>  }
>
> diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
> b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
> index 4290eaf11a04..b4888f91a9d0 100644
> --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
> +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
> @@ -2367,6 +2367,10 @@ void dcn20_update_mpcc(str

Re: [PATCH] drm/amd/display: Cap pflip irqs per max otg number

2022-02-03 Thread Kazlauskas, Nicholas

On 2/3/2022 5:14 PM, roman...@amd.com wrote:

From: Roman Li 

[Why]
pflip interrupts are mapped 1 to 1 to otg id.
e.g. if irq_src=26 corresponds to otg0 then 27->otg1, 28->otg2...

Linux DM registers pflip interrupts per number of crtcs.
In the fused pipe case, crtc numbers can be less than the otg id.

e.g. if one pipe out of 3 (otg#0-2) is fused, adev->mode_info.num_crtc=2,
so DM only registers irq_src 26,27.
This is a bug, since if pipe#2 remains unfused DM never gets the
otg2 pflip interrupt (irq_src=28).
That may result in a gfx failure due to pflip timeout.

[How]
Register pflip interrupts per max num of otg instead of num_crtc

Signed-off-by: Roman Li 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +-
  drivers/gpu/drm/amd/display/dc/core/dc.c  | 2 ++
  drivers/gpu/drm/amd/display/dc/dc.h   | 1 +
  3 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 8f53c9f..10ca3fc 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3646,7 +3646,7 @@ static int dcn10_register_irq_handlers(struct amdgpu_device *adev)
  
  	/* Use GRPH_PFLIP interrupt */

for (i = DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT;
-		i <= DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT + adev->mode_info.num_crtc - 1;
+		i <= DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT + dc->caps.max_otg_num - 1;
 		i++) {
 		r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_DCE, i, &adev->pageflip_irq);
if (r) {
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 1d9404f..70a0b89 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -1220,6 +1220,8 @@ struct dc *dc_create(const struct dc_init_data *init_params)
  
  		dc->caps.max_dp_protocol_version = DP_VERSION_1_4;
  
+		dc->caps.max_otg_num = dc->res_pool->res_cap->num_timing_generator;
+
if (dc->res_pool->dmcu != NULL)
			dc->versions.dmcu_version = dc->res_pool->dmcu->dmcu_version;
}
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 69d264d..af05877 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -200,6 +200,7 @@ struct dc_caps {
bool edp_dsc_support;
bool vbios_lttpr_aware;
bool vbios_lttpr_enable;
+   uint32_t max_otg_num;
  };
  
  struct dc_bug_wa {




Re: [PATCH] drm/amd/display: Copy crc_skip_count when duplicating CRTC state

2022-01-18 Thread Kazlauskas, Nicholas

On 1/18/2022 11:40 AM, Rodrigo Siqueira wrote:

From: Leo Li 

[Why]
crc_skip_count is used to track how many frames to skip to allow the OTG
CRC engine to "warm up" before it outputs correct CRC values.
Experimentally, this seems to be 2 frames.

When duplicating CRTC states, this value was not copied to the
duplicated state. Therefore, when this state is committed, we will
needlessly wait 2 frames before outputting CRC values, even if the CRC
engine is already warmed up.
[How]
Copy the crc_skip_count as part of dm_crtc_duplicate_state.


This likely introduces regressions.

Here's an example case where it can take two frames even after the CRTC 
is enabled:


1. VUPDATE is before line 0, in the front porch, counter=0
2. Flip arrives before VUPDATE is signaled, but does not finish 
programming until after VUPDATE point, counter=0.

3. Vblank counter increments, counter=1.
4. Flip programming finishes, counter=1.
5. OS delay happens, cursor programming is delayed, counter=1.
6. Cursor programming starts, counter=1.
7. VUPDATE fires, updating frame but missing cursor, counter=1.
8. Cursor programming finishes, counter=2.
9. Cursor programming pending for counter=2.

This is a little contrived, but I've seen something similar happen 
during IGT testing before.


This is because cursor update happens independent of the rest of plane 
programming and is tied to a separate lock. That lock part can't change 
due to potential for stuttering, but the first part could be fixed.


Regards,
Nicholas Kazlauskas



Cc: Mark Yacoub 
Cc: Hayden Goodfellow 
Cc: Harry Wentland 
Cc: Nicholas Choi 

Signed-off-by: Leo Li 
Signed-off-by: Rodrigo Siqueira 
---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 87299e62fe12..5482b0925396 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -6568,6 +6568,7 @@ dm_crtc_duplicate_state(struct drm_crtc *crtc)
state->freesync_config = cur->freesync_config;
state->cm_has_degamma = cur->cm_has_degamma;
state->cm_is_degamma_srgb = cur->cm_is_degamma_srgb;
+   state->crc_skip_count = cur->crc_skip_count;
state->force_dpms_off = cur->force_dpms_off;
	/* TODO Duplicate dc_stream after objects are stream object is flattened */
  




RE: [PATCH v5] drm/amd/display: Revert W/A for hard hangs on DCN20/DCN21

2022-01-14 Thread Kazlauskas, Nicholas
[Public]

> -Original Message-
> From: Limonciello, Mario 
> Sent: January 14, 2022 10:38 AM
> To: Chris Hixon ; Kazlauskas, Nicholas
> ; amd-gfx@lists.freedesktop.org
> Cc: Zhuo, Qingqing (Lillian) ; Scott Bruce
> ; spassw...@web.de
> Subject: RE: [PATCH v5] drm/amd/display: Revert W/A for hard hangs on
> DCN20/DCN21
> Importance: High
>
> [AMD Official Use Only]
>
> > >
> > >
> > >> I think the revert is fine once we figure out where we're missing calls 
> > >> to:
> > >>
> > >>  .optimize_pwr_state = dcn21_optimize_pwr_state,
> > >>  .exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
> > >>
> > >> These are already part of dc_link_detect, so I suspect there's another
> > >> interface in DC that should be using these.
> > >>
> > >> I think the best way to debug this is to revert the patch locally and
> > >> add a stack dump when DMCUB hangs or times out.
> > > OK so I did this on top of amd-staging-drm-next with my v5 patch
> > > (this revert in place)
> > >
> > > diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
> > > index 9280f2abd973..0bd32f82f3db 100644
> > > --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
> > > +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
> > > @@ -789,8 +789,10 @@ enum dmub_status dmub_srv_cmd_with_reply_data(struct dmub_srv *dmub,
> > >  // Execute command
> > >  status = dmub_srv_cmd_execute(dmub);
> > >
> > > -   if (status != DMUB_STATUS_OK)
> > > +   if (status != DMUB_STATUS_OK) {
> > > +   ASSERT(0);
> > >  return status;
> > > +   }
> > >
> > >  // Wait for DMUB to process command
> > >  status = dmub_srv_wait_for_idle(dmub, 10);
> > >
> > >> That way you can know where the PHY was trying to be accessed
> > >> without the refclk being on.
> > >>
> > >> We had a similar issue in DCN31 which didn't require a W/A like DCN21.
> > >>
> > >> I'd like to hold off on merging this until that hang is verified as gone.
> > >>
> > > Then I took a RN laptop running DMUB 0x01010019 and disabled eDP, and
> > > confirmed no CRTC was configured but plugged in an HDMI cable:
> > >
> > > connector[78]: eDP-1
> > >  crtc=(null)
> > >  self_refresh_aware=0
> > > connector[85]: HDMI-A-1
> > >  crtc=crtc-1
> > >  self_refresh_aware=0
> > >
> > > I triggered 100 hotplugs like this:
> > >
> > > #!/bin/bash
> > > for i in {0..100..1}
> > > do
> > >  echo 1 | tee /sys/kernel/debug/dri/0/HDMI-A-1/trigger_hotplug
> > >  sleep 3
> > > done
> > >
> > > Unfortunately, no hang or traceback to be seen (and HDMI continues to
> > > work). I also manually pulled the plug a handful of times. I don't know
> > > the specifics of the failure Lillian had, though, so this might not be
> > > a good enough check.
> > >
> > > I'll try to upgrade DMUB to 0x101001c (the latest version) and double
> > > check that as well.
> >
> > I applied patch v5 and the above ASSERT patch, on top of both Linux
> > 5.16-rc8 and 5.16.
> >
> > Result: no problems with suspend/resume, 16+ cycles.
> >
> > As far as the hang goes:
> >
> > I plugged in an HDMI cable connected to my TV, and configured Gnome to
> > use the external display only.
> >
> > connectors from /sys/kernel/debug/dri/0/state:
> >
> > connector[78]: eDP-1
> >  crtc=(null)
> >  self_refresh_aware=0
> > connector[85]: HDMI-A-1
> >  crtc=crtc-1
> >  self_refresh_aware=0
> > connector[89]: DP-1
> >  crtc=(null)
> >  self_refresh_aware=0
> >
> > I manually unplugged/plugged the HDMI cable 16+ times, and also ran:
> >
> > $ sudo sh -c 'for ((i=0;i<100;i++)); do echo 1 | tee
> > /sys/kernel/debug/dri/0/HDMI-A-1/trigger_hotplug; sleep 3; done'
> >
> > The system did not hang, and I saw no kernel log output from the ASSERT.
> >
> > I also tried a USB-C dock with an HDMI port, with the same results,
> > though there are other issues with this (perhaps 

Re: [PATCH] drm/amd/display: reset dcn31 SMU mailbox on failures

2022-01-07 Thread Kazlauskas, Nicholas

On 2022-01-07 4:40 p.m., Mario Limonciello wrote:

Otherwise future commands may fail as well leading to downstream
problems that look like they stemmed from a timeout the first time
but really didn't.

Signed-off-by: Mario Limonciello 


I guess we used to do this, but after we started adding the 
wait_for_response prior to sending the command, this was ignored.


Should be fine.

Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
   drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c | 6 ++++++
  1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
index 8c2b77eb9459..162ae7186124 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
@@ -119,6 +119,12 @@ int dcn31_smu_send_msg_with_param(
  
  	result = dcn31_smu_wait_for_response(clk_mgr, 10, 20);
  
+	if (result == VBIOSSMC_Result_Failed) {
+		ASSERT(0);
+		REG_WRITE(MP1_SMN_C2PMSG_91, VBIOSSMC_Result_OK);
+		return -1;
+	}
+
if (IS_SMU_TIMEOUT(result)) {
ASSERT(0);
dm_helpers_smu_timeout(CTX, msg_id, param, 10 * 20);




RE: [PATCH v5] drm/amd/display: Revert W/A for hard hangs on DCN20/DCN21

2022-01-07 Thread Kazlauskas, Nicholas
[AMD Official Use Only]

> -Original Message-
> From: Limonciello, Mario 
> Sent: January 7, 2022 11:50 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Limonciello, Mario ; Kazlauskas, Nicholas
> ; Zhuo, Qingqing (Lillian)
> ; Scott Bruce ; Chris
> Hixon ; spassw...@web.de
> Subject: [PATCH v5] drm/amd/display: Revert W/A for hard hangs on
> DCN20/DCN21
> Importance: High
>
> The WA from commit 2a50edbf10c8 ("drm/amd/display: Apply w/a for hard
> hang
> on HPD") and commit 1bd3bc745e7f ("drm/amd/display: Extend w/a for hard
> hang on HPD to dcn20") causes a regression in s0ix where the system will
> fail to resume properly on many laptops.  Pull the workarounds out to
> avoid that s0ix regression in the common case.  This HPD hang happens with
> an external device and a new W/A will need to be developed for this in the
> future.
>
> Cc: Kazlauskas Nicholas 
> Cc: Qingqing Zhuo 
> Reported-by: Scott Bruce 
> Reported-by: Chris Hixon 
> Reported-by: spassw...@web.de
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=215436
> Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1821
> Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1852
> Fixes: 2a50edbf10c8 ("drm/amd/display: Apply w/a for hard hang on HPD")
> Fixes: 1bd3bc745e7f ("drm/amd/display: Extend w/a for hard hang on HPD to
> dcn20")
> Signed-off-by: Mario Limonciello 

I think the revert is fine once we figure out where we're missing calls to:

.optimize_pwr_state = dcn21_optimize_pwr_state,
.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,

These are already part of dc_link_detect, so I suspect there's another 
interface in DC that should be using these.

I think the best way to debug this is to revert the patch locally and add a 
stack dump when DMCUB hangs or times out.

That way you can know where the PHY was trying to be accessed without the 
refclk being on.

We had a similar issue in DCN31 which didn't require a W/A like DCN21.

I'd like to hold off on merging this until that hang is verified as gone.

Regards,
Nicholas Kazlauskas

> ---
>  .../display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c  | 11 +---
>  .../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 11 +---
>  .../display/dc/irq/dcn20/irq_service_dcn20.c  | 25 ---
>  .../display/dc/irq/dcn20/irq_service_dcn20.h  |  2 --
>  .../display/dc/irq/dcn21/irq_service_dcn21.c  | 25 ---
>  .../display/dc/irq/dcn21/irq_service_dcn21.h  |  2 --
>  .../gpu/drm/amd/display/dc/irq/irq_service.c  |  2 +-
>  .../gpu/drm/amd/display/dc/irq/irq_service.h  |  4 ---
>  8 files changed, 3 insertions(+), 79 deletions(-)
>
> diff --git
> a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
> b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
> index 9f35f2e8f971..cac80ba69072 100644
> --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
> +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
> @@ -38,7 +38,6 @@
>  #include "clk/clk_11_0_0_offset.h"
>  #include "clk/clk_11_0_0_sh_mask.h"
>
> -#include "irq/dcn20/irq_service_dcn20.h"
>
>  #undef FN
>  #define FN(reg_name, field_name) \
> @@ -223,8 +222,6 @@ void dcn2_update_clocks(struct clk_mgr
> *clk_mgr_base,
>   bool force_reset = false;
>   bool p_state_change_support;
>   int total_plane_count;
> - int irq_src;
> - uint32_t hpd_state;
>
>   if (dc->work_arounds.skip_clock_update)
>   return;
> @@ -242,13 +239,7 @@ void dcn2_update_clocks(struct clk_mgr
> *clk_mgr_base,
>   if (dc->res_pool->pp_smu)
>   pp_smu = &dc->res_pool->pp_smu->nv_funcs;
>
> - for (irq_src = DC_IRQ_SOURCE_HPD1; irq_src <=
> DC_IRQ_SOURCE_HPD6; irq_src++) {
> - hpd_state = dc_get_hpd_state_dcn20(dc->res_pool->irqs,
> irq_src);
> - if (hpd_state)
> - break;
> - }
> -
> - if (display_count == 0 && !hpd_state)
> + if (display_count == 0)
>   enter_display_off = true;
>
>   if (enter_display_off == safe_to_lower) {
> diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
> b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
> index fbda42313bfe..f4dee0e48a67 100644
> --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
> +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
> @@ -42,7 +42,6 @@
>  #include "clk/clk_10_0_2_sh_mask.h"
>  #include "renoir_ip_offset.h"
>
> -#include "irq/dcn21/irq_service_dcn21.h"
>
>  /* Constants */
>
> @@ -129,11

Re: [PATCH v1] drm/amd/display: Add Debugfs Entry to Force in SST Sequence

2021-12-07 Thread Kazlauskas, Nicholas

On 2021-12-07 1:55 p.m., Fangzhi Zuo wrote:

It is a w/a to check DP2 SST behavior on the M42d box.


Isn't this useful beyond just the m42d/dp2?

This should affect regular DP MST support, I think. Adding this debug 
flag is okay, but the names should be updated (inline).




Signed-off-by: Fangzhi Zuo 
---
  .../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c | 27 +++
  1 file changed, 27 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
index 31c05eb5c64a..9590c0acba1f 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
@@ -3237,6 +3237,30 @@ static int disable_hpd_get(void *data, u64 *val)
  DEFINE_DEBUGFS_ATTRIBUTE(disable_hpd_ops, disable_hpd_get,
 disable_hpd_set, "%llu\n");
  
+/*
+ * w/a to force in SST mode for M42D DP2 receiver.
+ * Example usage: echo 1 > /sys/kernel/debug/dri/0/amdgpu_dm_dp2_force_sst
+ */
+static int dp2_force_sst_set(void *data, u64 val)
+{
+   struct amdgpu_device *adev = data;
+
+   adev->dm.dc->debug.set_mst_en_for_sst = val;
+
+   return 0;
+}
+
+static int dp2_force_sst_get(void *data, u64 *val)
+{
+   struct amdgpu_device *adev = data;
+
+   *val = adev->dm.dc->debug.set_mst_en_for_sst;
+
+   return 0;
+}
+DEFINE_DEBUGFS_ATTRIBUTE(dp2_force_sst_ops, dp2_force_sst_get,
+dp2_force_sst_set, "%llu\n");
+
  /*
   * Sets the DC visual confirm debug option from the given string.
   * Example usage: echo 1 > /sys/kernel/debug/dri/0/amdgpu_visual_confirm
@@ -3371,4 +3395,7 @@ void dtn_debugfs_init(struct amdgpu_device *adev)
debugfs_create_file_unsafe("amdgpu_dm_disable_hpd", 0644, root, adev,
   &disable_hpd_ops);
  
+	debugfs_create_file_unsafe("amdgpu_dm_dp2_force_sst", 0644, root, adev,
+				   &dp2_force_sst_ops);


"amdgpu_dm_dp_set_mst_en_for_sst"

...might be a better name.

Regards,
Nicholas Kazlauskas


+
  }





Re: [PATCH v2] drm/amd/display: Use oriented source size when checking cursor scaling

2021-12-02 Thread Kazlauskas, Nicholas

On 2021-12-02 7:52 a.m., Vlad Zahorodnii wrote:

dm_check_crtc_cursor() doesn't take into account plane transforms when
calculating plane scaling, this can result in false positives.

For example, if there's an output with resolution 3840x2160 and the
output is rotated 90 degrees, CRTC_W and CRTC_H will be 3840 and 2160,
respectively, but SRC_W and SRC_H will be 2160 and 3840, respectively.

Since the cursor plane usually has a square buffer attached to it, the
dm_check_crtc_cursor() will think that there's a scale factor mismatch
even though there isn't really.

This fixes an issue where kwin fails to use hardware plane transforms.

Changes since version 1:
- s/orientated/oriented/g

Signed-off-by: Vlad Zahorodnii 


This looks correct to me. I guess it's also not modifying the actual 
programming position, just the check to ensure that the cursor is going 
to be unscaled in the correct orientation.


Would be good to have some IGT tests for these scaled cases to verify 
atomic check pass/fail assumptions, but for now:


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 35 ++-
  1 file changed, 27 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index a3c0f2e4f4c1..c009c668fbe2 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -10736,6 +10736,24 @@ static int dm_update_plane_state(struct dc *dc,
return ret;
  }
  
+static void dm_get_oriented_plane_size(struct drm_plane_state *plane_state,
+				       int *src_w, int *src_h)
+{
+   switch (plane_state->rotation & DRM_MODE_ROTATE_MASK) {
+   case DRM_MODE_ROTATE_90:
+   case DRM_MODE_ROTATE_270:
+   *src_w = plane_state->src_h >> 16;
+   *src_h = plane_state->src_w >> 16;
+   break;
+   case DRM_MODE_ROTATE_0:
+   case DRM_MODE_ROTATE_180:
+   default:
+   *src_w = plane_state->src_w >> 16;
+   *src_h = plane_state->src_h >> 16;
+   break;
+   }
+}
+
  static int dm_check_crtc_cursor(struct drm_atomic_state *state,
struct drm_crtc *crtc,
struct drm_crtc_state *new_crtc_state)
@@ -10744,6 +10762,8 @@ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
struct drm_plane_state *new_cursor_state, *new_underlying_state;
int i;
	int cursor_scale_w, cursor_scale_h, underlying_scale_w, underlying_scale_h;
+   int cursor_src_w, cursor_src_h;
+   int underlying_src_w, underlying_src_h;
  
	/* On DCE and DCN there is no dedicated hardware cursor plane. We get a
	 * cursor per pipe but it's going to inherit the scaling and
@@ -10755,10 +10775,9 @@ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
return 0;
}
  
-	cursor_scale_w = new_cursor_state->crtc_w * 1000 /
-			 (new_cursor_state->src_w >> 16);
-	cursor_scale_h = new_cursor_state->crtc_h * 1000 /
-			 (new_cursor_state->src_h >> 16);
+	dm_get_oriented_plane_size(new_cursor_state, &cursor_src_w, &cursor_src_h);
+   cursor_scale_w = new_cursor_state->crtc_w * 1000 / cursor_src_w;
+   cursor_scale_h = new_cursor_state->crtc_h * 1000 / cursor_src_h;
  
  	for_each_new_plane_in_state_reverse(state, underlying, new_underlying_state, i) {

		/* Narrow down to non-cursor planes on the same CRTC as the cursor */
@@ -10769,10 +10788,10 @@ static int dm_check_crtc_cursor(struct 
drm_atomic_state *state,
if (!new_underlying_state->fb)
continue;
  
-		underlying_scale_w = new_underlying_state->crtc_w * 1000 /
-				     (new_underlying_state->src_w >> 16);
-		underlying_scale_h = new_underlying_state->crtc_h * 1000 /
-				     (new_underlying_state->src_h >> 16);
+		dm_get_oriented_plane_size(new_underlying_state,
+					   &underlying_src_w, &underlying_src_h);
+		underlying_scale_w = new_underlying_state->crtc_w * 1000 / underlying_src_w;
+		underlying_scale_h = new_underlying_state->crtc_h * 1000 / underlying_src_h;
  
		if (cursor_scale_w != underlying_scale_w ||
		    cursor_scale_h != underlying_scale_h) {





Re: [PATCH v2] drm/amd/display: Add DP-HDMI FRL PCON Support in DC

2021-11-26 Thread Kazlauskas, Nicholas

On 2021-11-26 9:32 a.m., Fangzhi Zuo wrote:

Change since v1: add brief description
1. Add hdmi frl pcon support to existing asic family.
2. Determine pcon frl capability based on pcon dpcd.
3. pcon frl is taken into consideration into mode validation.

Signed-off-by: Fangzhi Zuo 


Reviewed-by: Nicholas Kazlauskas 

I think we probably should be using the DP DPCD defines directly instead 
of our own unions, though. Maybe as a cleanup later.


Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dc/core/dc_link.c | 15 
  .../gpu/drm/amd/display/dc/core/dc_link_dp.c  | 71 +++
  drivers/gpu/drm/amd/display/dc/dc.h   |  6 ++
  drivers/gpu/drm/amd/display/dc/dc_dp_types.h  | 31 
  drivers/gpu/drm/amd/display/dc/dc_hw_types.h  |  3 +
  drivers/gpu/drm/amd/display/dc/dc_link.h  |  1 +
  drivers/gpu/drm/amd/display/dc/dc_types.h |  1 +
  .../drm/amd/display/dc/dcn20/dcn20_resource.c |  2 +
  .../drm/amd/display/dc/dcn21/dcn21_resource.c |  2 +
  .../drm/amd/display/dc/dcn30/dcn30_resource.c |  2 +
  .../drm/amd/display/dc/dcn31/dcn31_resource.c |  1 +
  11 files changed, 135 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 3d08f8eba402..dad7a4fdc427 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -2750,8 +2750,23 @@ static bool dp_active_dongle_validate_timing(
return false;
}
  
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+	if (dongle_caps->dp_hdmi_frl_max_link_bw_in_kbps > 0) { // DP to HDMI FRL converter
+		struct dc_crtc_timing outputTiming = *timing;
+
+		if (timing->flags.DSC && !timing->dsc_cfg.is_frl)
+			/* DP input has DSC, HDMI FRL output doesn't have DSC, remove DSC from output timing */
+			outputTiming.flags.DSC = 0;
+		if (dc_bandwidth_in_kbps_from_timing(&outputTiming) > dongle_caps->dp_hdmi_frl_max_link_bw_in_kbps)
+			return false;
+	} else { // DP to HDMI TMDS converter
+		if (get_timing_pixel_clock_100hz(timing) > (dongle_caps->dp_hdmi_max_pixel_clk_in_khz * 10))
+			return false;
+	}
+#else
	if (get_timing_pixel_clock_100hz(timing) > (dongle_caps->dp_hdmi_max_pixel_clk_in_khz * 10))
		return false;
+#endif
  
  #if defined(CONFIG_DRM_AMD_DC_DCN)

}
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 84f3545c3032..da1532356c07 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -4313,6 +4313,56 @@ static int translate_dpcd_max_bpc(enum dpcd_downstream_port_max_bpc bpc)
return -1;
  }
  
+#if defined(CONFIG_DRM_AMD_DC_DCN)

+uint32_t dc_link_bw_kbps_from_raw_frl_link_rate_data(uint8_t bw)
+{
+   switch (bw) {
+   case 0b001:
+   return 900;
+   case 0b010:
+   return 1800;
+   case 0b011:
+   return 2400;
+   case 0b100:
+   return 3200;
+   case 0b101:
+   return 4000;
+   case 0b110:
+   return 4800;
+   }
+
+   return 0;
+}
+
+/**
+ * Return PCON's post FRL link training supported BW if it's non-zero, otherwise return max_supported_frl_bw.
+ */
+static uint32_t intersect_frl_link_bw_support(
+   const uint32_t max_supported_frl_bw_in_kbps,
+   const union hdmi_encoded_link_bw hdmi_encoded_link_bw)
+{
+   uint32_t supported_bw_in_kbps = max_supported_frl_bw_in_kbps;
+
+	// HDMI_ENCODED_LINK_BW bits are only valid if HDMI Link Configuration bit is 1 (FRL mode)
+   if (hdmi_encoded_link_bw.bits.FRL_MODE) {
+   if (hdmi_encoded_link_bw.bits.BW_48Gbps)
+   supported_bw_in_kbps = 4800;
+   else if (hdmi_encoded_link_bw.bits.BW_40Gbps)
+   supported_bw_in_kbps = 4000;
+   else if (hdmi_encoded_link_bw.bits.BW_32Gbps)
+   supported_bw_in_kbps = 3200;
+   else if (hdmi_encoded_link_bw.bits.BW_24Gbps)
+   supported_bw_in_kbps = 2400;
+   else if (hdmi_encoded_link_bw.bits.BW_18Gbps)
+   supported_bw_in_kbps = 1800;
+   else if (hdmi_encoded_link_bw.bits.BW_9Gbps)
+   supported_bw_in_kbps = 900;
+   }
+
+   return supported_bw_in_kbps;
+}
+#endif
+
  static void read_dp_device_vendor_id(struct dc_link *link)
  {
struct dp_device_vendor_id dp_id;
@@ -4424,6 +4474,27 @@ static void get_active_converter_info(
translate_dpcd_max_bpc(

hdmi_color_caps.bits.MAX_BITS_PER_COLOR_COMPONENT);
  

Re: [PATCH v1] drm/amd/display: Add DP-HDMI PCON SST Support

2021-11-24 Thread Kazlauskas, Nicholas

On 2021-11-24 12:28 p.m., Fangzhi Zuo wrote:

1. Parse DSC caps from PCON DPCD
2. Determine policy if decoding DSC at PCON
3. Enable/disable DSC at PCON

Signed-off-by: Fangzhi Zuo 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 41 +++
  .../amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 13 +-
  2 files changed, 44 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 9a1ac657faa2..9dbf6bf3f1c3 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -6047,10 +6047,12 @@ static void update_dsc_caps(struct amdgpu_dm_connector *aconnector,
  
  	if (aconnector->dc_link && (sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT ||

sink->sink_signal == SIGNAL_TYPE_EDP)) {
-		dc_dsc_parse_dsc_dpcd(aconnector->dc_link->ctx->dc,
-				      aconnector->dc_link->dpcd_caps.dsc_caps.dsc_basic_caps.raw,
-				      aconnector->dc_link->dpcd_caps.dsc_caps.dsc_branch_decoder_caps.raw,
-				      dsc_caps);
+		if (sink->link->dpcd_caps.dongle_type == DISPLAY_DONGLE_NONE ||
+		    sink->link->dpcd_caps.dongle_type == DISPLAY_DONGLE_DP_HDMI_CONVERTER)
+			dc_dsc_parse_dsc_dpcd(aconnector->dc_link->ctx->dc,
+					      aconnector->dc_link->dpcd_caps.dsc_caps.dsc_basic_caps.raw,
+					      aconnector->dc_link->dpcd_caps.dsc_caps.dsc_branch_decoder_caps.raw,
+					      dsc_caps);
}
  }
  
@@ -6120,6 +6122,8 @@ static void apply_dsc_policy_for_stream(struct amdgpu_dm_connector *aconnector,

uint32_t link_bandwidth_kbps;
uint32_t max_dsc_target_bpp_limit_override = 0;
struct dc *dc = sink->ctx->dc;
+   uint32_t max_supported_bw_in_kbps, timing_bw_in_kbps;
+   uint32_t dsc_max_supported_bw_in_kbps;
  
  	link_bandwidth_kbps = dc_link_bandwidth_kbps(aconnector->dc_link,


dc_link_get_link_cap(aconnector->dc_link));
@@ -6138,16 +6142,37 @@ static void apply_dsc_policy_for_stream(struct amdgpu_dm_connector *aconnector,
		apply_dsc_policy_for_edp(aconnector, sink, stream, dsc_caps, max_dsc_target_bpp_limit_override);
  
  	} else if (aconnector->dc_link && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT) {

-
-		if (dc_dsc_compute_config(aconnector->dc_link->ctx->dc->res_pool->dscs[0],
+		if (sink->link->dpcd_caps.dongle_type == DISPLAY_DONGLE_NONE) {
+			if (dc_dsc_compute_config(aconnector->dc_link->ctx->dc->res_pool->dscs[0],
						dsc_caps,
						aconnector->dc_link->ctx->dc->debug.dsc_min_slice_height_override,
						max_dsc_target_bpp_limit_override,
						link_bandwidth_kbps,
						&stream->timing,
						&stream->timing.dsc_cfg)) {
-			stream->timing.flags.DSC = 1;
-			DRM_DEBUG_DRIVER("%s: [%s] DSC is selected from SST RX\n", __func__, drm_connector->name);
+				stream->timing.flags.DSC = 1;
+				DRM_DEBUG_DRIVER("%s: [%s] DSC is selected from SST RX\n",
+						 __func__, drm_connector->name);
+			}
+		} else if (sink->link->dpcd_caps.dongle_type == DISPLAY_DONGLE_DP_HDMI_CONVERTER) {
+			timing_bw_in_kbps = dc_bandwidth_in_kbps_from_timing(&stream->timing);
+			max_supported_bw_in_kbps = link_bandwidth_kbps;
+			dsc_max_supported_bw_in_kbps = link_bandwidth_kbps;
+
+			if (timing_bw_in_kbps > max_supported_bw_in_kbps &&
+			    max_supported_bw_in_kbps > 0 &&
+			    dsc_max_supported_bw_in_kbps > 0)
+				if (dc_dsc_compute_config(aconnector->dc_link->ctx->dc->res_pool->dscs[0],
+						dsc_caps,
+						aconnector->dc_link->ctx->dc->debug.dsc_min_slice_height_override,
+						max_dsc_target_bpp_limit_override,
+						dsc_max_supported_bw_in_kbps,
+						&stream->timing,
+						&stream->timing.dsc_cfg)) {
+					stream->timi

Re: [PATCH] drm/amd/display: Reduce dmesg error to a debug print

2021-11-12 Thread Kazlauskas, Nicholas

On 2021-11-12 10:56 a.m., Leo (Hanghong) Ma wrote:

[Why & How]
Dmesg errors are found on dcn3.1 during reset tests, but it's not
really a failure. So reduce it to a debug print.

Signed-off-by: Leo (Hanghong) Ma 


This is expected to occur on displays that aren't connected/don't 
support LTTPR so this is fine.


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index cb7bf9148904..c7785e29b1c0 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -4454,7 +4454,7 @@ bool dp_retrieve_lttpr_cap(struct dc_link *link)
lttpr_dpcd_data,
sizeof(lttpr_dpcd_data));
if (status != DC_OK) {
-		dm_error("%s: Read LTTPR caps data failed.\n", __func__);
+		DC_LOG_DP2("%s: Read LTTPR caps data failed.\n", __func__);
return false;
}
  





Re: [PATCH] drm/amdgpu/display: fix build when CONFIG_DRM_AMD_DC_DCN is not set

2021-10-28 Thread Kazlauskas, Nicholas

On 2021-10-28 10:46 a.m., Alex Deucher wrote:

Ping

On Wed, Oct 27, 2021 at 6:40 PM Alex Deucher  wrote:


Need to guard some things with CONFIG_DRM_AMD_DC_DCN.

Fixes: 0c865d1d817b77 ("drm/amd/display: fix link training regression for 1 or 2 lane")
Signed-off-by: Alex Deucher 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 11 +++
  1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index a9e940bd7e83..49a4d8e85bf8 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -840,9 +840,11 @@ static void override_lane_settings(const struct link_training_settings *lt_setti
 uint32_t lane;

 if (lt_settings->voltage_swing == NULL &&
-   lt_settings->pre_emphasis == NULL &&
-   lt_settings->ffe_preset == NULL &&
-   lt_settings->post_cursor2 == NULL)
+   lt_settings->pre_emphasis == NULL &&
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+   lt_settings->ffe_preset == NULL &&
+#endif
+   lt_settings->post_cursor2 == NULL)

 return;

@@ -853,9 +855,10 @@ static void override_lane_settings(const struct link_training_settings *lt_setti
 			lane_settings[lane].PRE_EMPHASIS = *lt_settings->pre_emphasis;
 		if (lt_settings->post_cursor2)
 			lane_settings[lane].POST_CURSOR2 = *lt_settings->post_cursor2;
-
+#if defined(CONFIG_DRM_AMD_DC_DCN)
 		if (lt_settings->ffe_preset)
 			lane_settings[lane].FFE_PRESET = *lt_settings->ffe_preset;
+#endif
 }
  }

--
2.31.1





Re: [PATCH] drm/amdgpu/display: fix build when CONFIG_DRM_AMD_DC_DCN is not set

2021-10-28 Thread Kazlauskas, Nicholas

On 2021-10-28 10:46 a.m., Alex Deucher wrote:

ping

On Wed, Oct 27, 2021 at 6:40 PM Alex Deucher  wrote:


Need to guard some things with CONFIG_DRM_AMD_DC_DCN.

Fixes: 707021dc0e16f6 ("drm/amd/display: Enable dpia in dmub only for DCN31 B0")
Signed-off-by: Alex Deucher 


Reviewed-by: Nicholas Kazlauskas 

Though this whole function could be guarded by DCN, DMUB doesn't exist 
on DCE.


Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 3f36dbb2c663..6dd6262f2769 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1108,7 +1108,9 @@ static int dm_dmub_hw_init(struct amdgpu_device *adev)
 case CHIP_YELLOW_CARP:
 if (dc->ctx->asic_id.hw_internal_rev != YELLOW_CARP_A0) {
 hw_params.dpia_supported = true;
+#if defined(CONFIG_DRM_AMD_DC_DCN)
                 hw_params.disable_dpia = dc->debug.dpia_debug.bits.disable_dpia;
+#endif
 }
 break;
 default:
--
2.31.1





Re: [PATCH] drm/amd/display: Fix error handling on waiting for completion

2021-10-26 Thread Kazlauskas, Nicholas

On 2021-10-26 11:51 a.m., Michel Dänzer wrote:

On 2021-10-26 13:07, Stylon Wang wrote:

[Why]
In GNOME Settings->Display the switching from mirror mode to single display
occasionally causes wait_for_completion_interruptible_timeout() to return
-ERESTARTSYS and fails atomic check.

[How]
Replace the call with wait_for_completion_timeout() since the waiting for
hw_done and flip_done completion doesn't need to worry about interruption
from signal.

Signed-off-by: Stylon Wang 
---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 4cd64529b180..b8f4ff323de1 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -9844,10 +9844,10 @@ static int do_aquire_global_lock(struct drm_device *dev,
 * Make sure all pending HW programming completed and
 * page flips done
 */
-		ret = wait_for_completion_interruptible_timeout(&commit->hw_done, 10*HZ);
+   ret = wait_for_completion_timeout(&commit->hw_done, 10*HZ);
  
  		if (ret > 0)

-   ret = wait_for_completion_interruptible_timeout(
+   ret = wait_for_completion_timeout(
&commit->flip_done, 10*HZ);
  
  		if (ret == 0)




The *_interruptible_* variant is needed so that the display manager process can 
be killed while it's waiting here, which could take up to 10 seconds (per the 
timeout).

What's the problem with -ERESTARTSYS? Either the ioctl should be restarted 
automatically, or if it bounces back to user space, that needs to be able to 
retry the ioctl while it returns -1 and errno == EINTR. drmIoctl handles this 
transparently.




Thanks for the insight Michel!

If it's just an error in the log without a functional issue then maybe 
we should downgrade it to a debug statement in the case where it returns 
-ERESTARTSYS.


If this is a functional issue (DRM not automatically retrying the 
commit?) then maybe we should take a deeper look into the IOCTL itself.


Regards,
Nicholas Kazlauskas



Re: [PATCH] drm/amd/display: Fix error handling on waiting for completion

2021-10-26 Thread Kazlauskas, Nicholas

On 2021-10-26 7:07 a.m., Stylon Wang wrote:

[Why]
In GNOME Settings->Display the switching from mirror mode to single display
occasionally causes wait_for_completion_interruptible_timeout() to return
-ERESTARTSYS and fails atomic check.

[How]
Replace the call with wait_for_completion_timeout() since the waiting for
hw_done and flip_done completion doesn't need to worry about interruption
from signal.

Signed-off-by: Stylon Wang 


I think this is okay, but I'll write out how I think these work here in 
case anyone has corrections.


Both variants allow the thread to sleep, but the interruptible variant 
can also wake due to signals. These signals are a secondary wakeup event and 
would require us to restart the wait and (probably) keep track of how 
long we were waiting before.


We want wakeup only on completion, so we should be using the 
`wait_for_completion_timeout()` variants instead in most (if not all?) 
cases in our display driver.


This probably has some nuances that matter more for different variants 
of UAPI, but with this understanding I think this is:


Reviewed-by: Nicholas Kazlauskas 

Now, if we could revive that patch series I had from the other year and 
outright drop `do_aquire_global_lock()`...


Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 4cd64529b180..b8f4ff323de1 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -9844,10 +9844,10 @@ static int do_aquire_global_lock(struct drm_device *dev,
 * Make sure all pending HW programming completed and
 * page flips done
 */
-		ret = wait_for_completion_interruptible_timeout(&commit->hw_done, 10*HZ);
+   ret = wait_for_completion_timeout(&commit->hw_done, 10*HZ);
  
  		if (ret > 0)

-   ret = wait_for_completion_interruptible_timeout(
+   ret = wait_for_completion_timeout(
&commit->flip_done, 10*HZ);
  
  		if (ret == 0)






Re: [PATCH 32/33] drm/amd/display: fix link training regression for 1 or 2 lane

2021-10-25 Thread Kazlauskas, Nicholas

On 2021-10-25 9:58 a.m., Harry Wentland wrote:



On 2021-10-25 07:25, Paul Menzel wrote:

Dear Wenjing, dear Rodrigo,


On 24.10.21 15:31, Rodrigo Siqueira wrote:

From: Wenjing Liu 

[why]
We have a regression that causes maximize lane settings to use
uninitialized data from unused lanes.


Which commit caused the regression? Please amend the commit message.


This will cause link training to fail for 1 or 2 lanes because the lane
adjust is populated incorrectly sometimes.


On what card did you test this, and how can it be reproduced?

Please describe the fix/implementation in the commit message.


Reviewed-by: Eric Yang 
Acked-by: Rodrigo Siqueira 
Signed-off-by: Wenjing Liu 
---
   .../gpu/drm/amd/display/dc/core/dc_link_dp.c  | 35 +--
   1 file changed, 32 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 653279ab96f4..f6ba7c734f54 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -108,6 +108,9 @@ static struct dc_link_settings get_common_supported_link_settings(
   struct dc_link_settings link_setting_b);
   static void maximize_lane_settings(const struct link_training_settings *lt_settings,
   struct dc_lane_settings lane_settings[LANE_COUNT_DP_MAX]);
+static void override_lane_settings(const struct link_training_settings 
*lt_settings,
+    struct dc_lane_settings lane_settings[LANE_COUNT_DP_MAX]);
+
   static uint32_t get_cr_training_aux_rd_interval(struct dc_link *link,
   const struct dc_link_settings *link_settings)
   {
@@ -734,15 +737,13 @@ void dp_decide_lane_settings(
   }
   #endif
   }
-
-    /* we find the maximum of the requested settings across all lanes*/
-    /* and set this maximum for all lanes*/
    dp_hw_to_dpcd_lane_settings(lt_settings, hw_lane_settings, dpcd_lane_settings);
     if (lt_settings->disallow_per_lane_settings) {
   /* we find the maximum of the requested settings across all lanes*/
   /* and set this maximum for all lanes*/
   maximize_lane_settings(lt_settings, hw_lane_settings);
+    override_lane_settings(lt_settings, hw_lane_settings);
     if (lt_settings->always_match_dpcd_with_hw_lane_settings)
    dp_hw_to_dpcd_lane_settings(lt_settings, hw_lane_settings, dpcd_lane_settings);
@@ -833,6 +834,34 @@ static void maximize_lane_settings(const struct link_training_settings *lt_setti
   }
   }
   +static void override_lane_settings(const struct link_training_settings 
*lt_settings,
+    struct dc_lane_settings lane_settings[LANE_COUNT_DP_MAX])
+{
+    uint32_t lane;
+
+    if (lt_settings->voltage_swing == NULL &&
+    lt_settings->pre_emphasis == NULL &&
+#if defined(CONFIG_DRM_AMD_DC_DP2_0)
+    lt_settings->ffe_preset == NULL &&
+#endif
+    lt_settings->post_cursor2 == NULL)
+
+    return;
+
+    for (lane = 1; lane < LANE_COUNT_DP_MAX; lane++) {
+    if (lt_settings->voltage_swing)
+    lane_settings[lane].VOLTAGE_SWING = *lt_settings->voltage_swing;
+    if (lt_settings->pre_emphasis)
+    lane_settings[lane].PRE_EMPHASIS = *lt_settings->pre_emphasis;
+    if (lt_settings->post_cursor2)
+    lane_settings[lane].POST_CURSOR2 = *lt_settings->post_cursor2;
+#if defined(CONFIG_DRM_AMD_DC_DP2_0)
+    if (lt_settings->ffe_preset)
+    lane_settings[lane].FFE_PRESET = *lt_settings->ffe_preset;
+#endif


Normally these checks should be done in C and not the preprocessor. `if 
CONFIG(DRM_AMD_DC_DP2_0)` or similar should work.



Interesting. I've never seen this before. Do you have an example or link to a 
doc? A cursory search doesn't yield any results but I might not be searching 
for the right thing.

Harry


I'm curious about this too. The compiler with optimizations should 
remove the constant check, but technically the C standard only permits 
it - it doesn't guarantee that it happens.


However, this patch should actually be changed to drop these 
CONFIG_DRM_AMD_DC_DP2_0 guards - this isn't a Kconfig option nor will 
there be one specifically for DP2. This should be folded under the DCN 
support.


Regards,
Nicholas Kazlauskas




+    }
+}
+
   enum dc_status dp_get_lane_status_and_lane_adjust(
   struct dc_link *link,
   const struct link_training_settings *link_training_setting,




Kind regards,

Paul






Re: [PATCH] drm/amdgpu/display: add yellow carp B0 with rest of driver

2021-10-20 Thread Kazlauskas, Nicholas

On 2021-10-20 9:53 a.m., Alex Deucher wrote:

Fix revision id.

Fixes: 626cbb641f1052 ("drm/amdgpu: support B0&B1 external revision id for yellow carp")
Signed-off-by: Alex Deucher 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/include/dal_asic_id.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/include/dal_asic_id.h b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
index a9974f12f7fb..e4a2dfacab4c 100644
--- a/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+++ b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
@@ -228,7 +228,7 @@ enum {
  #define FAMILY_YELLOW_CARP 146
  
  #define YELLOW_CARP_A0 0x01

-#define YELLOW_CARP_B0 0x1A
+#define YELLOW_CARP_B0 0x20
  #define YELLOW_CARP_UNKNOWN 0xFF
  
  #ifndef ASICREV_IS_YELLOW_CARP






Re: [PATCH 16/27] drm/amd/display: increase Z9 latency to workaround underflow in Z9

2021-10-18 Thread Kazlauskas, Nicholas

On 2021-10-15 7:53 p.m., Mike Lothian wrote:

This patch seems to change z8 - not that I know what z8 or z9 are


It's a little misleading, but the patch and terminology are correct.

Z9 is the usecase for these watermarks even if the calculation is shared 
with Z8/Z9.


Regards,
Nicholas Kazlauskas



On Fri, 15 Oct 2021 at 19:44, Agustin Gutierrez
 wrote:


From: Eric Yang 

[Why]
Z9 latency is higher than when we originally tuned the watermark
parameters, causing underflow. Increasing the value until the latency
issues are resolved.

Reviewed-by: Nicholas Kazlauskas 
Acked-by: Agustin Gutierrez Sanchez 
Signed-off-by: Eric Yang 
---
  drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
index c9d3d691f4c6..12ebd9f8912f 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
@@ -222,8 +222,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_1_soc = {
 .num_states = 5,
 .sr_exit_time_us = 9.0,
 .sr_enter_plus_exit_time_us = 11.0,
-   .sr_exit_z8_time_us = 402.0,
-   .sr_enter_plus_exit_z8_time_us = 520.0,
+   .sr_exit_z8_time_us = 442.0,
+   .sr_enter_plus_exit_z8_time_us = 560.0,
 .writeback_latency_us = 12.0,
 .dram_channel_width_bytes = 4,
 .round_trip_ping_latency_dcfclk_cycles = 106,
--
2.25.1





Re: [PATCH] drm/amd/display: Enable PSR by default on DCN3.1

2021-10-12 Thread Kazlauskas, Nicholas

On 2021-10-11 1:04 a.m., Vishwakarma, Pratik wrote:


On 10/8/2021 9:44 PM, Nicholas Kazlauskas wrote:

[Why]
New idle optimizations for DCN3.1 require PSR for optimal power savings
on panels that support it.

This was previously left disabled by default because of issues with
compositors that do not pageflip and scan out directly to the
frontbuffer.

For these compositors we now have detection methods that wait for x
number of pageflips after a full update - triggered by a buffer or
format change typically.

This may introduce bugs or new cases not tested by users so this is
only currently targeting DCN31.

[How]
Add code in DM to set PSR state by default for DCN3.1 while falling
back to the feature mask for older DCN.

Add a global debug flag that can be set to disable it for either.

Cc: Harry Wentland
Cc: Roman Li
Signed-off-by: Nicholas Kazlauskas
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c   | 17 -
  drivers/gpu/drm/amd/include/amd_shared.h|  5 +++--
  2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index dc595ecec595..ff545503a6ed 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -4031,6 +4031,7 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
int32_t primary_planes;
enum dc_connection_type new_connection_type = dc_connection_none;
const struct dc_plane_cap *plane;
+   bool psr_feature_enabled = false;
  
  	dm->display_indexes_num = dm->dc->caps.max_streams;

/* Update the actual used number of crtc */
@@ -4113,6 +4114,19 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
DRM_DEBUG_KMS("Unsupported DCN IP version for outbox: 0x%X\n",
  adev->ip_versions[DCE_HWIP][0]);
}
+
+   /* Determine whether to enable PSR support by default. */
+   if (!(amdgpu_dc_debug_mask & DC_DISABLE_PSR)) {
+   switch (adev->ip_versions[DCE_HWIP][0]) {
+   case IP_VERSION(3, 1, 2):
+   case IP_VERSION(3, 1, 3):
+   psr_feature_enabled = true;
+   break;
+   default:
+			psr_feature_enabled = amdgpu_dc_feature_mask & DC_PSR_MASK;
+   break;
+   }
+   }
  #endif
  
  	/* loops over all connectors on the board */

@@ -4156,7 +4170,8 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
} else if (dc_link_detect(link, DETECT_REASON_BOOT)) {
amdgpu_dm_update_connector_after_detect(aconnector);
register_backlight_device(dm, link);
-   if (amdgpu_dc_feature_mask & DC_PSR_MASK)
+
+   if (psr_feature_enabled)
amdgpu_dm_set_psr_caps(link);
}
  
diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h

index 257f280d3d53..f1a46d16f7ea 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -228,7 +228,7 @@ enum DC_FEATURE_MASK {
DC_FBC_MASK = (1 << 0), //0x1, disabled by default
DC_MULTI_MON_PP_MCLK_SWITCH_MASK = (1 << 1), //0x2, enabled by default
DC_DISABLE_FRACTIONAL_PWM_MASK = (1 << 2), //0x4, disabled by default
-   DC_PSR_MASK = (1 << 3), //0x8, disabled by default
+   DC_PSR_MASK = (1 << 3), //0x8, disabled by default for dcn < 3.1
DC_EDP_NO_POWER_SEQUENCING = (1 << 4), //0x10, disabled by default
  };
  
@@ -236,7 +236,8 @@ enum DC_DEBUG_MASK {

DC_DISABLE_PIPE_SPLIT = 0x1,
DC_DISABLE_STUTTER = 0x2,
DC_DISABLE_DSC = 0x4,
-   DC_DISABLE_CLOCK_GATING = 0x8
+   DC_DISABLE_CLOCK_GATING = 0x8,
+   DC_DISABLE_PSR = 0x10,


Don't we need a corresponding check in amdgpu_dm_init() to disable PSR 
at runtime?


The check is `if (psr_feature_enabled)` above.


Also, how does it handle conflicting declarations from the feature mask 
and the debug mask?


The feature enable mask is used on older ASICs to allow PSR to be enabled.

For both old and new ASICs, the DISABLE mask takes priority as a debug 
option for disabling PSR support.


Regards,
Nicholas Kazlauskas



/BR
/

/Pratik
/


  };
  
  enum amd_dpm_forced_level;




Re: [PATCH] drm/amd/display: Fix white screen page fault for gpuvm

2021-09-13 Thread Kazlauskas, Nicholas

On 2021-09-13 3:13 p.m., Alex Deucher wrote:

Acked-by: Alex Deucher 

Can you add a fixes: tag?

Alex


Sure, I think the relevant patch is:

Fixes: 64b1d0e8d50 ("drm/amd/display: Add DCN3.1 HWSEQ")

Regards,
Nicholas Kazlauskas



On Mon, Sep 13, 2021 at 3:11 PM Nicholas Kazlauskas
 wrote:


[Why]
The "base_addr_is_mc_addr" field was added for dcn3.1 support but
pa_config was never updated to set it to false.

Uninitialized memory causes it to be set to true which results in
address mistranslation and white screen.

[How]
Use memset to ensure all fields are initialized to 0 by default.

Cc: Aaron Liu 
Signed-off-by: Nicholas Kazlauskas 
---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 53363728dbb..b0426bb3f2e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1125,6 +1125,8 @@ static void mmhub_read_system_context(struct amdgpu_device *adev, struct dc_phy_
 uint32_t agp_base, agp_bot, agp_top;
 PHYSICAL_ADDRESS_LOC page_table_start, page_table_end, page_table_base;

+   memset(pa_config, 0, sizeof(*pa_config));
+
 logical_addr_low  = min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18;
 pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);

--
2.25.1





Re: [PATCH] amd/display: downgrade validation failure log level

2021-09-07 Thread Kazlauskas, Nicholas

On 2021-09-07 10:19 a.m., Simon Ser wrote:

In amdgpu_dm_atomic_check, dc_validate_global_state is called. On
failure this logs a warning to the kernel journal. However warnings
shouldn't be used for atomic test-only commit failures: user-space
might be performing a lot of atomic test-only commits to find the
best hardware configuration.

Downgrade the log to a regular DRM atomic message. While at it, use
the new device-aware logging infrastructure.

This fixes error messages in the kernel when running gamescope [1].

[1]: https://github.com/Plagman/gamescope/issues/245

Signed-off-by: Simon Ser 
Cc: Alex Deucher 
Cc: Harry Wentland 
Cc: Nicholas Kazlauskas 


Makes sense since validation can fail. Thanks for the patch!

Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 986c9d29d686..6f3b6f2a952c 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -10467,7 +10467,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
goto fail;
status = dc_validate_global_state(dc, dm_state->context, false);
if (status != DC_OK) {
-   DC_LOG_WARNING("DC global validation failure: %s (%d)",
+   drm_dbg_atomic(dev,
+  "DC global validation failure: %s (%d)",
   dc_status_to_str(status), status);
ret = -EINVAL;
goto fail;





Re: [PATCH 3/7] drm/amd/display: Use vblank control events for PSR enable/disable

2021-09-07 Thread Kazlauskas, Nicholas

On 2021-09-04 10:36 a.m., Mike Lothian wrote:

Hi

This patch is causing issues on my PRIME system

I've opened https://gitlab.freedesktop.org/drm/amd/-/issues/1700 to track

Cheers

Mike


We don't create the workqueue on headless configs so I guess all the 
instances of flush need to be guarded with NULL checks first.


Thanks for reporting this!

Regards,
Nicholas Kazlauskas




On Fri, 13 Aug 2021 at 07:35, Wayne Lin  wrote:


From: Nicholas Kazlauskas 

[Why]
PSR can disable the HUBP along with the OTG when PSR is active.

We'll hit a pageflip timeout when the OTG is disabled because we're no
longer updating the CRTC vblank counter and the pflip high IRQ will
not fire on the flip.

In order to fix the page flip timeout, we should modify the
enter/exit conditions to match DRM requirements.

[How]
Use our deferred handlers for DRM vblank control to notify DMCU(B)
when it can enable or disable PSR based on whether vblank is disabled or
enabled respectively.

We'll need to pass along the stream with the notification now because
we want to access the CRTC state while the CRTC is locked to get the
stream state prior to the commit.

Retain a reference to the stream so it remains safe to continue to
access and release that reference once we're done with it.

Enable/disable logic follows what we were previously doing in
update_planes.

The workqueue has to be flushed before programming streams or planes
to ensure that we exit out of idle optimizations and PSR before
these events occur if necessary.

Keeping the skip count logic the same to avoid FBCON PSR enablement
requires copying the allow condition onto the DM IRQ parameters - a
field that we can actually access from the worker.

Reviewed-by: Roman Li 
Acked-by: Wayne Lin 
Signed-off-by: Nicholas Kazlauskas 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 48 +++
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |  2 +
  .../display/amdgpu_dm/amdgpu_dm_irq_params.h  |  1 +
  3 files changed, 43 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index f88b6c5b83cd..cebd663b6708 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1061,7 +1061,22 @@ static void vblank_control_worker(struct work_struct 
*work)

 DRM_DEBUG_KMS("Allow idle optimizations (MALL): %d\n", 
dm->active_vblank_irq_count == 0);

+   /* Control PSR based on vblank requirements from OS */
+   if (vblank_work->stream && vblank_work->stream->link) {
+   if (vblank_work->enable) {
+   if 
(vblank_work->stream->link->psr_settings.psr_allow_active)
+   amdgpu_dm_psr_disable(vblank_work->stream);
+   } else if (vblank_work->stream->link->psr_settings.psr_feature_enabled 
&&
+  !vblank_work->stream->link->psr_settings.psr_allow_active 
&&
+  vblank_work->acrtc->dm_irq_params.allow_psr_entry) {
+   amdgpu_dm_psr_enable(vblank_work->stream);
+   }
+   }
+
 mutex_unlock(&dm->dc_lock);
+
+   dc_stream_release(vblank_work->stream);
+
 kfree(vblank_work);
  }

@@ -6018,6 +6033,11 @@ static inline int dm_set_vblank(struct drm_crtc *crtc, 
bool enable)
 work->acrtc = acrtc;
 work->enable = enable;

+   if (acrtc_state->stream) {
+   dc_stream_retain(acrtc_state->stream);
+   work->stream = acrtc_state->stream;
+   }
+
 queue_work(dm->vblank_control_workqueue, &work->work);
  #endif

@@ -8623,6 +8643,12 @@ static void amdgpu_dm_commit_planes(struct 
drm_atomic_state *state,
 /* Update the planes if changed or disable if we don't have any. */
 if ((planes_count || acrtc_state->active_planes == 0) &&
 acrtc_state->stream) {
+   /*
+* If PSR or idle optimizations are enabled then flush out
+* any pending work before hardware programming.
+*/
+   flush_workqueue(dm->vblank_control_workqueue);
+
 bundle->stream_update.stream = acrtc_state->stream;
 if (new_pcrtc_state->mode_changed) {
 bundle->stream_update.src = acrtc_state->stream->src;
@@ -8691,16 +8717,20 @@ static void amdgpu_dm_commit_planes(struct 
drm_atomic_state *state,
 acrtc_state->stream->link->psr_settings.psr_version != 
DC_PSR_VERSION_UNSUPPORTED &&
   

Re: [PATCH v2] drm/amd/display: Fix two cursor duplication when using overlay

2021-08-24 Thread Kazlauskas, Nicholas

On 2021-08-24 9:59 a.m., Simon Ser wrote:

Hi Rodrigo!

Thanks a lot for your reply! Comments below, please bear with me: I'm
a bit familiar with the cursor issues, but my knowledge of AMD hw is
still severely lacking.

On Wednesday, August 18th, 2021 at 15:18, Rodrigo Siqueira 
 wrote:


On 08/18, Simon Ser wrote:

Hm. This patch causes a regression for me. I was using primary + overlay
not covering the whole primary plane + cursor before. This patch breaks it.


Which branch are you using? Recently, I reverted part of that patch,
see:

   Revert "drm/amd/display: Fix overlay validation by considering cursors"


Right. This revert actually makes things worse. Prior to the revert the
overlay could be enabled without the cursor. With the revert the overlay
cannot be enabled at all, even if the cursor is disabled.


This patch makes the overlay plane very useless for me, because the primary
plane is always under the overlay plane.


I'm curious about your use case with overlay planes. Could you help me
to understand it better? If possible, describe:

1. Context and scenario
2. Compositor
3. Kernel version
4. If you know which IGT test describe your test?

I'm investigating overlay issues in our driver, and a userspace
perspective might help me.


I'm working on gamescope [1], Valve's gaming compositor. Our use-cases include
displaying (from bottom to top) a game in the background, a notification popup
over it in the overlay plane, and a cursor in the cursor plane. All of the
planes might be rotated. The game's buffer might be scaled and might not cover
the whole CRTC.

libliftoff [2] is used to provide vendor-agnostic KMS plane offload. In other
words, I'd prefer to avoid relying too much on hardware specific details, e.g.
I'd prefer to avoid hole-punching via an underlay (it might work on AMD hw, but
will fail on many other drivers).


Hi Simon,

Siqueira explained a bit below, but the problem is that we don't have 
dedicated cursor planes in hardware.


It's easiest to understand the hardware cursor as being constrained within 
the DRM plane specifications. Each DRM plane maps to 1 (or 2) hardware 
pipes and the cursor has to be drawn along with it. The cursor will 
inherit the scale, bounds, and color management associated with the 
underlying pipes.


From the kernel display driver perspective that makes things quite 
difficult with the existing DRM API - we can only really guarantee you 
get HW cursor when the framebuffer covers the entire screen and it is 
unscaled or matches the scaling expected by the user.


Hole punching generally satisfies both of these since it's a transparent 
framebuffer that covers the entire screen.


The case that's slightly more complicated is when the overlay doesn't 
cover the entire screen but the primary plane does. We can still enable 
the cursor if the primary plane and overlay have a matching scale and 
color management - our display hardware can draw the cursor on multiple 
pipes. (Note: this statement only applies for DCN2.1+)


If the overlay plane does not cover the entire screen and the scale or 
the color management differs then we cannot enable the HW cursor plane. 
As you mouse over the bounds of the overlay you will see the cursor 
drawn differently on the primary and overlay pipe.


If the overlay plane and primary plane do not cover the entire screen 
then you will lose HW cursor outside of the union of their bounds.


Correct me if I'm wrong, but I think your usecase [1] falls under the 
category where:

1. Primary plane covers entire screen
2. Overlay plane does not cover the entire screen
3. Overlay plane is scaled

This isn't a supported configuration because HW cursor cannot be drawn in 
the same position on both pipes.
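The constraints laid out above can be condensed into a single predicate. The sketch below is a self-contained model of that decision, not DC code; the struct fields and the function name are invented for illustration, and color management is reduced to an id where "equal ids" stands in for "matching CM":

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of a DRM plane's properties relevant to the
 * HW-cursor decision described above. */
struct plane_model {
        bool covers_screen;     /* plane spans the entire CRTC */
        int scale_x100;         /* scaling factor times 100 (100 = unscaled) */
        int color_mgmt_id;      /* equal ids == matching color management */
};

/* Can the HW cursor be drawn consistently across the pipes?
 * (Multi-pipe cursor with matching scale/CM applies to DCN2.1+.) */
static bool hw_cursor_supported(const struct plane_model *primary,
                                const struct plane_model *overlay)
{
        if (!overlay)
                return true;    /* single pipe: cursor inherits its state */
        if (!primary->covers_screen)
                return false;   /* cursor lost outside the union of bounds */
        if (!overlay->covers_screen)
                /* Both pipes must draw the cursor identically. */
                return primary->scale_x100 == overlay->scale_x100 &&
                       primary->color_mgmt_id == overlay->color_mgmt_id;
        return true;            /* full-screen overlay (hole punch) case */
}
```

The gamescope case (full-screen primary, scaled overlay) falls into the mismatched-scale branch, which is why it cannot light up HW cursor.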


I think you can see a similar usecase to [1] on Windows, but the 
difference is that the cursor is drawn on the "primary plane" instead of 
on top of the primary and overlay. I don't remember if DRM has a 
requirement that the cursor plane must be topmost, but we can't enable 
[1] as long as it is.


I don't know if you have more usecases in mind than [1], but just as 
some general recommendations I think you should only really use overlays 
when they fall under one of two categories:


1. You want to save power:

You will burn additional power for the overlay pipe.

But you will save power in use cases like video playback - where the 
decoder produces the framebuffer and we can avoid a shader composited 
copy with its associated GFX engine overhead and memory traffic.


2. You want more performance:

You will burn additional power for the overlay pipe.

On bandwidth constrained systems you can save significant memory 
bandwidth by avoiding the shader composition by allowing for direct 
scanout of game or other application buffers.


Your usecase [1] falls under this category, but as an aside I discourage 
trying to design usecases where the compositor requires the overlay for 
functional purposes.


Best regards,
Nicholas Kazlauskas




Re: [PATCH v3 5/6] drm/amd/display: Add DP 2.0 BIOS and DMUB Support

2021-08-20 Thread Kazlauskas, Nicholas

On 2021-08-19 2:58 p.m., Fangzhi Zuo wrote:

Parse DP2 encoder caps and hpo instance from bios

Signed-off-by: Fangzhi Zuo 
---
  drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 10 ++
  drivers/gpu/drm/amd/display/dc/bios/command_table2.c   | 10 ++
  .../drm/amd/display/dc/dcn30/dcn30_dio_link_encoder.c  |  4 
  drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h   |  6 ++
  drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h|  4 
  .../gpu/drm/amd/display/include/bios_parser_types.h| 10 ++
  drivers/gpu/drm/amd/include/atomfirmware.h |  6 ++
  7 files changed, 50 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c 
b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
index 6dbde74c1e06..cdb5c027411a 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -1604,6 +1604,16 @@ static enum bp_result bios_parser_get_encoder_cap_info(
ATOM_ENCODER_CAP_RECORD_HBR3_EN) ? 1 : 0;
info->HDMI_6GB_EN = (record->encodercaps &
ATOM_ENCODER_CAP_RECORD_HDMI6Gbps_EN) ? 1 : 0;
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+   info->IS_DP2_CAPABLE = (record->encodercaps &
+   ATOM_ENCODER_CAP_RECORD_DP2) ? 1 : 0;
+   info->DP_UHBR10_EN = (record->encodercaps &
+   ATOM_ENCODER_CAP_RECORD_UHBR10_EN) ? 1 : 0;
+   info->DP_UHBR13_5_EN = (record->encodercaps &
+   ATOM_ENCODER_CAP_RECORD_UHBR13_5_EN) ? 1 : 0;
+   info->DP_UHBR20_EN = (record->encodercaps &
+   ATOM_ENCODER_CAP_RECORD_UHBR20_EN) ? 1 : 0;
+#endif
info->DP_IS_USB_C = (record->encodercaps &
ATOM_ENCODER_CAP_RECORD_USB_C_TYPE) ? 1 : 0;
  
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c

index f1f672a997d7..6e333b4af7d6 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
@@ -340,6 +340,13 @@ static enum bp_result transmitter_control_v1_7(
const struct command_table_helper *cmd = bp->cmd_helper;
struct dmub_dig_transmitter_control_data_v1_7 dig_v1_7 = {0};
  
+#if defined(CONFIG_DRM_AMD_DC_DCN)

+   uint8_t hpo_instance = (uint8_t)cntl->hpo_engine_id - ENGINE_ID_HPO_0;
+
+   if (dc_is_dp_signal(cntl->signal))
+   hpo_instance = (uint8_t)cntl->hpo_engine_id - 
ENGINE_ID_HPO_DP_0;
+#endif
+
dig_v1_7.phyid = cmd->phy_id_to_atom(cntl->transmitter);
dig_v1_7.action = (uint8_t)cntl->action;
  
@@ -353,6 +360,9 @@ static enum bp_result transmitter_control_v1_7(

dig_v1_7.hpdsel = cmd->hpd_sel_to_atom(cntl->hpd_sel);
dig_v1_7.digfe_sel = cmd->dig_encoder_sel_to_atom(cntl->engine_id);
dig_v1_7.connobj_id = (uint8_t)cntl->connector_obj_id.id;
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+   dig_v1_7.HPO_instance = hpo_instance;
+#endif
dig_v1_7.symclk_units.symclk_10khz = cntl->pixel_clock/10;
  
  	if (cntl->action == TRANSMITTER_CONTROL_ENABLE ||

diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_link_encoder.c 
b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_link_encoder.c
index 46ea39f5ef8d..6f3c2fb60790 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_link_encoder.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_link_encoder.c
@@ -192,6 +192,10 @@ void dcn30_link_encoder_construct(
enc10->base.features.flags.bits.IS_HBR3_CAPABLE =
bp_cap_info.DP_HBR3_EN;
enc10->base.features.flags.bits.HDMI_6GB_EN = 
bp_cap_info.HDMI_6GB_EN;
+   enc10->base.features.flags.bits.IS_DP2_CAPABLE = 
bp_cap_info.IS_DP2_CAPABLE;
+   enc10->base.features.flags.bits.IS_UHBR10_CAPABLE = 
bp_cap_info.DP_UHBR10_EN;
+   enc10->base.features.flags.bits.IS_UHBR13_5_CAPABLE = 
bp_cap_info.DP_UHBR13_5_EN;
+   enc10->base.features.flags.bits.IS_UHBR20_CAPABLE = 
bp_cap_info.DP_UHBR20_EN;


Please drop the DCN guards around this section - don't want to modify 
the bit field structure based on DCN vs DCE only.



enc10->base.features.flags.bits.DP_IS_USB_C =
bp_cap_info.DP_IS_USB_C;
} else {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h 
b/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h
index fa3a725e11dc..b99efcf4712f 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h
@@ -59,6 +59,12 @@ struct encoder_feature_support {
uint32_t IS_TPS3_CAPABLE:1;
uint32_t IS_TPS4_CAPABLE:1;
uint32_t HDMI_6GB_EN:1;
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+   uint32_t IS_DP2_CAPABLE:1;
+   

Re: [PATCH 2/6] drm/amd/display: Add DP 2.0 HPO Stream Encoder

2021-08-17 Thread Kazlauskas, Nicholas

On 2021-08-16 4:59 p.m., Fangzhi Zuo wrote:

HW Blocks:

 +--------+      +-----+     +------+
 |  OPTC  |      | HDA |     | HUBP |
 +--------+      +-----+     +------+
      |             |            |
      |             |            |
  HPO |=============|============|==
   |  |             |            v
   |  |             |         +-----+
   |  |             |         | APG |
   |  |             |         +-----+
   |  |             |            |
   |  |             |            |
   v  v             v            v
 +----------------------------------+
 |        HPO Stream Encoder        |
 +----------------------------------+

Signed-off-by: Fangzhi Zuo 
---
  .../amd/display/dc/dcn10/dcn10_hw_sequencer.c |  35 +
  drivers/gpu/drm/amd/display/dc/dcn31/Makefile |   2 +-
  .../dc/dcn31/dcn31_hpo_dp_stream_encoder.c| 761 ++
  .../dc/dcn31/dcn31_hpo_dp_stream_encoder.h| 241 ++
  .../drm/amd/display/dc/dcn31/dcn31_resource.c |  85 ++
  .../gpu/drm/amd/display/dc/inc/core_types.h   |   4 +
  .../gpu/drm/amd/display/dc/inc/hw/hw_shared.h |   1 +
  .../amd/display/dc/inc/hw/stream_encoder.h|  79 ++
  drivers/gpu/drm/amd/display/dc/inc/resource.h |   4 +
  .../amd/display/include/grph_object_defs.h|  10 +
  .../drm/amd/display/include/grph_object_id.h  |   6 +
  11 files changed, 1227 insertions(+), 1 deletion(-)
  create mode 100644 
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.c
  create mode 100644 
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_encoder.h

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index df8a7718a85f..cffd9e6f44b2 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -466,6 +466,41 @@ void dcn10_log_hw_state(struct dc *dc,
  
  	log_mpc_crc(dc, log_ctx);
  
+#if defined(CONFIG_DRM_AMD_DC_DCN3_1)

+   {
+   int hpo_dp_link_enc_count = 0;
+
+   if (pool->hpo_dp_stream_enc_count > 0) {
+   DTN_INFO("DP HPO S_ENC:  Enabled  OTG   Format   Depth   Vid 
  SDP   Compressed  Link\n");
+   for (i = 0; i < pool->hpo_dp_stream_enc_count; i++) {
+   struct hpo_dp_stream_encoder_state 
hpo_dp_se_state = {0};
+   struct hpo_dp_stream_encoder *hpo_dp_stream_enc = 
pool->hpo_dp_stream_enc[i];
+
+   if (hpo_dp_stream_enc && 
hpo_dp_stream_enc->funcs->read_state) {
+   
hpo_dp_stream_enc->funcs->read_state(hpo_dp_stream_enc, &hpo_dp_se_state);
+
+   DTN_INFO("[%d]: %d%d   
%6s   %d %d %d%d %d\n",
+   hpo_dp_stream_enc->id - 
ENGINE_ID_HPO_DP_0,
+   
hpo_dp_se_state.stream_enc_enabled,
+   
hpo_dp_se_state.otg_inst,
+   (hpo_dp_se_state.pixel_encoding 
== 0) ? "4:4:4" :
+   
((hpo_dp_se_state.pixel_encoding == 1) ? "4:2:2" :
+   
(hpo_dp_se_state.pixel_encoding == 2) ? "4:2:0" : "Y-Only"),
+   
(hpo_dp_se_state.component_depth == 0) ? 6 :
+   
((hpo_dp_se_state.component_depth == 1) ? 8 :
+   
(hpo_dp_se_state.component_depth == 2) ? 10 : 12),
+   
hpo_dp_se_state.vid_stream_enabled,
+   
hpo_dp_se_state.sdp_enabled,
+   
hpo_dp_se_state.compressed_format,
+   
hpo_dp_se_state.mapped_to_link_enc);
+   }
+   }
+
+   DTN_INFO("\n");
+   }
+   }
+#endif
+
DTN_INFO_END();
  }
  
diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/Makefile b/drivers/gpu/drm/amd/display/dc/dcn31/Makefile

index bc2087f6dcb2..8b811f589524 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/Makefile
@@ -12,7 +12,7 @@
  
  DCN31 = dcn31_resource.o dcn31_hubbub.o dcn31_hwseq.o dcn31_init.o dcn31_hubp.o \

dcn31_dccg.o dcn31_optc.o dcn31_dio_link_encoder.o dcn31_panel_cntl.o \
-   dcn31_apg.o
+   dcn31_apg.o dcn31_hpo_dp_stream_encoder.o
  
  ifdef CONFIG_X86

  CFLAGS_$(AMDDALPATH)/dc/dcn31/dcn31_resource.o := -msse
diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_stream_enco

Re: [PATCH] drm/amdgpu/display: fold DRM_AMD_DC_DCN3_1 into DRM_AMD_DC_DCN

2021-06-21 Thread Kazlauskas, Nicholas

On 2021-06-21 4:58 p.m., Alex Deucher wrote:

No need for a separate flag now that DCN3.1 is not in bring up.
Fold into DRM_AMD_DC_DCN like previous DCN IPs.

Signed-off-by: Alex Deucher 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/Kconfig   |  7 --
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 22 +--
  .../amd/display/amdgpu_dm/amdgpu_dm_hdcp.c|  4 
  drivers/gpu/drm/amd/display/dc/Makefile   |  2 --
  .../drm/amd/display/dc/bios/bios_parser2.c|  7 +-
  .../display/dc/bios/command_table_helper2.c   |  6 +
  .../gpu/drm/amd/display/dc/clk_mgr/Makefile   |  2 --
  .../gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c  |  7 --
  .../display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c  |  2 --
  drivers/gpu/drm/amd/display/dc/core/dc.c  |  8 +++
  drivers/gpu/drm/amd/display/dc/core/dc_link.c |  6 ++---
  .../gpu/drm/amd/display/dc/core/dc_resource.c | 10 ++---
  .../gpu/drm/amd/display/dc/core/dc_stream.c   |  4 
  drivers/gpu/drm/amd/display/dc/dc.h   | 14 +---
  drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c  |  3 +--
  drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h  |  3 +--
  .../gpu/drm/amd/display/dc/dce/dce_hwseq.h|  6 -
  .../display/dc/dce110/dce110_hw_sequencer.c   |  4 ++--
  .../drm/amd/display/dc/dcn10/dcn10_hubbub.h   |  9 +---
  .../amd/display/dc/dcn10/dcn10_link_encoder.h |  9 +---
  .../gpu/drm/amd/display/dc/dcn20/dcn20_dccg.h |  8 ---
  .../drm/amd/display/dc/dcn20/dcn20_hubbub.h   |  2 --
  .../gpu/drm/amd/display/dc/dcn20/dcn20_hubp.h | 10 -
  .../drm/amd/display/dc/dcn20/dcn20_hwseq.c| 19 +++-
  .../drm/amd/display/dc/dcn20/dcn20_resource.c | 16 --
  .../drm/amd/display/dc/dcn30/dcn30_hwseq.c|  2 --
  .../drm/amd/display/dc/dcn31/dcn31_hwseq.c|  2 --
  drivers/gpu/drm/amd/display/dc/dm_cp_psp.h|  2 --
  drivers/gpu/drm/amd/display/dc/dml/Makefile   |  6 -
  .../dc/dml/dcn31/display_mode_vba_31.c|  2 --
  .../dc/dml/dcn31/display_rq_dlg_calc_31.c |  3 ---
  .../drm/amd/display/dc/dml/display_mode_lib.c |  9 ++--
  .../drm/amd/display/dc/dml/display_mode_lib.h |  2 --
  .../amd/display/dc/dml/display_mode_structs.h |  4 
  .../drm/amd/display/dc/dml/display_mode_vba.c | 12 --
  .../drm/amd/display/dc/dml/display_mode_vba.h |  6 -
  .../gpu/drm/amd/display/dc/gpio/hw_factory.c  |  2 --
  .../drm/amd/display/dc/gpio/hw_translate.c|  2 --
  .../gpu/drm/amd/display/dc/inc/core_types.h   |  6 -
  .../gpu/drm/amd/display/dc/inc/hw/clk_mgr.h   |  2 --
  drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h  |  6 -
  .../gpu/drm/amd/display/dc/inc/hw/dchubbub.h  |  2 --
  .../drm/amd/display/dc/inc/hw/link_encoder.h  | 14 +---
  .../gpu/drm/amd/display/dc/inc/hw/mem_input.h |  2 --
  .../amd/display/dc/inc/hw/timing_generator.h  |  2 --
  .../gpu/drm/amd/display/dc/inc/hw_sequencer.h |  2 --
  drivers/gpu/drm/amd/display/dc/irq/Makefile   |  2 --
  .../display/dc/irq/dcn31/irq_service_dcn31.h  |  3 ---
  drivers/gpu/drm/amd/display/dmub/dmub_srv.h   |  8 ---
  .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   | 14 +---
  drivers/gpu/drm/amd/display/dmub/src/Makefile |  6 +
  .../gpu/drm/amd/display/dmub/src/dmub_srv.c   |  4 
  .../gpu/drm/amd/display/include/dal_asic_id.h |  2 --
  .../gpu/drm/amd/display/include/dal_types.h   |  2 --
  .../drm/amd/display/modules/hdcp/hdcp_log.c   |  2 --
  .../drm/amd/display/modules/hdcp/hdcp_psp.c   | 18 ---
  .../drm/amd/display/modules/hdcp/hdcp_psp.h   | 13 ++-
  .../drm/amd/display/modules/inc/mod_hdcp.h| 10 -
  58 files changed, 45 insertions(+), 319 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/Kconfig 
b/drivers/gpu/drm/amd/display/Kconfig
index 5b5f36c80efb..7dffc04a557e 100644
--- a/drivers/gpu/drm/amd/display/Kconfig
+++ b/drivers/gpu/drm/amd/display/Kconfig
@@ -31,13 +31,6 @@ config DRM_AMD_DC_SI
  by default. This includes Tahiti, Pitcairn, Cape Verde, Oland.
  Hainan is not supported by AMD DC and it has no physical DCE6.
  
-config DRM_AMD_DC_DCN3_1

-bool "DCN 3.1 family"
-depends on DRM_AMD_DC_DCN
-help
-Choose this option if you want to have
-DCN3.1 family support for display engine
-
  config DEBUG_KERNEL_DC
bool "Enable kgdb break in DC"
depends on DRM_AMD_DC
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index d069661abe45..b5b5ccf0ed71 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -110,10 +110,8 @@ MODULE_FIRMWARE(FIRMWARE_VANGOGH_DMUB);
  MODULE_FIRMWARE(FIRMWARE_DIMGREY_CAVEFISH_DMUB);
  #define FIRMWARE_BEIGE_GOBY_DMUB "amdgpu/beige_goby_dmcub.bin"
  MODULE_FIRMWARE(FIRMWARE_BEIGE_GOBY_DMUB);
-#if defined(CONFIG_DRM_AMD

Re: [PATCH 2/2] drm/amdgpu/dc: fix DCN3.1 FP handling

2021-06-04 Thread Kazlauskas, Nicholas

On 2021-06-04 2:16 p.m., Alex Deucher wrote:

Missing proper DC_FP_START/DC_FP_END.

Signed-off-by: Alex Deucher 


Thanks for catching these.

Series is Reviewed-by: Nicholas Kazlauskas

Regards,
Nicholas Kazlauskas


---
  .../drm/amd/display/dc/dcn31/dcn31_resource.c  | 18 +-
  1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
index af978d2cb25f..0d6cb6caad81 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
@@ -1633,7 +1633,7 @@ static void dcn31_update_soc_for_wm_a(struct dc *dc, 
struct dc_state *context)
}
  }
  
-static void dcn31_calculate_wm_and_dlg(

+static void dcn31_calculate_wm_and_dlg_fp(
struct dc *dc, struct dc_state *context,
display_e2e_pipe_params_st *pipes,
int pipe_cnt,
@@ -1759,6 +1759,17 @@ static void dcn31_calculate_wm_and_dlg(
dcn20_calculate_dlg_params(dc, context, pipes, pipe_cnt, vlevel);
  }
  
+static void dcn31_calculate_wm_and_dlg(

+   struct dc *dc, struct dc_state *context,
+   display_e2e_pipe_params_st *pipes,
+   int pipe_cnt,
+   int vlevel)
+{
+   DC_FP_START();
+   dcn31_calculate_wm_and_dlg_fp(dc, context, pipes, pipe_cnt, vlevel);
+   DC_FP_END();
+}
+
  static struct dc_cap_funcs cap_funcs = {
.get_dcc_compression_cap = dcn20_get_dcc_compression_cap
  };
@@ -1890,6 +1901,8 @@ static bool dcn31_resource_construct(
struct dc_context *ctx = dc->ctx;
struct irq_service_init_data init_data;
  
+	DC_FP_START();

+
ctx->dc_bios->regs = &bios_regs;
  
  	pool->base.res_cap = &res_cap_dcn31;

@@ -2152,10 +2165,13 @@ static bool dcn31_resource_construct(
  
  	dc->cap_funcs = cap_funcs;
  
+	DC_FP_END();

+
return true;
  
  create_fail:
  
+	DC_FP_END();

dcn31_resource_destruct(pool);
  
  	return false;




___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu/display: make backlight setting failure messages debug

2021-05-21 Thread Kazlauskas, Nicholas

On 2021-05-21 12:08 a.m., Alex Deucher wrote:

Avoid spamming the log.  The backlight controller on DCN chips
gets powered down when the display is off, so if you attempt to
set the backlight level when the display is off, you'll get this
message.  This isn't a problem as we cache the requested backlight
level if it's adjusted when the display is off and set it again
during modeset.

Signed-off-by: Alex Deucher 
Cc: nicholas.c...@amd.com
Cc: harry.wentl...@amd.com


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index b8026c1baf36..c1f7456aeaa0 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3506,7 +3506,7 @@ static int amdgpu_dm_backlight_set_level(struct 
amdgpu_display_manager *dm,
rc = dc_link_set_backlight_level_nits(link[i], true, 
brightness[i],
AUX_BL_DEFAULT_TRANSITION_TIME_MS);
if (!rc) {
-   DRM_ERROR("DM: Failed to update backlight via AUX on 
eDP[%d]\n", i);
+   DRM_DEBUG("DM: Failed to update backlight via AUX on 
eDP[%d]\n", i);
break;
}
}
@@ -3514,7 +3514,7 @@ static int amdgpu_dm_backlight_set_level(struct 
amdgpu_display_manager *dm,
for (i = 0; i < dm->num_of_edps; i++) {
rc = dc_link_set_backlight_level(dm->backlight_link[i], 
brightness[i], 0);
if (!rc) {
-   DRM_ERROR("DM: Failed to update backlight on 
eDP[%d]\n", i);
+   DRM_DEBUG("DM: Failed to update backlight on 
eDP[%d]\n", i);
break;
}
}



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/display: take dc_lock in short pulse handler only

2021-05-19 Thread Kazlauskas, Nicholas

On 2021-05-19 4:55 p.m., Aurabindo Pillai wrote:

[Why]
Conditions that end up modifying the global dc state must be locked.
However, during mst allocate payload sequence, lock is already taken.
With StarTech 1.2 DP hub, we get an HPD RX interrupt for a reason other
than to indicate down reply availability right after sending payload
allocation. The handler again takes dc lock before calling the
dc's HPD RX handler. Due to this contention, the DRM thread which waits
for MST down reply never gets a chance to finish its waiting
successfully and ends up timing out. Once the lock is released, the hpd
rx handler fires and goes ahead to read from the MST HUB, but now it's
too late and the HUB doesn't light up all displays since DRM lacks error
handling when payload allocation fails.

[How]
Take lock only if there is a change in link status or if automated test
pattern bit is set. The latter fixes the null pointer dereference when
running certain DP Link Layer Compliance test.

Signed-off-by: Aurabindo Pillai 


Discussed this a bit offline and I'd *really* like the proper interface 
in sooner rather than later.


Conditional locking is almost always a sign of a bug, in this case we 
know it's OK but someone can change the function underneath later 
without understanding that we're duplicating some of the checking logic 
in the upper layer.


I don't think the code changes enough in this area for this to happen 
(as it's spec based), but please be mindful and consider splitting the 
checking logic (which is thread safe) out from the link loss logic (the 
functional bit, which isn't thread safe).
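The split being asked for can be sketched as a small model: a pure, thread-safe predicate over the IRQ data, kept separate from the functional handler that mutates shared link state and therefore needs `dm->dc_lock`. All names below are invented for the sketch (the lock is modeled as a flag), not the DC API:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for union hpd_irq_data. */
struct hpd_irq_model {
        bool link_loss;
        bool automated_test;
};

/* Thread-safe half: inspects only its argument, touches no shared state.
 * Mirrors hpd_rx_irq_check_link_loss_status plus the AUTOMATED_TEST bit. */
static bool hpd_irq_needs_dc_lock(const struct hpd_irq_model *data)
{
        return data->link_loss || data->automated_test;
}

static int handled;     /* stands in for mutated link/DC state */

/* Functional half: the caller must hold the lock whenever the predicate
 * says the handler will touch global state. */
static void hpd_irq_handle(const struct hpd_irq_model *data, bool lock_held)
{
        if (hpd_irq_needs_dc_lock(data) && !lock_held)
                return;         /* refuse to touch state unlocked */
        handled++;
}
```

With this shape the outer layer never duplicates the checking logic: it calls the predicate to decide whether to take the lock, and the same predicate guards the handler.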


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 19 +--
  .../gpu/drm/amd/display/dc/core/dc_link_dp.c  |  2 +-
  .../gpu/drm/amd/display/dc/inc/dc_link_dp.h   |  4 
  3 files changed, 22 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index e79910cc179c..2c9d099adfc2 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -28,6 +28,7 @@
  
  #include "dm_services_types.h"

  #include "dc.h"
+#include "dc_link_dp.h"
  #include "dc/inc/core_types.h"
  #include "dal_asic_id.h"
  #include "dmub/dmub_srv.h"
@@ -2740,6 +2741,7 @@ static void handle_hpd_rx_irq(void *param)
enum dc_connection_type new_connection_type = dc_connection_none;
struct amdgpu_device *adev = drm_to_adev(dev);
union hpd_irq_data hpd_irq_data;
+   bool lock_flag = 0;
  
  	memset(&hpd_irq_data, 0, sizeof(hpd_irq_data));
  
@@ -2769,15 +2771,28 @@ static void handle_hpd_rx_irq(void *param)

}
}
  
-	if (!amdgpu_in_reset(adev)) {

+   /*
+* TODO: We need the lock to avoid touching DC state while it's being
+* modified during automated compliance testing, or when link loss
+* happens. While this should be split into subhandlers and proper
+* interfaces to avoid having to conditionally lock like this in the
+* outer layer, we need this workaround temporarily to allow MST
+* lightup in some scenarios to avoid timeout.
+*/
+   if (!amdgpu_in_reset(adev) &&
+   (hpd_rx_irq_check_link_loss_status(dc_link, &hpd_irq_data) ||
+hpd_irq_data.bytes.device_service_irq.bits.AUTOMATED_TEST)) {
mutex_lock(&adev->dm.dc_lock);
+   lock_flag = 1;
+   }
+
  #ifdef CONFIG_DRM_AMD_DC_HDCP
result = dc_link_handle_hpd_rx_irq(dc_link, &hpd_irq_data, NULL);
  #else
result = dc_link_handle_hpd_rx_irq(dc_link, NULL, NULL);
  #endif
+   if (!amdgpu_in_reset(adev) && lock_flag)
mutex_unlock(&adev->dm.dc_lock);
-   }
  
  out:

if (result && !is_mst_root_connector) {
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 9e08410bfdfd..32fb9cdbd980 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -2070,7 +2070,7 @@ enum dc_status read_hpd_rx_irq_data(
return retval;
  }
  
-static bool hpd_rx_irq_check_link_loss_status(

+bool hpd_rx_irq_check_link_loss_status(
struct dc_link *link,
union hpd_irq_data *hpd_irq_dpcd_data)
  {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h 
b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
index ffc3f2c63db8..7dd8bca542b9 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
@@ -68,6 +68,10 @@ bool perform_link_training_with_retries(
enum signal_type signal,
bool do_fallback);
  
+bool hpd_rx_irq_check_link_loss_status(

+   struct dc_link *link,
+   union hpd_irq_data *hpd_irq_dpcd_data);
+
  bool is_mst_supported(struct dc_link *link);
  

Re: [PATCH] drm/amd/display: Move plane code from amdgpu_dm to amdgpu_dm_plane

2021-05-07 Thread Kazlauskas, Nicholas

On 2021-05-07 10:39 a.m., Rodrigo Siqueira wrote:

The amdgpu_dm file contains most of the code that works as an interface
between DRM API and Display Core. We maintain all the plane operations
inside amdgpu_dm; this commit extracts the plane code to its specific
file named amdgpu_dm_plane. This commit does not introduce any
functional change to the functions; it only changes some static
functions to global and adds some minor adjustments related to the copy
from one place to another.

Signed-off-by: Rodrigo Siqueira 
---
  .../gpu/drm/amd/display/amdgpu_dm/Makefile|9 +-
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 1479 +---
  .../amd/display/amdgpu_dm/amdgpu_dm_plane.c   | 1496 +
  .../amd/display/amdgpu_dm/amdgpu_dm_plane.h   |   56 +
  4 files changed, 1559 insertions(+), 1481 deletions(-)
  create mode 100644 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
  create mode 100644 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.h

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile 
b/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile
index 9a3b7bf8ab0b..6542ef0ff83e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/Makefile
@@ -23,9 +23,12 @@
  # Makefile for the 'dm' sub-component of DAL.
  # It provides the control and status of dm blocks.
  
-

-
-AMDGPUDM = amdgpu_dm.o amdgpu_dm_irq.o amdgpu_dm_mst_types.o amdgpu_dm_color.o
+AMDGPUDM := \
+   amdgpu_dm.o \
+   amdgpu_dm_color.o \
+   amdgpu_dm_irq.o \
+   amdgpu_dm_mst_types.o \
+   amdgpu_dm_plane.o
  
  ifneq ($(CONFIG_DRM_AMD_DC),)

  AMDGPUDM += amdgpu_dm_services.o amdgpu_dm_helpers.o amdgpu_dm_pp_smu.o
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index cc048c348a92..60ddb4d8be6c 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -44,6 +44,7 @@
  #include "amdgpu_ucode.h"
  #include "atom.h"
  #include "amdgpu_dm.h"
+#include "amdgpu_dm_plane.h"
  #ifdef CONFIG_DRM_AMD_DC_HDCP
  #include "amdgpu_dm_hdcp.h"
  #include 
@@ -181,10 +182,6 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev);
  /* removes and deallocates the drm structures, created by the above function */
  static void amdgpu_dm_destroy_drm_device(struct amdgpu_display_manager *dm);
  
-static int amdgpu_dm_plane_init(struct amdgpu_display_manager *dm,
-   struct drm_plane *plane,
-   unsigned long possible_crtcs,
-   const struct dc_plane_cap *plane_cap);
  static int amdgpu_dm_crtc_init(struct amdgpu_display_manager *dm,
   struct drm_plane *plane,
   uint32_t link_index);
@@ -203,9 +200,6 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state);
  static int amdgpu_dm_atomic_check(struct drm_device *dev,
  struct drm_atomic_state *state);
  
-static void handle_cursor_update(struct drm_plane *plane,
-struct drm_plane_state *old_plane_state);
-
  static void amdgpu_dm_set_psr_caps(struct dc_link *link);
  static bool amdgpu_dm_psr_enable(struct dc_stream_state *stream);
  static bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream);
@@ -4125,925 +4119,12 @@ static const struct drm_encoder_funcs amdgpu_dm_encoder_funcs = {
.destroy = amdgpu_dm_encoder_destroy,
  };
  
-
-
-static void get_min_max_dc_plane_scaling(struct drm_device *dev,
-struct drm_framebuffer *fb,
-int *min_downscale, int *max_upscale)
-{
-   struct amdgpu_device *adev = drm_to_adev(dev);
-   struct dc *dc = adev->dm.dc;
-   /* Caps for all supported planes are the same on DCE and DCN 1 - 3 */
-   struct dc_plane_cap *plane_cap = &dc->caps.planes[0];
-
-   switch (fb->format->format) {
-   case DRM_FORMAT_P010:
-   case DRM_FORMAT_NV12:
-   case DRM_FORMAT_NV21:
-   *max_upscale = plane_cap->max_upscale_factor.nv12;
-   *min_downscale = plane_cap->max_downscale_factor.nv12;
-   break;
-
-   case DRM_FORMAT_XRGB16161616F:
-   case DRM_FORMAT_ARGB16161616F:
-   case DRM_FORMAT_XBGR16161616F:
-   case DRM_FORMAT_ABGR16161616F:
-   *max_upscale = plane_cap->max_upscale_factor.fp16;
-   *min_downscale = plane_cap->max_downscale_factor.fp16;
-   break;
-
-   default:
-   *max_upscale = plane_cap->max_upscale_factor.argb;
-   *min_downscale = plane_cap->max_downscale_factor.argb;
-   break;
-   }
-
-   /*
-* A factor of 1 in the plane_cap means to not allow scaling, ie. use a
-* scaling factor of 1.0 == 1000 units.
-   

Re: [PATCH] drm/amd/display: Make underlay rules less strict

2021-05-07 Thread Kazlauskas, Nicholas

On 2021-05-07 10:37 a.m., Rodrigo Siqueira wrote:

Currently, we reject all conditions where the underlay plane goes
outside the overlay plane limits, which is not entirely correct since we
reject some valid cases like the ones illustrated below:

   ++  ++
   |   Overlay plane|  |   Overlay plane|
   ||  |+---|--+
   | +--+   |  ||   |  |
   | |  |   |  ||   |  |
   ++  ++  |
 | Primary plane|   +--+
 |  (underlay)  |
 +--+
   +-+--+---+  ++
   |Overlay plane   |  |Overlay plane   |
+-|+   |  |   +--+
| ||   |  |   || |
| ||   |  |   || |
| ||   |  |   || |
+-|+   |  |   +--+
   ++  ++

This patch fixes this issue by only rejecting commit requests where the
underlay is entirely outside the overlay limits. After applying this
patch, a set of subtests related to kms_plane, kms_plane_alpha_blend,
and kms_plane_scaling will pass.

Signed-off-by: Rodrigo Siqueira 


What's the size of the overlay plane in your examples? If the overlay 
plane does not cover the entire screen then this patch is incorrect.


We don't want to enable the cursor on multiple pipes, and the checks in 
DC that allow disabling the cursor on bottom pipes only work if the 
underlay is entirely contained within the overlay.


In the case where the primary (underlay) plane extends beyond the screen 
boundaries it should be preclipped by userspace or earlier in the DM 
code before this check.


Feel free to follow up with clarification, but for now this patch is a 
NAK from me.


Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 8 
  1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index cc048c348a92..15006aafc630 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -10098,10 +10098,10 @@ static int validate_overlay(struct drm_atomic_state *state)
	return 0;
  
  	/* Perform the bounds check to ensure the overlay plane covers the primary */
-	if (primary_state->crtc_x < overlay_state->crtc_x ||
-	    primary_state->crtc_y < overlay_state->crtc_y ||
-	    primary_state->crtc_x + primary_state->crtc_w > overlay_state->crtc_x + overlay_state->crtc_w ||
-	    primary_state->crtc_y + primary_state->crtc_h > overlay_state->crtc_y + overlay_state->crtc_h) {
+	if (primary_state->crtc_x + primary_state->crtc_w < overlay_state->crtc_x ||
+	    primary_state->crtc_x > overlay_state->crtc_x + overlay_state->crtc_w ||
+	    primary_state->crtc_y > overlay_state->crtc_y + overlay_state->crtc_h ||
+	    primary_state->crtc_y + primary_state->crtc_h < overlay_state->crtc_y) {
		DRM_DEBUG_ATOMIC("Overlay plane is enabled with hardware cursor but does not fully cover primary plane\n");
		return -EINVAL;
	}



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/display: Reject non-zero src_y and src_x for video planes

2021-04-22 Thread Kazlauskas, Nicholas

On 2021-04-22 7:20 p.m., Harry Wentland wrote:

[Why]
This hasn't been well tested and leads to complete system hangs on DCN1
based systems, possibly others.

The system hang can be reproduced by gesturing the video on the YouTube
Android app on ChromeOS into full screen.

[How]
Reject atomic commits with non-zero drm_plane_state.src_x or src_y values.

Signed-off-by: Harry Wentland 
Change-Id: I5e951f95fc87c86517b9ea6e094d73603184f00b


Drop the Change-ID on submit.


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 +++
  1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 4b3b9599aaf7..99fd555ebb91 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -2825,6 +2825,13 @@ static int fill_dc_scaling_info(const struct drm_plane_state *state,
	scaling_info->src_rect.x = state->src_x >> 16;
	scaling_info->src_rect.y = state->src_y >> 16;
  
+	if (state->fb &&
+	    state->fb->format->format == DRM_FORMAT_NV12 &&
+	    (scaling_info->src_rect.x != 0 ||
+	     scaling_info->src_rect.y != 0))
+		return -EINVAL;
+


Would like to see a comment in the source code similar to what's 
explained in the commit message so if people skim through the code they 
understand some of the background on this.


I'd also like to know if this is generic across all DCN or specific to 
DCN1. For now at least we can disable it generically I think.


With the commit message updated and source commented this patch is:

Reviewed-by: Nicholas Kazlauskas 


scaling_info->src_rect.width = state->src_w >> 16;
if (scaling_info->src_rect.width == 0)
return -EINVAL;



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: add DMUB outbox event IRQ source define/complete/debug flag

2021-04-06 Thread Kazlauskas, Nicholas

On 2021-04-06 10:22 a.m., Shih, Jude wrote:

[AMD Official Use Only - Internal Distribution Only]

Hi Nicholas,

Does this completion need to be on the amdgpu device itself?

I would prefer if we keep this as needed within DM itself if possible.

=> do you mean move it to amdgpu_display_manager in amdgpu_dm.h as global 
variable?


There's a amdgpu_display_manager per device, but yes, it should be 
contained in there if possible since it's display code.




My problem with still leaving this as DC_ENABLE_DMUB_AUX is we shouldn't 
require the user to have to flip this on by default later. I think I'd prefer 
this still as a DISABLE option if we want to leave it for users to debug any 
potential issues.
=> do you mean DC_ENABLE_DMUB_AUX = 0x10 => DC_DISABLE_DMUB_AUX = 0x10
and amdgpu_dc_debug_mask = 0x10 as default to turn it off?


Don't modify the default debug mask and leave it alone. We can still 
have DC_DISABLE_DMUB_AUX = 0x10 as a user debug option if they have 
firmware that supports this.


Flag or not, we need a mechanism for the driver to query the firmware 
about whether it supports this in the first place. It's not sufficient 
to control this feature with just a debug flag; there needs to be a 
capability check with the firmware regardless.


Older firmware won't implement this check and therefore won't enable the 
feature.


Newer (or test) firmware could enable this feature and report back to 
driver that it does support it.


Driver can then decide to enable this based on 
dc->debug.dmub_aux_support or something similar to that - it can be 
false for ASICs that we won't be supporting this on, but for ASICs that 
we do we can leave it off by default until it's production ready.


For developer testing we can hardcode the flag = true, I think the DC 
debug flags here in AMDGPU base driver only have value if we want 
general end user or validation to use this to debug potential issues.


Regards,
Nicholas Kazlauskas



Thanks,

Best Regards,

Jude

-Original Message-
From: Kazlauskas, Nicholas 
Sent: Tuesday, April 6, 2021 10:04 PM
To: Shih, Jude ; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Lin, Wayne ; 
Hung, Cruise 
Subject: Re: [PATCH] drm/amdgpu: add DMUB outbox event IRQ source 
define/complete/debug flag

On 2021-04-06 9:40 a.m., Jude Shih wrote:

[Why & How]
We use the outbox interrupt that allows us to do AUX via DMUB.
Therefore, we need to add some IRQ source related definitions to the
header files. Also, I added a debug flag that allows us to turn it
on/off for testing purposes.

Signed-off-by: Jude Shih 
---
   drivers/gpu/drm/amd/amdgpu/amdgpu.h   | 2 ++
   drivers/gpu/drm/amd/include/amd_shared.h  | 3 ++-
   drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h | 2 ++
   3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 963ecfd84347..7e64fc5e0dcd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -923,6 +923,7 @@ struct amdgpu_device {
struct amdgpu_irq_src   pageflip_irq;
struct amdgpu_irq_src   hpd_irq;
struct amdgpu_irq_src   dmub_trace_irq;
+   struct amdgpu_irq_src   dmub_outbox_irq;
   
   	/* rings */

u64 fence_context;
@@ -1077,6 +1078,7 @@ struct amdgpu_device {
   
   	boolin_pci_err_recovery;

struct pci_saved_state  *pci_state;
+   struct completion dmub_aux_transfer_done;


Does this completion need to be on the amdgpu device itself?

I would prefer if we keep this as needed within DM itself if possible.


   };
   
   static inline struct amdgpu_device *drm_to_adev(struct drm_device *ddev)

diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index 43ed6291b2b8..097672cc78a1 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -227,7 +227,8 @@ enum DC_DEBUG_MASK {
DC_DISABLE_PIPE_SPLIT = 0x1,
DC_DISABLE_STUTTER = 0x2,
DC_DISABLE_DSC = 0x4,
-   DC_DISABLE_CLOCK_GATING = 0x8
+   DC_DISABLE_CLOCK_GATING = 0x8,
+   DC_ENABLE_DMUB_AUX = 0x10,


My problem with still leaving this as DC_ENABLE_DMUB_AUX is we shouldn't 
require the user to have to flip this on by default later. I think I'd prefer 
this still as a DISABLE option if we want to leave it for users to debug any 
potential issues.

If there's no value in having end users debug issues by setting this bit then we 
should keep it as a dc->debug default in DCN resource.

Regards,
Nicholas Kazlauskas


   };
   
   enum amd_dpm_forced_level;

diff --git a/drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h
b/drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h
i

Re: [PATCH] drm/amdgpu: add DMUB outbox event IRQ source define/complete/debug flag

2021-04-06 Thread Kazlauskas, Nicholas

On 2021-04-06 9:40 a.m., Jude Shih wrote:

[Why & How]
We use the outbox interrupt that allows us to do AUX via DMUB.
Therefore, we need to add some IRQ source related definitions to the
header files. Also, I added a debug flag that allows us to turn it
on/off for testing purposes.

Signed-off-by: Jude Shih 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h   | 2 ++
  drivers/gpu/drm/amd/include/amd_shared.h  | 3 ++-
  drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h | 2 ++
  3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 963ecfd84347..7e64fc5e0dcd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -923,6 +923,7 @@ struct amdgpu_device {
struct amdgpu_irq_src   pageflip_irq;
struct amdgpu_irq_src   hpd_irq;
struct amdgpu_irq_src   dmub_trace_irq;
+   struct amdgpu_irq_src   dmub_outbox_irq;
  
  	/* rings */

u64 fence_context;
@@ -1077,6 +1078,7 @@ struct amdgpu_device {
  
  	boolin_pci_err_recovery;

struct pci_saved_state  *pci_state;
+   struct completion dmub_aux_transfer_done;


Does this completion need to be on the amdgpu device itself?

I would prefer if we keep this as needed within DM itself if possible.


  };
  
  static inline struct amdgpu_device *drm_to_adev(struct drm_device *ddev)

diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index 43ed6291b2b8..097672cc78a1 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -227,7 +227,8 @@ enum DC_DEBUG_MASK {
DC_DISABLE_PIPE_SPLIT = 0x1,
DC_DISABLE_STUTTER = 0x2,
DC_DISABLE_DSC = 0x4,
-   DC_DISABLE_CLOCK_GATING = 0x8
+   DC_DISABLE_CLOCK_GATING = 0x8,
+   DC_ENABLE_DMUB_AUX = 0x10,


My problem with still leaving this as DC_ENABLE_DMUB_AUX is we shouldn't 
require the user to have to flip this on by default later. I think I'd 
prefer this still as a DISABLE option if we want to leave it for users 
to debug any potential issues.


If there's no value in having end users debug issues by setting this bit 
then we should keep it as a dc->debug default in DCN resource.


Regards,
Nicholas Kazlauskas


  };
  
  enum amd_dpm_forced_level;

diff --git a/drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h b/drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h
index e2bffcae273a..754170a86ea4 100644
--- a/drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h
+++ b/drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h
@@ -1132,5 +1132,7 @@
  
  #define DCN_1_0__SRCID__DMCUB_OUTBOX_HIGH_PRIORITY_READY_INT   0x68

  #define DCN_1_0__CTXID__DMCUB_OUTBOX_HIGH_PRIORITY_READY_INT   6
+#define DCN_1_0__SRCID__DMCUB_OUTBOX_LOW_PRIORITY_READY_INT   0x68 // DMCUB_IHC_outbox1_ready_int IHC_DMCUB_outbox1_ready_int_ack DMCUB_OUTBOX_LOW_PRIORITY_READY_INTERRUPT DISP_INTERRUPT_STATUS_CONTINUE24 Level/Pulse
+#define DCN_1_0__CTXID__DMCUB_OUTBOX_LOW_PRIORITY_READY_INT   8
  
  #endif // __IRQSRCS_DCN_1_0_H__




___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amdgpu: add DMUB outbox event IRQ source define/complete/debug flag

2021-04-06 Thread Kazlauskas, Nicholas

On 2021-03-31 11:21 p.m., Jude Shih wrote:

[Why & How]
We use the outbox interrupt that allows us to do AUX via DMUB.
Therefore, we need to add some IRQ source related definitions to the
header files. Also, I added a debug flag that allows us to turn it
on/off for testing purposes.


Missing your signed-off-by here, please recommit with

git commit --amend --sign


---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h   | 2 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   | 2 +-
  drivers/gpu/drm/amd/include/amd_shared.h  | 3 ++-
  drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h | 2 ++
  4 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 963ecfd84347..479c8a28a3a9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -923,6 +923,7 @@ struct amdgpu_device {
struct amdgpu_irq_src   pageflip_irq;
struct amdgpu_irq_src   hpd_irq;
struct amdgpu_irq_src   dmub_trace_irq;
+   struct amdgpu_irq_src   outbox_irq;
  
  	/* rings */

u64 fence_context;
@@ -1077,6 +1078,7 @@ struct amdgpu_device {
  
  	boolin_pci_err_recovery;

struct pci_saved_state  *pci_state;
+   struct completion dmub_aux_transfer_done;
  };
  
  static inline struct amdgpu_device *drm_to_adev(struct drm_device *ddev)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 6a06234dbcad..0b88e13f5a7b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -159,7 +159,7 @@ int amdgpu_smu_pptable_id = -1;
   * PSR (bit 3) disabled by default
   */
  uint amdgpu_dc_feature_mask = 2;
-uint amdgpu_dc_debug_mask;
+uint amdgpu_dc_debug_mask = 0x10;


If this is intended to be enabled by default then it shouldn't be a 
debug flag. Please either leave the default alone or fully switch over 
to DMCUB AUX support for ASICs that support it.


If you don't already have a check from driver to DMCUB firmware to 
ensure that the firmware itself supports it you'd need that as well - 
users can be running older firmware (like the firmware that originally 
released with DCN2.1/DCN3.0 support) and that wouldn't support this feature.


My recommendation:
- Add a command to check for DMUB AUX capability or add bits to the 
metadata to indicate that the firmware does support it
- Assume that the DMUB AUX implementation is solid and a complete 
replacement for existing AUX support on firmware that does support it
- Add a debug flag like DC_DISABLE_DMUB_AUX for optionally debugging 
issues if they arise



  int amdgpu_async_gfx_ring = 1;
  int amdgpu_mcbp;
  int amdgpu_discovery = -1;
diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index 43ed6291b2b8..097672cc78a1 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -227,7 +227,8 @@ enum DC_DEBUG_MASK {
DC_DISABLE_PIPE_SPLIT = 0x1,
DC_DISABLE_STUTTER = 0x2,
DC_DISABLE_DSC = 0x4,
-   DC_DISABLE_CLOCK_GATING = 0x8
+   DC_DISABLE_CLOCK_GATING = 0x8,
+   DC_ENABLE_DMUB_AUX = 0x10,
  };
  
  enum amd_dpm_forced_level;

diff --git a/drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h b/drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h
index e2bffcae273a..754170a86ea4 100644
--- a/drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h
+++ b/drivers/gpu/drm/amd/include/ivsrcid/dcn/irqsrcs_dcn_1_0.h
@@ -1132,5 +1132,7 @@
  
  #define DCN_1_0__SRCID__DMCUB_OUTBOX_HIGH_PRIORITY_READY_INT   0x68

  #define DCN_1_0__CTXID__DMCUB_OUTBOX_HIGH_PRIORITY_READY_INT   6
+#define DCN_1_0__SRCID__DMCUB_OUTBOX_LOW_PRIORITY_READY_INT   0x68 // DMCUB_IHC_outbox1_ready_int IHC_DMCUB_outbox1_ready_int_ack DMCUB_OUTBOX_LOW_PRIORITY_READY_INTERRUPT DISP_INTERRUPT_STATUS_CONTINUE24 Level/Pulse
+#define DCN_1_0__CTXID__DMCUB_OUTBOX_LOW_PRIORITY_READY_INT   8


This technically isn't on DCN_1_0 but I guess we've been using this file 
for all the DCNs.


I do wish this was labeled DCN_2_1 instead to make it more explicit but 
I guess this is fine for now.


Regards,
Nicholas Kazlauskas

  
  #endif // __IRQSRCS_DCN_1_0_H__




___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 3/6] amd/display: fail on cursor plane without an underlying plane

2021-03-08 Thread Kazlauskas, Nicholas

On 2021-03-08 3:18 p.m., Daniel Vetter wrote:

On Fri, Mar 5, 2021 at 10:24 AM Michel Dänzer  wrote:


On 2021-03-04 7:26 p.m., Kazlauskas, Nicholas wrote:

On 2021-03-04 10:35 a.m., Michel Dänzer wrote:

On 2021-03-04 4:09 p.m., Kazlauskas, Nicholas wrote:

On 2021-03-04 4:05 a.m., Michel Dänzer wrote:

On 2021-03-03 8:17 p.m., Daniel Vetter wrote:

On Wed, Mar 3, 2021 at 5:53 PM Michel Dänzer 
wrote:


Moreover, in the same scenario plus an overlay plane enabled with a
HW cursor compatible format, if the FB bound to the overlay plane is
destroyed, the common DRM code will attempt to disable the overlay
plane, but dm_check_crtc_cursor will reject that now. I can't
remember
exactly what the result is, but AFAIR it's not pretty.


CRTC gets disabled instead. That's why we went with the "always
require primary plane" hack. I think the only solution here would be
to enable the primary plane (but not in userspace-visible state, so
this needs to be done in the dc derived state objects only) that scans
out black any time we're in such a situation with cursor with no
planes.


This is about a scenario described by Nicholas earlier:

Cursor Plane - ARGB

Overlay Plane - ARGB Desktop/UI with cutout for video

Primary Plane - NV12 video

And destroying the FB bound to the overlay plane. The fallback to
disable
the CRTC in atomic_remove_fb only kicks in for the primary plane, so it
wouldn't in this case and would fail. Which would in turn trigger the
WARN in drm_framebuffer_remove (and leave the overlay plane scanning
out
from freed memory?).


The cleanest solution might be not to allow any formats incompatible
with
the HW cursor for the primary plane.


Legacy X userspace doesn't use overlays but Chrome OS does.

This would regress ChromeOS MPO support because it relies on the NV12
video plane being on the bottom.


Could it use the NV12 overlay plane below the ARGB primary plane?


Plane ordering was previously undefined in DRM so we have userspace that
assumes overlays are on top.


They can still be by default?


Today we have the z-order property in DRM that defines where it is in
the stack, so technically it could but we'd also be regressing existing
behavior on Chrome OS today.


That's unfortunate, but might be the least bad choice overall.

BTW, doesn't Chrome OS try to disable the ARGB overlay plane while there are no 
UI elements to display? If it does, this series might break it anyway (if the 
cursor plane can be enabled while the ARGB overlay plane is off).



When ChromeOS disables MPO it doesn't do it plane by plane, it does it
in one go from NV12+ARGB -> ARGB8.


Even so, we cannot expect all user space to do the same, and we cannot
allow any user space to trigger a WARN and scanout from freed memory.


The WARN doesn't trigger because there's still a reference on the FB -


The WARN triggers if atomic_remove_fb returns an error, which is the case if it can't 
disable an overlay plane. I actually hit this with IGT tests while working on 
b836a274b797 "drm/amdgpu/dc: Require primary plane to be enabled whenever the CRTC 
is" (I initially tried allowing the cursor plane to be enabled together with an 
overlay plane while the primary plane is off).


the reference held by DRM since it's still scanning out the overlay.
Userspace can't reclaim this memory with another buffer allocation
because it's still in use.


Good point, so at least there's no scanout of freed memory. Even so, the 
overlay plane continues displaying contents which user space apparently doesn't 
want to be displayed anymore.


Hm I do wonder how much we need to care for this. If you use planes,
you'd better use TEST_ONLY in atomic to its full extent (including the
cursor, if that's a real plane, which it is for every driver except
msm/mdp4). If userspace screws this up and, worse, shuts off planes with
an RMFB, I think it's not entirely unreasonable to claim that it
should keep the pieces.

So maybe we should refine the WARN_ON to not trigger if other planes
than crtc->primary and crtc->cursor are enabled right now?


It's a little odd that a disable commit can fail, but I don't think
there's anything in DRM core that specifies that this can't happen for
planes.


I'd say it's more than just a little odd. :) Being unable to disable an overlay 
plane seems very surprising, and could make it tricky for user space (not to 
mention core DRM code like atomic_remove_fb) to find a solution.

I'd suggest the amdgpu DM code should rather virtualize the KMS API planes 
somehow such that an overlay plane can always be disabled. While this might 
incur some short-term pain, it will likely save more pain overall in the long 
term.


Yeah I think this amd dc cursor problem is the first case where
removing a plane can make things worse.

Since the hw is what it is, ca

Re: [PATCH 2/4] drm/amdgpu/display: don't assert in set backlight function

2021-03-04 Thread Kazlauskas, Nicholas

On 2021-03-04 1:41 p.m., Alex Deucher wrote:

On Thu, Mar 4, 2021 at 1:33 PM Kazlauskas, Nicholas
 wrote:


On 2021-03-04 12:41 p.m., Alex Deucher wrote:

It just spams the logs.

Signed-off-by: Alex Deucher 


This series in general looks reasonable to me:
Reviewed-by: Nicholas Kazlauskas 


---
   drivers/gpu/drm/amd/display/dc/core/dc_link.c | 1 -
   1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index fa9a62dc174b..974b70f21837 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -2614,7 +2614,6 @@ bool dc_link_set_backlight_level(const struct dc_link *link,
   if (pipe_ctx->plane_state == NULL)
   frame_ramp = 0;
   } else {
- ASSERT(false);


Just a comment on what's actually going on here with this warning:

Technically we can't apply the backlight level without a plane_state in
the context but the panel is also off anyway.

I think there might be a bug here when the panel turns on and we're not
applying values set when it was off but I don't think anyone's reported
this as an issue.

I'm not entirely sure if the value gets cached and reapplied with the
correct value later, but it's something to keep in mind.


It doesn't.  I have additional patches here to cache it:
https://cgit.freedesktop.org/~agd5f/linux/log/?h=backlight_wip

Alex


That's aligned with my expectations then.

I can take a peek at the branch and help review the patches.

Regards,
Nicholas Kazlauskas





Regards,
Nicholas Kazlauskas


   return false;
   }




___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 2/4] drm/amdgpu/display: don't assert in set backlight function

2021-03-04 Thread Kazlauskas, Nicholas

On 2021-03-04 12:41 p.m., Alex Deucher wrote:

It just spams the logs.

Signed-off-by: Alex Deucher 


This series in general looks reasonable to me:
Reviewed-by: Nicholas Kazlauskas 


---
  drivers/gpu/drm/amd/display/dc/core/dc_link.c | 1 -
  1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index fa9a62dc174b..974b70f21837 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -2614,7 +2614,6 @@ bool dc_link_set_backlight_level(const struct dc_link *link,
if (pipe_ctx->plane_state == NULL)
frame_ramp = 0;
} else {
-   ASSERT(false);


Just a comment on what's actually going on here with this warning:

Technically we can't apply the backlight level without a plane_state in 
the context but the panel is also off anyway.


I think there might be a bug here when the panel turns on and we're not 
applying values set when it was off but I don't think anyone's reported 
this as an issue.


I'm not entirely sure if the value gets cached and reapplied with the 
correct value later, but it's something to keep in mind.


Regards,
Nicholas Kazlauskas


return false;
}
  



___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 3/6] amd/display: fail on cursor plane without an underlying plane

2021-03-04 Thread Kazlauskas, Nicholas

On 2021-03-04 10:35 a.m., Michel Dänzer wrote:

On 2021-03-04 4:09 p.m., Kazlauskas, Nicholas wrote:

On 2021-03-04 4:05 a.m., Michel Dänzer wrote:

On 2021-03-03 8:17 p.m., Daniel Vetter wrote:

On Wed, Mar 3, 2021 at 5:53 PM Michel Dänzer  wrote:


Moreover, in the same scenario plus an overlay plane enabled with a
HW cursor compatible format, if the FB bound to the overlay plane is
destroyed, the common DRM code will attempt to disable the overlay
plane, but dm_check_crtc_cursor will reject that now. I can't remember
exactly what the result is, but AFAIR it's not pretty.


CRTC gets disabled instead. That's why we went with the "always
require primary plane" hack. I think the only solution here would be
to enable the primary plane (but not in userspace-visible state, so
this needs to be done in the dc derived state objects only) that scans
out black any time we're in such a situation with cursor with no
planes.


This is about a scenario described by Nicholas earlier:

Cursor Plane - ARGB

Overlay Plane - ARGB Desktop/UI with cutout for video

Primary Plane - NV12 video

And destroying the FB bound to the overlay plane. The fallback to disable
the CRTC in atomic_remove_fb only kicks in for the primary plane, so it
wouldn't in this case and would fail. Which would in turn trigger the
WARN in drm_framebuffer_remove (and leave the overlay plane scanning out
from freed memory?).


The cleanest solution might be not to allow any formats incompatible with
the HW cursor for the primary plane.


Legacy X userspace doesn't use overlays but Chrome OS does.

This would regress ChromeOS MPO support because it relies on the NV12
video plane being on the bottom.


Could it use the NV12 overlay plane below the ARGB primary plane?


Plane ordering was previously undefined in DRM so we have userspace that 
assumes overlays are on top.


Today we have the z-order property in DRM that defines where it is in 
the stack, so technically it could but we'd also be regressing existing 
behavior on Chrome OS today.






When ChromeOS disables MPO it doesn't do it plane by plane, it does it
in one go from NV12+ARGB -> ARGB8.


Even so, we cannot expect all user space to do the same, and we cannot
allow any user space to trigger a WARN and scanout from freed memory.




The WARN doesn't trigger because there's still a reference on the FB - 
the reference held by DRM since it's still scanning out the overlay. 
Userspace can't reclaim this memory with another buffer allocation 
because it's still in use.


It's a little odd that a disable commit can fail, but I don't think 
there's anything in DRM core that specifies that this can't happen for 
planes.


Regards,
Nicholas Kazlauskas
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 3/6] amd/display: fail on cursor plane without an underlying plane

2021-03-04 Thread Kazlauskas, Nicholas

On 2021-03-04 4:05 a.m., Michel Dänzer wrote:

On 2021-03-03 8:17 p.m., Daniel Vetter wrote:

On Wed, Mar 3, 2021 at 5:53 PM Michel Dänzer  wrote:


On 2021-02-19 7:58 p.m., Simon Ser wrote:

Make sure there's an underlying pipe that can be used for the
cursor.

Signed-off-by: Simon Ser 
Cc: Alex Deucher 
Cc: Harry Wentland 
Cc: Nicholas Kazlauskas 
---
   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 ++-
   1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c

index acbe1537e7cf..a5d6010405bf 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -9226,9 +9226,14 @@ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
 	}
 
 	new_cursor_state = drm_atomic_get_new_plane_state(state, crtc->cursor);
-	if (!new_cursor_state || !new_underlying_state || !new_cursor_state->fb)
+	if (!new_cursor_state || !new_cursor_state->fb)
 		return 0;
 
+	if (!new_underlying_state || !new_underlying_state->fb) {
+		drm_dbg_atomic(crtc->dev, "Cursor plane can't be enabled without underlying plane\n");
+		return -EINVAL;
+	}
+
   cursor_scale_w = new_cursor_state->crtc_w * 1000 /
    (new_cursor_state->src_w >> 16);
   cursor_scale_h = new_cursor_state->crtc_h * 1000 /



Houston, we have a problem I'm afraid. Adding Daniel.


If the primary plane is enabled with a format which isn't compatible 
with the HW cursor,
and no overlay plane is enabled, the same issues as described in 
b836a274b797
"drm/amdgpu/dc: Require primary plane to be enabled whenever the CRTC 
is" can again occur:



* The legacy cursor ioctl fails with EINVAL for a non-0 cursor FB ID
  (which enables the cursor plane).

* If the cursor plane is enabled (e.g. using the legacy cursor ioctl
  during DPMS off), changing the legacy DPMS property value from off to
  on fails with EINVAL.


atomic_check should still be run when the crtc is off, so the legacy
cursor ioctl should fail when dpms off in this case already.


Good point. This could already be problematic though. E.g. mutter treats
EINVAL from the cursor ioctl as the driver not supporting HW cursors at
all, so it falls back to SW cursor and never tries to use the HW cursor
again. (I don't think mutter could hit this particular case with an
incompatible format though, but there might be other similar user space)



Moreover, in the same scenario plus an overlay plane enabled with a
HW cursor compatible format, if the FB bound to the overlay plane is
destroyed, the common DRM code will attempt to disable the overlay
plane, but dm_check_crtc_cursor will reject that now. I can't remember
exactly what the result is, but AFAIR it's not pretty.


CRTC gets disabled instead. That's why we went with the "always
require primary plane" hack. I think the only solution here would be
to enable the primary plane (but not in userspace-visible state, so
this needs to be done in the dc derived state objects only) that scans
out black any time we're in such a situation with cursor with no
planes.


This is about a scenario described by Nicholas earlier:

Cursor Plane - ARGB

Overlay Plane - ARGB Desktop/UI with cutout for video

Primary Plane - NV12 video

And destroying the FB bound to the overlay plane. The fallback to disable
the CRTC in atomic_remove_fb only kicks in for the primary plane, so it
wouldn't kick in here and the commit would fail. That would in turn
trigger the WARN in drm_framebuffer_remove (and leave the overlay plane
scanning out from freed memory?).


The cleanest solution might be not to allow any formats incompatible with
the HW cursor for the primary plane.




Legacy X userspace doesn't use overlays but Chrome OS does.

This would regress ChromeOS MPO support because it relies on the NV12 
video plane being on the bottom.


When ChromeOS disables MPO it doesn't do it plane by plane; it does it 
in one go from NV12+ARGB -> ARGB8.


Regards,
Nicholas Kazlauskas


Re: [PATCH v6 3/3] drm/amd/display: Skip modeset for front porch change

2021-02-25 Thread Kazlauskas, Nicholas

On 2021-02-12 8:08 p.m., Aurabindo Pillai wrote:

[Why]
A seamless transition between modes can be performed if the new incoming
mode has the same timing parameters as the optimized mode on a display with a
variable vtotal min/max.

Smooth video playback usecases can be enabled with this seamless transition by
switching to a new mode which has a refresh rate matching the video.

[How]
Skip full modeset if userspace requested a compatible freesync mode which only
differs in the front porch timing from the current mode.

Signed-off-by: Aurabindo Pillai 
Acked-by: Christian König 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 220 ++
  1 file changed, 180 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index c472905c7d72..628fec855e14 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -212,6 +212,9 @@ static bool amdgpu_dm_psr_disable_all(struct 
amdgpu_display_manager *dm);
  static const struct drm_format_info *
  amd_get_format_info(const struct drm_mode_fb_cmd2 *cmd);
  
+static bool

+is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state,
+struct drm_crtc_state *new_crtc_state);
  /*
   * dm_vblank_get_counter
   *
@@ -335,6 +338,17 @@ static inline bool amdgpu_dm_vrr_active(struct 
dm_crtc_state *dm_state)
   dm_state->freesync_config.state == VRR_STATE_ACTIVE_FIXED;
  }
  
+static inline bool is_dc_timing_adjust_needed(struct dm_crtc_state *old_state,

+ struct dm_crtc_state *new_state)
+{
+   if (new_state->freesync_config.state ==  VRR_STATE_ACTIVE_FIXED)
+   return true;
+   else if (amdgpu_dm_vrr_active(old_state) != 
amdgpu_dm_vrr_active(new_state))
+   return true;
+   else
+   return false;
+}
+
  /**
   * dm_pflip_high_irq() - Handle pageflip interrupt
   * @interrupt_params: ignored
@@ -5008,19 +5022,16 @@ static void 
fill_stream_properties_from_drm_display_mode(
timing_out->hdmi_vic = hv_frame.vic;
}
  
-	timing_out->h_addressable = mode_in->crtc_hdisplay;

-   timing_out->h_total = mode_in->crtc_htotal;
-   timing_out->h_sync_width =
-   mode_in->crtc_hsync_end - mode_in->crtc_hsync_start;
-   timing_out->h_front_porch =
-   mode_in->crtc_hsync_start - mode_in->crtc_hdisplay;
-   timing_out->v_total = mode_in->crtc_vtotal;
-   timing_out->v_addressable = mode_in->crtc_vdisplay;
-   timing_out->v_front_porch =
-   mode_in->crtc_vsync_start - mode_in->crtc_vdisplay;
-   timing_out->v_sync_width =
-   mode_in->crtc_vsync_end - mode_in->crtc_vsync_start;
-   timing_out->pix_clk_100hz = mode_in->crtc_clock * 10;
+   timing_out->h_addressable = mode_in->hdisplay;
+   timing_out->h_total = mode_in->htotal;
+   timing_out->h_sync_width = mode_in->hsync_end - mode_in->hsync_start;
+   timing_out->h_front_porch = mode_in->hsync_start - mode_in->hdisplay;
+   timing_out->v_total = mode_in->vtotal;
+   timing_out->v_addressable = mode_in->vdisplay;
+   timing_out->v_front_porch = mode_in->vsync_start - mode_in->vdisplay;
+   timing_out->v_sync_width = mode_in->vsync_end - mode_in->vsync_start;
+   timing_out->pix_clk_100hz = mode_in->clock * 10;
+
timing_out->aspect_ratio = get_aspect_ratio(mode_in);
  
  	stream->output_color_space = get_output_color_space(timing_out);

@@ -5240,6 +5251,33 @@ get_highest_refresh_rate_mode(struct amdgpu_dm_connector 
*aconnector,
return m_pref;
  }
  
+static bool is_freesync_video_mode(struct drm_display_mode *mode,

+  struct amdgpu_dm_connector *aconnector)
+{
+   struct drm_display_mode *high_mode;
+   int timing_diff;
+
+   high_mode = get_highest_refresh_rate_mode(aconnector, false);
+   if (!high_mode || !mode)
+   return false;
+
+   timing_diff = high_mode->vtotal - mode->vtotal;
+
+   if (high_mode->clock == 0 || high_mode->clock != mode->clock ||
+   high_mode->hdisplay != mode->hdisplay ||
+   high_mode->vdisplay != mode->vdisplay ||
+   high_mode->hsync_start != mode->hsync_start ||
+   high_mode->hsync_end != mode->hsync_end ||
+   high_mode->htotal != mode->htotal ||
+   high_mode->hskew != mode->hskew ||
+   high_mode->vscan != mode->vscan ||
+   high_mode->vsync_start - mode->vsync_start != timing_diff ||
+   high_mode->vsync_end - mode->vsync_end != timing_diff)
+   return false;
+   else
+   return true;
+}
+
  static struct dc_stream_state *
  create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
   const struct drm_display_mode *drm_mode,
@@ -5253,8 +5291,10 @@

Re: [PATCH] drm/amdgpu/display: fix compilation when CONFIG_DRM_AMD_DC_DCN is not set

2021-02-23 Thread Kazlauskas, Nicholas

On 2021-02-23 10:22 a.m., Alex Deucher wrote:

Missing some CONFIG_DRM_AMD_DC_DCN ifdefs.

Fixes: 9d99a805a9a0 ("drm/amd/display: Fix system hang after multiple hotplugs")
Signed-off-by: Alex Deucher 
Cc: Stephen Rothwell 
Cc: Qingqing Zhuo 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 7a393eeae4b1..22443e696567 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5457,12 +5457,14 @@ static inline int dm_set_vblank(struct drm_crtc *crtc, 
bool enable)
if (amdgpu_in_reset(adev))
return 0;
  
+#if defined(CONFIG_DRM_AMD_DC_DCN)

spin_lock_irqsave(&dm->vblank_lock, flags);
dm->vblank_workqueue->dm = dm;
dm->vblank_workqueue->otg_inst = acrtc->otg_inst;
dm->vblank_workqueue->enable = enable;
spin_unlock_irqrestore(&dm->vblank_lock, flags);
schedule_work(&dm->vblank_workqueue->mall_work);
+#endif
  
  	return 0;

  }





Re: [PATCH] drm/amdgpu: stream's id should reduced after stream destruct

2021-02-22 Thread Kazlauskas, Nicholas

On 2021-02-20 1:30 a.m., ZhiJie.Zhang wrote:

Signed-off-by: ZhiJie.Zhang 
---
  drivers/gpu/drm/amd/display/dc/core/dc_stream.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c 
b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
index c103f858375d..dc7b7e57a86c 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
@@ -137,6 +137,8 @@ static void dc_stream_destruct(struct dc_stream_state 
*stream)
dc_transfer_func_release(stream->out_transfer_func);
stream->out_transfer_func = NULL;
}
+
+   stream->ctx->dc_stream_id_count--;


This is supposed to be a unique identifier so we shouldn't be reusing 
any old ID when creating a new stream.
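A tiny standalone model of why decrementing the counter breaks uniqueness (the struct and helper names are made up for illustration, not the actual dc_stream code): a monotonically increasing counter guarantees every stream a fresh ID, while giving the value back on destruction lets a new stream collide with one that is still alive.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for dc's per-context stream ID counter. */
struct sim_ctx {
	uint32_t id_count;
};

/* Each new stream takes the current counter value as its unique ID. */
static uint32_t stream_create(struct sim_ctx *ctx)
{
	return ctx->id_count++;
}

/* The change proposed by the patch: hand the ID "back" on destruction. */
static void stream_destroy_buggy(struct sim_ctx *ctx)
{
	ctx->id_count--;
}
```

Create streams a and b, destroy a, then create c: with the decrement in place, c receives b's ID while b is still alive, which is exactly the reuse the review objects to.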


Regards,
Nicholas Kazlauskas


  }
  
  void dc_stream_retain(struct dc_stream_state *stream)






Re: [PATCH 2/2] amd/display: add cursor check for YUV primary plane

2021-02-19 Thread Kazlauskas, Nicholas

On 2021-02-19 12:29 p.m., Simon Ser wrote:

On Friday, February 19th, 2021 at 6:22 PM, Kazlauskas, Nicholas 
 wrote:


We can support cursor plane, but only if we have an overlay plane
enabled that's using XRGB/ARGB.

This is what we do on Chrome OS for video playback:

Cursor Plane - ARGB
Overlay Plane - ARGB Desktop/UI with cutout for video
Primary Plane - NV12 video

So this new check would break this usecase. It needs to check that there
isn't an XRGB/ARGB plane at the top of the blending chain instead.


Oh, interesting. I'll adjust the patch.

Related: how does this affect scaling? Right now there is a check that makes
sure the cursor plane scaling matches the primary plane's. Should we instead
check that the cursor plane scaling matches the top-most XRGB/ARGB plane's?



Can't really do scaling on the cursor plane itself. It scales with the 
underlying pipe driving it so it'll only be correct if it matches that.


Primary plane isn't the correct check here since we always use the 
topmost pipe in the blending chain to draw the cursor - in the example I 
gave it'd have to match the overlay plane's scaling, not the primary 
plane's.
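Roughly, the constraint is the one sketched below. This is an illustrative sketch with hypothetical helper names, reusing the permille math from the quoted dm_check_crtc_cursor check (`crtc_w * 1000 / (src_w >> 16)`), but comparing the cursor against the topmost pipe rather than the primary plane:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch, not the real dm_check_crtc_cursor: DRM plane
 * src_w is 16.16 fixed point, and scale is computed in permille as
 * crtc_w * 1000 / (src_w >> 16). The cursor is drawn by the topmost
 * pipe in the blending chain, so its scale must match that plane's. */
static uint32_t plane_scale_permille(uint32_t crtc_w, uint32_t src_w_16_16)
{
	return crtc_w * 1000 / (src_w_16_16 >> 16);
}

static int cursor_scale_matches_pipe(uint32_t cursor_crtc_w, uint32_t cursor_src_w,
				     uint32_t pipe_crtc_w, uint32_t pipe_src_w)
{
	return plane_scale_permille(cursor_crtc_w, cursor_src_w) ==
	       plane_scale_permille(pipe_crtc_w, pipe_src_w);
}
```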


Regards,
Nicholas Kazlauskas



Re: [PATCH 2/2] amd/display: add cursor check for YUV primary plane

2021-02-19 Thread Kazlauskas, Nicholas

On 2021-02-19 11:19 a.m., Simon Ser wrote:

The cursor plane can't be displayed if the primary plane isn't
using an RGB format. Reject such atomic commits so that user-space
can have a fallback instead of an invisible cursor.

In theory we could support YUV if the cursor is also YUV, but at the
moment only ARGB cursors are supported.


Patch 1 looks good, but this patch needs to be adjusted.

We can support cursor plane, but only if we have an overlay plane 
enabled that's using XRGB/ARGB.


This is what we do on Chrome OS for video playback:

Cursor Plane - ARGB
Overlay Plane - ARGB Desktop/UI with cutout for video
Primary Plane - NV12 video

So this new check would break this usecase. It needs to check that there 
isn't an XRGB/ARGB plane at the top of the blending chain instead.
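The adjusted check could be sketched along these lines; the struct and helper below are hypothetical simplifications (not the real drm_plane_state or amdgpu code), only to illustrate "reject the cursor unless the topmost enabled plane in the blending chain is XRGB/ARGB":

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical simplified plane state; zpos grows toward the top of
 * the blending chain. */
struct sim_plane_state {
	bool enabled;
	bool is_yuv;	/* e.g. NV12 */
	int zpos;
};

/* The cursor is blended by the topmost pipe, so it is only visible if
 * the topmost enabled plane uses an XRGB/ARGB (non-YUV) format. */
static bool cursor_has_rgb_top_plane(const struct sim_plane_state *planes,
				     size_t n)
{
	const struct sim_plane_state *top = NULL;

	for (size_t i = 0; i < n; i++) {
		if (!planes[i].enabled)
			continue;
		if (!top || planes[i].zpos > top->zpos)
			top = &planes[i];
	}

	return top && !top->is_yuv;
}
```

In the Chrome OS stack described here (NV12 primary below an ARGB overlay) this passes, while a lone NV12 primary fails, which is the case the patch needs to reject.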


Regards,
Nicholas Kazlauskas



Signed-off-by: Simon Ser 
Cc: Alex Deucher 
Cc: Harry Wentland 
Cc: Nicholas Kazlauskas 
---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 +++
  1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 4548b779bbce..f659e6cfdfcf 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -9239,6 +9239,13 @@ static int dm_check_crtc_cursor(struct drm_atomic_state 
*state,
return -EINVAL;
}
  
+	/* In theory we could probably support YUV cursors when the primary
+	 * plane uses a YUV format, but there's no use-case for it yet. */
+	if (new_primary_state->fb && new_primary_state->fb->format->is_yuv) {
+		drm_dbg_atomic(crtc->dev, "Cursor plane can't be used with YUV primary plane\n");
+		return -EINVAL;
+	}
+
return 0;
  }
  





Re: [PATCH] Revert "drm/amd/display: reuse current context instead of recreating one"

2021-02-10 Thread Kazlauskas, Nicholas

On 2021-02-10 9:25 a.m., Alex Deucher wrote:

This reverts commit 8866a67ab86cc0812e65c04f1ef02bcc41e24d68.

This breaks hotplug of HDMI on some systems, resulting in
a blank screen.

Bug: https://bugzilla.kernel.org/show_bug.cgi?id=211649
Signed-off-by: Alex Deucher 

---


Hotplug is still working from my side with this patch.

Same with our weekly testing reports that Daniel's been putting out.

This is probably something environment or configuration specific but I 
don't see any logs from the reporter. I'll follow up on the ticket but 
I'd like to understand the problem in more detail before reverting this.


Regards,
Nicholas Kazlauskas


  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 23 +---
  drivers/gpu/drm/amd/display/dc/core/dc.c  | 27 ++-
  drivers/gpu/drm/amd/display/dc/dc_stream.h|  3 ++-
  3 files changed, 23 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 961abf1cf040..e438baa1adc1 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1934,7 +1934,7 @@ static void dm_gpureset_commit_state(struct dc_state 
*dc_state,
dc_commit_updates_for_stream(
dm->dc, bundle->surface_updates,
dc_state->stream_status->plane_count,
-   dc_state->streams[k], &bundle->stream_update);
+   dc_state->streams[k], &bundle->stream_update, dc_state);
}
  
  cleanup:

@@ -1965,7 +1965,8 @@ static void dm_set_dpms_off(struct dc_link *link)
  
  	stream_update.stream = stream_state;

dc_commit_updates_for_stream(stream_state->ctx->dc, NULL, 0,
-stream_state, &stream_update);
+stream_state, &stream_update,
+stream_state->ctx->dc->current_state);
mutex_unlock(&adev->dm.dc_lock);
  }
  
@@ -7548,7 +7549,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,

struct drm_crtc *pcrtc,
bool wait_for_vblank)
  {
-   int i;
+   uint32_t i;
uint64_t timestamp_ns;
struct drm_plane *plane;
struct drm_plane_state *old_plane_state, *new_plane_state;
@@ -7589,7 +7590,7 @@ static void amdgpu_dm_commit_planes(struct 
drm_atomic_state *state,
amdgpu_dm_commit_cursors(state);
  
  	/* update planes when needed */

-   for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, 
new_plane_state, i) {
+   for_each_oldnew_plane_in_state(state, plane, old_plane_state, 
new_plane_state, i) {
struct drm_crtc *crtc = new_plane_state->crtc;
struct drm_crtc_state *new_crtc_state;
struct drm_framebuffer *fb = new_plane_state->fb;
@@ -7812,7 +7813,8 @@ static void amdgpu_dm_commit_planes(struct 
drm_atomic_state *state,
 bundle->surface_updates,
 planes_count,
 acrtc_state->stream,
-&bundle->stream_update);
+&bundle->stream_update,
+dc_state);
  
  		/**

 * Enable or disable the interrupts on the backend.
@@ -8148,13 +8150,13 @@ static void amdgpu_dm_atomic_commit_tail(struct 
drm_atomic_state *state)
struct dm_connector_state *dm_new_con_state = 
to_dm_connector_state(new_con_state);
struct dm_connector_state *dm_old_con_state = 
to_dm_connector_state(old_con_state);
struct amdgpu_crtc *acrtc = 
to_amdgpu_crtc(dm_new_con_state->base.crtc);
-   struct dc_surface_update surface_updates[MAX_SURFACES];
+   struct dc_surface_update dummy_updates[MAX_SURFACES];
struct dc_stream_update stream_update;
struct dc_info_packet hdr_packet;
struct dc_stream_status *status = NULL;
bool abm_changed, hdr_changed, scaling_changed;
  
-		memset(&surface_updates, 0, sizeof(surface_updates));

+   memset(&dummy_updates, 0, sizeof(dummy_updates));
memset(&stream_update, 0, sizeof(stream_update));
  
  		if (acrtc) {

@@ -8211,15 +8213,16 @@ static void amdgpu_dm_atomic_commit_tail(struct 
drm_atomic_state *state)
 * To fix this, DC should permit updating only stream 
properties.
 */
for (j = 0; j < status->plane_count; j++)
-   surface_updates[j].surface = status->plane_states[j];
+   dummy_updates[j].surface = status->plane_states[0];
  
  
  		mutex_lock

Re: [PATCH] Revert "drm/amd/display: move edp sink present detection to hw init"

2021-02-08 Thread Kazlauskas, Nicholas

On 2021-02-08 2:25 p.m., Anson Jacob wrote:

This reverts commit de6571ecbb88643fa4bb4172e65c12795a2f3124.

Patch causes regression in resume time.


Shouldn't affect any system that has an eDP connector on the board since 
it's expected to be present in end user configuration.


If we want to replicate the same behavior we had before for eDP 
connector + eDP disconnected then we'd want to make sure we're skipping 
the registration for the connector in DM.


Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dc/core/dc.c | 40 +++-
  drivers/gpu/drm/amd/display/dc/dc_link.h |  2 --
  2 files changed, 18 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index c9aede2f783d..8d5378f53243 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -205,9 +205,27 @@ static bool create_links(
link = link_create(&link_init_params);
  
  		if (link) {

+   bool should_destory_link = false;
+
+   if (link->connector_signal == SIGNAL_TYPE_EDP) {
+   if (dc->config.edp_not_connected) {
+   if 
(!IS_DIAG_DC(dc->ctx->dce_environment))
+   should_destory_link = true;
+   } else {
+   enum dc_connection_type type;
+   dc_link_detect_sink(link, &type);
+   if (type == dc_connection_none)
+   should_destory_link = true;
+   }
+   }
+
+   if (dc->config.force_enum_edp || !should_destory_link) {
dc->links[dc->link_count] = link;
link->dc = dc;
++dc->link_count;
+   } else {
+   link_destroy(&link);
+   }
}
}
  
@@ -998,30 +1016,8 @@ struct dc *dc_create(const struct dc_init_data *init_params)

return NULL;
  }
  
-static void detect_edp_presence(struct dc *dc)

-{
-   struct dc_link *edp_link = get_edp_link(dc);
-   bool edp_sink_present = true;
-
-   if (!edp_link)
-   return;
-
-   if (dc->config.edp_not_connected) {
-   edp_sink_present = false;
-   } else {
-   enum dc_connection_type type;
-   dc_link_detect_sink(edp_link, &type);
-   if (type == dc_connection_none)
-   edp_sink_present = false;
-   }
-
-   edp_link->edp_sink_present = edp_sink_present;
-}
-
  void dc_hardware_init(struct dc *dc)
  {
-
-   detect_edp_presence(dc);
if (dc->ctx->dce_environment != DCE_ENV_VIRTUAL_HW)
dc->hwss.init_hw(dc);
  }
diff --git a/drivers/gpu/drm/amd/display/dc/dc_link.h 
b/drivers/gpu/drm/amd/display/dc/dc_link.h
index e189f16bc026..d5d8f0ad9233 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_link.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_link.h
@@ -103,8 +103,6 @@ struct dc_link {
bool lttpr_non_transparent_mode;
bool is_internal_display;
  
-	bool edp_sink_present;

-
/* caps is the same as reported_link_cap. link_traing use
 * reported_link_cap. Will clean up.  TODO
 */






Re: [PATCH 3/3] drm/amd/display: Skip modeset for front porch change

2021-02-08 Thread Kazlauskas, Nicholas

On 2021-01-24 11:00 p.m., Aurabindo Pillai wrote:



On 2021-01-21 2:05 p.m., Kazlauskas, Nicholas wrote:

On 2021-01-19 10:50 a.m., Aurabindo Pillai wrote:

[Why]
A seamless transition between modes can be performed if the new incoming
mode has the same timing parameters as the optimized mode on a 
display with a

variable vtotal min/max.

Smooth video playback usecases can be enabled with this seamless 
transition by

switching to a new mode which has a refresh rate matching the video.

[How]
Skip full modeset if userspace requested a compatible freesync mode 
which only

differs in the front porch timing from the current mode.

Signed-off-by: Aurabindo Pillai 
Acked-by: Christian König 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 233 +++---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |   1 +
  2 files changed, 198 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c

index aaef2fb528fd..d66494cdd8c8 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -213,6 +213,9 @@ static bool amdgpu_dm_psr_disable_all(struct 
amdgpu_display_manager *dm);

  static const struct drm_format_info *
  amd_get_format_info(const struct drm_mode_fb_cmd2 *cmd);
+static bool
+is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state,
+ struct drm_crtc_state *new_crtc_state);
  /*
   * dm_vblank_get_counter
   *
@@ -4940,7 +4943,8 @@ static void 
fill_stream_properties_from_drm_display_mode(

  const struct drm_connector *connector,
  const struct drm_connector_state *connector_state,
  const struct dc_stream_state *old_stream,
-    int requested_bpc)
+    int requested_bpc,
+    bool is_in_modeset)
  {
  struct dc_crtc_timing *timing_out = &stream->timing;
  const struct drm_display_info *info = &connector->display_info;
@@ -4995,19 +4999,28 @@ static void 
fill_stream_properties_from_drm_display_mode(

  timing_out->hdmi_vic = hv_frame.vic;
  }
-    timing_out->h_addressable = mode_in->crtc_hdisplay;
-    timing_out->h_total = mode_in->crtc_htotal;
-    timing_out->h_sync_width =
-    mode_in->crtc_hsync_end - mode_in->crtc_hsync_start;
-    timing_out->h_front_porch =
-    mode_in->crtc_hsync_start - mode_in->crtc_hdisplay;
-    timing_out->v_total = mode_in->crtc_vtotal;
-    timing_out->v_addressable = mode_in->crtc_vdisplay;
-    timing_out->v_front_porch =
-    mode_in->crtc_vsync_start - mode_in->crtc_vdisplay;
-    timing_out->v_sync_width =
-    mode_in->crtc_vsync_end - mode_in->crtc_vsync_start;
-    timing_out->pix_clk_100hz = mode_in->crtc_clock * 10;
+    if (is_in_modeset) {
+    timing_out->h_addressable = mode_in->hdisplay;
+    timing_out->h_total = mode_in->htotal;
+    timing_out->h_sync_width = mode_in->hsync_end - 
mode_in->hsync_start;
+    timing_out->h_front_porch = mode_in->hsync_start - 
mode_in->hdisplay;

+    timing_out->v_total = mode_in->vtotal;
+    timing_out->v_addressable = mode_in->vdisplay;
+    timing_out->v_front_porch = mode_in->vsync_start - 
mode_in->vdisplay;
+    timing_out->v_sync_width = mode_in->vsync_end - 
mode_in->vsync_start;

+    timing_out->pix_clk_100hz = mode_in->clock * 10;
+    } else {
+    timing_out->h_addressable = mode_in->crtc_hdisplay;
+    timing_out->h_total = mode_in->crtc_htotal;
+    timing_out->h_sync_width = mode_in->crtc_hsync_end - 
mode_in->crtc_hsync_start;
+    timing_out->h_front_porch = mode_in->crtc_hsync_start - 
mode_in->crtc_hdisplay;

+    timing_out->v_total = mode_in->crtc_vtotal;
+    timing_out->v_addressable = mode_in->crtc_vdisplay;
+    timing_out->v_front_porch = mode_in->crtc_vsync_start - 
mode_in->crtc_vdisplay;
+    timing_out->v_sync_width = mode_in->crtc_vsync_end - 
mode_in->crtc_vsync_start;

+    timing_out->pix_clk_100hz = mode_in->crtc_clock * 10;
+    }
+


Not sure if I commented on this last time but I don't really 
understand what this is_in_modeset logic is supposed to be doing here.


This is so because create_stream_for_link() that ends up calling this 
function has two callers, one which is for stream validation in which 
the created stream is immediately discarded. The other is during 
modeset. Depending on these two cases, we want to copy the right timing 
parameters. With this method, major refactor wasn't necessary with the 
upper layers.


I don't understand why the timing parameters would change between what 
we validated and what we're planning on applying to the hardware. I 
think we should be validating the same thing in

Re: [PATCH 2/2] drm/amd/display: Fix HDMI deep color output for DCE 6-11.

2021-01-25 Thread Kazlauskas, Nicholas

On 2021-01-25 12:57 p.m., Alex Deucher wrote:

On Thu, Jan 21, 2021 at 1:17 AM Mario Kleiner
 wrote:


This fixes corrupted display output in HDMI deep color
10/12 bpc mode at least as observed on AMD Mullins, DCE-8.3.

It will hopefully also provide fixes for other DCE's up to
DCE-11, assuming those will need similar fixes, but i could
not test that for HDMI due to lack of suitable hw, so viewer
discretion is advised.

dce110_stream_encoder_hdmi_set_stream_attribute() is used for
HDMI setup on all DCE's and is missing color_depth assignment.

dce110_program_pix_clk() is used for pixel clock setup on HDMI
for DCE 6-11, and is missing color_depth assignment.

Additionally some of the underlying Atombios specific encoder
and pixelclock setup functions are missing code which is in
the classic amdgpu kms modesetting path and the in the radeon
kms driver for DCE6/DCE8.

encoder_control_digx_v3() - Was missing setup code wrt. amdgpu
and radeon kms classic drivers. Added here, but untested due to
lack of suitable test hw.

encoder_control_digx_v4() - Added missing setup code.
Successfully tested on AMD mullins / DCE-8.3 with HDMI deep color
output at 10 bpc and 12 bpc.

Note that encoder_control_digx_v5() has proper setup code in place
and is used, e.g., by DCE-11.2, but this code wasn't used for deep
color setup due to the missing cntl.color_depth setup in the calling
function for HDMI.

set_pixel_clock_v5() - Missing setup code wrt. classic amdgpu/radeon
kms. Added here, but untested due to lack of hw.

set_pixel_clock_v6() - Missing setup code added. Successfully tested
on AMD mullins DCE-8.3. This fixes corrupted display output at HDMI
deep color output with 10 bpc or 12 bpc.

Fixes: 4562236b3bc0 ("drm/amd/dc: Add dc display driver (v2)")

Signed-off-by: Mario Kleiner 
Cc: Harry Wentland 


These make sense. I've applied the series.  I'll let the display guys
gauge the other points in your cover letter.

Alex


I don't have any concerns with this patch.

Even though it's already applied feel free to have my:

Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas





---
  .../drm/amd/display/dc/bios/command_table.c   | 61 +++
  .../drm/amd/display/dc/dce/dce_clock_source.c | 14 +
  .../amd/display/dc/dce/dce_stream_encoder.c   |  1 +
  3 files changed, 76 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c 
b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
index 070459e3e407..afc10b954ffa 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
@@ -245,6 +245,23 @@ static enum bp_result encoder_control_digx_v3(
 cntl->enable_dp_audio);
 params.ucLaneNum = (uint8_t)(cntl->lanes_number);

+   switch (cntl->color_depth) {
+   case COLOR_DEPTH_888:
+   params.ucBitPerColor = PANEL_8BIT_PER_COLOR;
+   break;
+   case COLOR_DEPTH_101010:
+   params.ucBitPerColor = PANEL_10BIT_PER_COLOR;
+   break;
+   case COLOR_DEPTH_121212:
+   params.ucBitPerColor = PANEL_12BIT_PER_COLOR;
+   break;
+   case COLOR_DEPTH_161616:
+   params.ucBitPerColor = PANEL_16BIT_PER_COLOR;
+   break;
+   default:
+   break;
+   }
+
 if (EXEC_BIOS_CMD_TABLE(DIGxEncoderControl, params))
 result = BP_RESULT_OK;

@@ -274,6 +291,23 @@ static enum bp_result encoder_control_digx_v4(
 cntl->enable_dp_audio));
 params.ucLaneNum = (uint8_t)(cntl->lanes_number);

+   switch (cntl->color_depth) {
+   case COLOR_DEPTH_888:
+   params.ucBitPerColor = PANEL_8BIT_PER_COLOR;
+   break;
+   case COLOR_DEPTH_101010:
+   params.ucBitPerColor = PANEL_10BIT_PER_COLOR;
+   break;
+   case COLOR_DEPTH_121212:
+   params.ucBitPerColor = PANEL_12BIT_PER_COLOR;
+   break;
+   case COLOR_DEPTH_161616:
+   params.ucBitPerColor = PANEL_16BIT_PER_COLOR;
+   break;
+   default:
+   break;
+   }
+
 if (EXEC_BIOS_CMD_TABLE(DIGxEncoderControl, params))
 result = BP_RESULT_OK;

@@ -1057,6 +1091,19 @@ static enum bp_result set_pixel_clock_v5(
  * driver choose program it itself, i.e. here we program it
  * to 888 by default.
  */
+   if (bp_params->signal_type == SIGNAL_TYPE_HDMI_TYPE_A)
+   switch (bp_params->color_depth) {
+   case TRANSMITTER_COLOR_DEPTH_30:
+   /* yes this is correct, the atom define is 
wrong */
+   clk.sPCLKInput.ucMiscInfo |= 
PIXEL_CLOCK_V5_MISC_HDMI_32BPP;
+   break;
+   case TRANSMITTER_COLOR_DEPTH_36:
+

Re: [PATCH 3/3] drm/amd/display: Skip modeset for front porch change

2021-01-21 Thread Kazlauskas, Nicholas

On 2021-01-19 10:50 a.m., Aurabindo Pillai wrote:

[Why]
A seamless transition between modes can be performed if the new incoming
mode has the same timing parameters as the optimized mode on a display with a
variable vtotal min/max.

Smooth video playback usecases can be enabled with this seamless transition by
switching to a new mode which has a refresh rate matching the video.

[How]
Skip full modeset if userspace requested a compatible freesync mode which only
differs in the front porch timing from the current mode.

Signed-off-by: Aurabindo Pillai 
Acked-by: Christian König 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 233 +++---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |   1 +
  2 files changed, 198 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index aaef2fb528fd..d66494cdd8c8 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -213,6 +213,9 @@ static bool amdgpu_dm_psr_disable_all(struct amdgpu_display_manager *dm);
  static const struct drm_format_info *
  amd_get_format_info(const struct drm_mode_fb_cmd2 *cmd);
  
+static bool
+is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state,
+				 struct drm_crtc_state *new_crtc_state);
  /*
   * dm_vblank_get_counter
   *
@@ -4940,7 +4943,8 @@ static void fill_stream_properties_from_drm_display_mode(
const struct drm_connector *connector,
const struct drm_connector_state *connector_state,
const struct dc_stream_state *old_stream,
-   int requested_bpc)
+   int requested_bpc,
+   bool is_in_modeset)
  {
struct dc_crtc_timing *timing_out = &stream->timing;
const struct drm_display_info *info = &connector->display_info;
@@ -4995,19 +4999,28 @@ static void fill_stream_properties_from_drm_display_mode(
timing_out->hdmi_vic = hv_frame.vic;
}
  
-	timing_out->h_addressable = mode_in->crtc_hdisplay;
-	timing_out->h_total = mode_in->crtc_htotal;
-   timing_out->h_sync_width =
-   mode_in->crtc_hsync_end - mode_in->crtc_hsync_start;
-   timing_out->h_front_porch =
-   mode_in->crtc_hsync_start - mode_in->crtc_hdisplay;
-   timing_out->v_total = mode_in->crtc_vtotal;
-   timing_out->v_addressable = mode_in->crtc_vdisplay;
-   timing_out->v_front_porch =
-   mode_in->crtc_vsync_start - mode_in->crtc_vdisplay;
-   timing_out->v_sync_width =
-   mode_in->crtc_vsync_end - mode_in->crtc_vsync_start;
-   timing_out->pix_clk_100hz = mode_in->crtc_clock * 10;
+	if (is_in_modeset) {
+		timing_out->h_addressable = mode_in->hdisplay;
+		timing_out->h_total = mode_in->htotal;
+		timing_out->h_sync_width = mode_in->hsync_end - mode_in->hsync_start;
+		timing_out->h_front_porch = mode_in->hsync_start - mode_in->hdisplay;
+		timing_out->v_total = mode_in->vtotal;
+		timing_out->v_addressable = mode_in->vdisplay;
+		timing_out->v_front_porch = mode_in->vsync_start - mode_in->vdisplay;
+		timing_out->v_sync_width = mode_in->vsync_end - mode_in->vsync_start;
+		timing_out->pix_clk_100hz = mode_in->clock * 10;
+	} else {
+		timing_out->h_addressable = mode_in->crtc_hdisplay;
+		timing_out->h_total = mode_in->crtc_htotal;
+		timing_out->h_sync_width = mode_in->crtc_hsync_end - mode_in->crtc_hsync_start;
+		timing_out->h_front_porch = mode_in->crtc_hsync_start - mode_in->crtc_hdisplay;
+		timing_out->v_total = mode_in->crtc_vtotal;
+		timing_out->v_addressable = mode_in->crtc_vdisplay;
+		timing_out->v_front_porch = mode_in->crtc_vsync_start - mode_in->crtc_vdisplay;
+		timing_out->v_sync_width = mode_in->crtc_vsync_end - mode_in->crtc_vsync_start;
+		timing_out->pix_clk_100hz = mode_in->crtc_clock * 10;
+	}
+


Not sure if I commented on this last time but I don't really understand 
what this is_in_modeset logic is supposed to be doing here.


We should be modifying crtc_vsync_* for the generated modes, no? Not 
just the vsync_* parameters.



timing_out->aspect_ratio = get_aspect_ratio(mode_in);
  
  	stream->output_color_space = get_output_color_space(timing_out);

@@ -5227,6 +5240,33 @@ get_highest_refresh_rate_mode(struct amdgpu_dm_connector *aconnector,
return m_pref;
  }
  
+static bool is_freesync_video_mode(struct drm_display_mode *mode,
+				   struct amdgpu_dm_connector *aconnector)
+{
+   struct drm_display_mode *high_mode;
+   int timing_diff;
+
+   high_mode = get_highest_refresh_rate_mode(aconnector, false);
+   if (!high_mode || !mode)
+   return 

Re: [PATCH] drm/amd/display: Implement functions to let DC allocate GPU memory

2021-01-20 Thread Kazlauskas, Nicholas

On 2021-01-20 5:26 a.m., Christian König wrote:

Am 19.01.21 um 21:40 schrieb Bhawanpreet Lakha:

From: Harry Wentland 

[Why]
DC needs to communicate with PM FW through GPU memory. In order
to do so we need to be able to allocate memory from within DC.

[How]
Call amdgpu_bo_create_kernel to allocate GPU memory and use a
list in amdgpu_display_manager to track our allocations so we
can clean them up later.


Well that looks like classic mid-layering to me with a horrible 
implementation of the free function.


Christian.


FWIW this is only really used during device creation and destruction so 
the overhead of the free function isn't a considerable concern.


Does AMDGPU always need to know the GPU address for the allocation to 
free or should we work on fixing the callsites for this to pass it down?


We generally separate the CPU/GPU pointer but maybe it'd be best to have 
some sort of allocation object here that has both for DC's purposes.


Regards,
Nicholas Kazlauskas





Signed-off-by: Harry Wentland 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  2 +
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |  9 +
  .../amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 40 +--
  3 files changed, 48 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c

index e490fc2486f7..83ec92a69cba 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1017,6 +1017,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)

  init_data.soc_bounding_box = adev->dm.soc_bounding_box;
+    INIT_LIST_HEAD(&adev->dm.da_list);
+
  /* Display Core create. */
  adev->dm.dc = dc_create(&init_data);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h

index 38bc0f88b29c..49137924a855 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -130,6 +130,13 @@ struct amdgpu_dm_backlight_caps {
  bool aux_support;
  };
+struct dal_allocation {
+    struct list_head list;
+    struct amdgpu_bo *bo;
+    void *cpu_ptr;
+    u64 gpu_addr;
+};
+
  /**
   * struct amdgpu_display_manager - Central amdgpu display manager device

   *
@@ -350,6 +357,8 @@ struct amdgpu_display_manager {
   */
  struct amdgpu_encoder mst_encoders[AMDGPU_DM_MAX_CRTC];
  bool force_timing_sync;
+
+    struct list_head da_list;
  };
  enum dsc_clock_force_state {
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c

index 3244a6ea7a65..5dc426e6e785 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
@@ -652,8 +652,31 @@ void *dm_helpers_allocate_gpu_mem(
  size_t size,
  long long *addr)
  {
-    // TODO
-    return NULL;
+    struct amdgpu_device *adev = ctx->driver_context;
+    struct dal_allocation *da;
+    u32 domain = (type == DC_MEM_ALLOC_TYPE_GART) ?
+    AMDGPU_GEM_DOMAIN_GTT : AMDGPU_GEM_DOMAIN_VRAM;
+    int ret;
+
+    da = kzalloc(sizeof(struct dal_allocation), GFP_KERNEL);
+    if (!da)
+    return NULL;
+
+    ret = amdgpu_bo_create_kernel(adev, size, PAGE_SIZE,
+  domain, &da->bo,
+  &da->gpu_addr, &da->cpu_ptr);
+
+    *addr = da->gpu_addr;
+
+    if (ret) {
+    kfree(da);
+    return NULL;
+    }
+
+    /* add da to list in dm */
+    list_add(&da->list, &adev->dm.da_list);
+
+    return da->cpu_ptr;
  }
  void dm_helpers_free_gpu_mem(
@@ -661,5 +684,16 @@ void dm_helpers_free_gpu_mem(
  enum dc_gpu_mem_alloc_type type,
  void *pvMem)
  {
-    // TODO
+    struct amdgpu_device *adev = ctx->driver_context;
+    struct dal_allocation *da;
+
+    /* walk the da list in DM */
+    list_for_each_entry(da, &adev->dm.da_list, list) {
+    if (pvMem == da->cpu_ptr) {
+    amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
+    list_del(&da->list);
+    kfree(da);
+    break;
+    }
+    }
  }


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH] drm/amd/display: Implement functions to let DC allocate GPU memory

2021-01-19 Thread Kazlauskas, Nicholas

On 2021-01-19 3:40 p.m., Bhawanpreet Lakha wrote:

From: Harry Wentland 

[Why]
DC needs to communicate with PM FW through GPU memory. In order
to do so we need to be able to allocate memory from within DC.

[How]
Call amdgpu_bo_create_kernel to allocate GPU memory and use a
list in amdgpu_display_manager to track our allocations so we
can clean them up later.

Signed-off-by: Harry Wentland 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  2 +
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |  9 +
  .../amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 40 +--
  3 files changed, 48 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index e490fc2486f7..83ec92a69cba 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1017,6 +1017,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
  
  	init_data.soc_bounding_box = adev->dm.soc_bounding_box;
  
+	INIT_LIST_HEAD(&adev->dm.da_list);

+
/* Display Core create. */
adev->dm.dc = dc_create(&init_data);
  
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h

index 38bc0f88b29c..49137924a855 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -130,6 +130,13 @@ struct amdgpu_dm_backlight_caps {
bool aux_support;
  };
  
+struct dal_allocation {

+   struct list_head list;
+   struct amdgpu_bo *bo;
+   void *cpu_ptr;
+   u64 gpu_addr;
+};
+
  /**
   * struct amdgpu_display_manager - Central amdgpu display manager device
   *
@@ -350,6 +357,8 @@ struct amdgpu_display_manager {
 */
struct amdgpu_encoder mst_encoders[AMDGPU_DM_MAX_CRTC];
bool force_timing_sync;
+
+   struct list_head da_list;
  };
  
  enum dsc_clock_force_state {

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
index 3244a6ea7a65..5dc426e6e785 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
@@ -652,8 +652,31 @@ void *dm_helpers_allocate_gpu_mem(
size_t size,
long long *addr)
  {
-   // TODO
-   return NULL;
+   struct amdgpu_device *adev = ctx->driver_context;
+   struct dal_allocation *da;
+   u32 domain = (type == DC_MEM_ALLOC_TYPE_GART) ?
+   AMDGPU_GEM_DOMAIN_GTT : AMDGPU_GEM_DOMAIN_VRAM;
+   int ret;
+
+   da = kzalloc(sizeof(struct dal_allocation), GFP_KERNEL);
+   if (!da)
+   return NULL;
+
+   ret = amdgpu_bo_create_kernel(adev, size, PAGE_SIZE,
+ domain, &da->bo,
+ &da->gpu_addr, &da->cpu_ptr);
+
+   *addr = da->gpu_addr;
+
+   if (ret) {
+   kfree(da);
+   return NULL;
+   }
+
+   /* add da to list in dm */
+   list_add(&da->list, &adev->dm.da_list);
+
+   return da->cpu_ptr;
  }
  
  void dm_helpers_free_gpu_mem(

@@ -661,5 +684,16 @@ void dm_helpers_free_gpu_mem(
enum dc_gpu_mem_alloc_type type,
void *pvMem)
  {
-   // TODO
+   struct amdgpu_device *adev = ctx->driver_context;
+   struct dal_allocation *da;
+
+   /* walk the da list in DM */
+   list_for_each_entry(da, &adev->dm.da_list, list) {
+   if (pvMem == da->cpu_ptr) {
+			amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
+   list_del(&da->list);
+   kfree(da);
+   break;
+   }
+   }
  }





Re: [PATCH 3/3] drm/amd/display: Update dcn30_apply_idle_power_optimizations() code

2021-01-19 Thread Kazlauskas, Nicholas

On 2021-01-19 3:38 p.m., Bhawanpreet Lakha wrote:

Update the function for idle optimizations
-remove hardcoded size
-enable no memory-request case
-add cursor copy
-update mall eligibility check case

Signed-off-by: Bhawanpreet Lakha 
Signed-off-by: Joshua Aberback 


Series is:

Reviewed-by: Nicholas Kazlauskas 

Though you might want to update patch 1's commit message to explain a 
little more detail about watermark set D.


Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dc/dc.h   |   2 +
  .../drm/amd/display/dc/dcn30/dcn30_hwseq.c| 157 +-
  .../amd/display/dc/dcn302/dcn302_resource.c   |   4 +-
  .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |   5 +
  4 files changed, 129 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h 
b/drivers/gpu/drm/amd/display/dc/dc.h
index e21d4602e427..71d46ade24e5 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -502,6 +502,8 @@ struct dc_debug_options {
  #if defined(CONFIG_DRM_AMD_DC_DCN)
bool disable_idle_power_optimizations;
unsigned int mall_size_override;
+   unsigned int mall_additional_timer_percent;
+   bool mall_error_as_fatal;
  #endif
bool dmub_command_table; /* for testing only */
struct dc_bw_validation_profile bw_val_profile;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c 
b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
index 5c546b06f551..dff83c6a142a 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
@@ -710,8 +710,11 @@ void dcn30_program_dmdata_engine(struct pipe_ctx *pipe_ctx)
  bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
  {
union dmub_rb_cmd cmd;
-   unsigned int surface_size, refresh_hz, denom;
uint32_t tmr_delay = 0, tmr_scale = 0;
+   struct dc_cursor_attributes cursor_attr;
+   bool cursor_cache_enable = false;
+   struct dc_stream_state *stream = NULL;
+   struct dc_plane_state *plane = NULL;
  
  	if (!dc->ctx->dmub_srv)

return false;
@@ -722,72 +725,150 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
  
 			/* First, check no-memory-requests case */
 			for (i = 0; i < dc->current_state->stream_count; i++) {
-				if (dc->current_state->stream_status[i]
-					.plane_count)
+				if (dc->current_state->stream_status[i].plane_count)
 					/* Fail eligibility on a visible stream */
 					break;
 			}
  
-			if (dc->current_state->stream_count == 1 // single display only
-				&& dc->current_state->stream_status[0].plane_count == 1 // single surface only
-				&& dc->current_state->stream_status[0].plane_states[0]->address.page_table_base.quad_part == 0 // no VM
-				// Only 8 and 16 bit formats
-				&& dc->current_state->stream_status[0].plane_states[0]->format <= SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F
-				&& dc->current_state->stream_status[0].plane_states[0]->format >= SURFACE_PIXEL_FORMAT_GRPH_ARGB) {
-				surface_size = dc->current_state->stream_status[0].plane_states[0]->plane_size.surface_pitch *
-					dc->current_state->stream_status[0].plane_states[0]->plane_size.surface_size.height *
-					(dc->current_state->stream_status[0].plane_states[0]->format >= SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616 ? 8 : 4);
-			} else {
-				// TODO: remove hard code size
-				surface_size = 128 * 1024 * 1024;
+			if (i == dc->current_state->stream_count) {
+				/* Enable no-memory-requests case */
+				memset(&cmd, 0, sizeof(cmd));
+				cmd.mall.header.type = DMUB_CMD__MALL;
+				cmd.mall.header.sub_type = DMUB_CMD__MALL_ACTION_NO_DF_REQ;
+				cmd.mall.header.payload_bytes = sizeof(cmd.mall) - sizeof(cmd.mall.header);
+
+				dc_dmub_srv_cmd_queue(dc->ctx->dmub_srv, &cmd);
+				dc_dmub_srv_cmd_execute(dc->ctx->dmub_srv);
+
+				return true;
 			}
  
-			// TODO: remove hard code size
-			if (surface_size < 128 * 1024 * 1024) {
-				refresh_hz = div_u64((unsigned long long) dc->current_state->streams[0]->timing.pix_clk_100hz *
- 

Re: [PATCH] drm/amd/display: Fix deadlock during gpu reset v3

2021-01-12 Thread Kazlauskas, Nicholas

On 2021-01-12 11:13 a.m., Bhawanpreet Lakha wrote:

[Why]
During idle optimizations we acquire the dc_lock. This lock is also
acquired during gpu_reset, so we end up hanging the system due to a
deadlock.

[How]
If we are in gpu reset:
  - disable idle optimizations and skip calls to the dc function

v2: skip idle optimizations calls
v3: add guard for DCN

Fixes: 06d5652541c3 ("drm/amd/display: enable idle optimizations for linux (MALL stutter)")
Signed-off-by: Bhawanpreet Lakha 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 10 ++
  1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index de71b6c21590..82ceb0a8ba29 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1816,6 +1816,11 @@ static int dm_suspend(void *handle)
  
  	if (amdgpu_in_reset(adev)) {

mutex_lock(&dm->dc_lock);
+
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+   dc_allow_idle_optimizations(adev->dm.dc, false);
+#endif
+
dm->cached_dc_state = dc_copy_state(dm->dc->current_state);
  
  		dm_gpureset_toggle_interrupts(adev, dm->cached_dc_state, false);

@@ -5556,6 +5561,10 @@ static inline int dm_set_vblank(struct drm_crtc *crtc, bool enable)
if (!dc_interrupt_set(adev->dm.dc, irq_source, enable))
return -EBUSY;
  
+#if defined(CONFIG_DRM_AMD_DC_DCN)

+   if (amdgpu_in_reset(adev))
+   return 0;
+
mutex_lock(&dm->dc_lock);
  
  	if (enable)

@@ -5572,6 +5581,7 @@ static inline int dm_set_vblank(struct drm_crtc *crtc, bool enable)
  
  	mutex_unlock(&dm->dc_lock);
  
+#endif

return 0;
  }
  





Re: [PATCH] drm/amd/display: Fix deadlock during gpu reset

2021-01-11 Thread Kazlauskas, Nicholas

On 2021-01-11 2:55 p.m., Bhawanpreet Lakha wrote:

[Why]
During idle optimizations we acquire the dc_lock. This lock is also
acquired during gpu_reset, so we end up hanging the system due to a
deadlock.

[How]
If we are in gpu reset, don't acquire the dc lock, as we already have it.


Are we sure this is the behavior we want?

I think if we are in GPU reset then we shouldn't be attempting to modify 
idle optimization state at all, ie. return early if amdgpu_in_reset.


The calls around the locks are working around bad policy.

Regards,
Nicholas Kazlauskas



Fixes: 06d5652541c3 ("drm/amd/display: enable idle optimizations for linux (MALL stutter)")
Signed-off-by: Bhawanpreet Lakha 
---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 6 --
  1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 99c7f9eb44aa..2170e1b2d32c 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5556,7 +5556,8 @@ static inline int dm_set_vblank(struct drm_crtc *crtc, bool enable)
if (!dc_interrupt_set(adev->dm.dc, irq_source, enable))
return -EBUSY;
  
-	mutex_lock(&dm->dc_lock);
+	if (!amdgpu_in_reset(adev))
+		mutex_lock(&dm->dc_lock);
  
  	if (enable)

dm->active_vblank_irq_count++;
@@ -5568,7 +5569,8 @@ static inline int dm_set_vblank(struct drm_crtc *crtc, bool enable)
  
  	DRM_DEBUG_DRIVER("Allow idle optimizations (MALL): %d\n", dm->active_vblank_irq_count == 0);
  
-	mutex_unlock(&dm->dc_lock);
+	if (!amdgpu_in_reset(adev))
+		mutex_unlock(&dm->dc_lock);
  
  	return 0;

  }





Re: [PATCH] drm/amdgpu/display: fix build with CONFIG_DRM_AMD_DC_DCN disabled

2021-01-08 Thread Kazlauskas, Nicholas

On 2021-01-08 11:33 a.m., Alex Deucher wrote:

dc_allow_idle_optimizations() needs to be protected by
CONFIG_DRM_AMD_DC_DCN.

Reported-by: Stephen Rothwell 
Signed-off-by: Alex Deucher 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 318eb12f8de7..2dc8493793e0 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5490,10 +5490,12 @@ static inline int dm_set_vblank(struct drm_crtc *crtc, bool enable)
else
dm->active_vblank_irq_count--;
  
+#if defined(CONFIG_DRM_AMD_DC_DCN)
 	dc_allow_idle_optimizations(
 		adev->dm.dc, dm->active_vblank_irq_count == 0 ? true : false);
  
  	DRM_DEBUG_DRIVER("Allow idle optimizations (MALL): %d\n", dm->active_vblank_irq_count == 0);

+#endif
  
  	mutex_unlock(&dm->dc_lock);
  





Re: [PATCH v3 3/3] drm/amd/display: Skip modeset for front porch change

2021-01-06 Thread Kazlauskas, Nicholas

On 2021-01-04 4:08 p.m., Aurabindo Pillai wrote:

[Why&How]
In order to enable freesync video mode, the driver adds extra
modes based on preferred modes for common freesync frame rates.
When committing these mode changes, a full modeset is not needed.
If the change is only in the front porch timing value, skip the full
modeset and continue using the same stream.

Signed-off-by: Aurabindo Pillai 
Acked-by: Christian König 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 219 +++---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |   1 +
  2 files changed, 188 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index aaef2fb528fd..315756207f0f 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -213,6 +213,9 @@ static bool amdgpu_dm_psr_disable_all(struct amdgpu_display_manager *dm);
  static const struct drm_format_info *
  amd_get_format_info(const struct drm_mode_fb_cmd2 *cmd);
  
+static bool
+is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state,
+				 struct drm_crtc_state *new_crtc_state);
  /*
   * dm_vblank_get_counter
   *
@@ -4940,7 +4943,8 @@ static void fill_stream_properties_from_drm_display_mode(
const struct drm_connector *connector,
const struct drm_connector_state *connector_state,
const struct dc_stream_state *old_stream,
-   int requested_bpc)
+   int requested_bpc,
+   bool is_in_modeset)
  {
struct dc_crtc_timing *timing_out = &stream->timing;
const struct drm_display_info *info = &connector->display_info;
@@ -4995,19 +4999,28 @@ static void fill_stream_properties_from_drm_display_mode(
timing_out->hdmi_vic = hv_frame.vic;
}
  
-	timing_out->h_addressable = mode_in->crtc_hdisplay;
-	timing_out->h_total = mode_in->crtc_htotal;
-   timing_out->h_sync_width =
-   mode_in->crtc_hsync_end - mode_in->crtc_hsync_start;
-   timing_out->h_front_porch =
-   mode_in->crtc_hsync_start - mode_in->crtc_hdisplay;
-   timing_out->v_total = mode_in->crtc_vtotal;
-   timing_out->v_addressable = mode_in->crtc_vdisplay;
-   timing_out->v_front_porch =
-   mode_in->crtc_vsync_start - mode_in->crtc_vdisplay;
-   timing_out->v_sync_width =
-   mode_in->crtc_vsync_end - mode_in->crtc_vsync_start;
-   timing_out->pix_clk_100hz = mode_in->crtc_clock * 10;
+	if (is_in_modeset) {
+		timing_out->h_addressable = mode_in->hdisplay;
+		timing_out->h_total = mode_in->htotal;
+		timing_out->h_sync_width = mode_in->hsync_end - mode_in->hsync_start;
+		timing_out->h_front_porch = mode_in->hsync_start - mode_in->hdisplay;
+		timing_out->v_total = mode_in->vtotal;
+		timing_out->v_addressable = mode_in->vdisplay;
+		timing_out->v_front_porch = mode_in->vsync_start - mode_in->vdisplay;
+		timing_out->v_sync_width = mode_in->vsync_end - mode_in->vsync_start;
+		timing_out->pix_clk_100hz = mode_in->clock * 10;
+	} else {
+		timing_out->h_addressable = mode_in->crtc_hdisplay;
+		timing_out->h_total = mode_in->crtc_htotal;
+		timing_out->h_sync_width = mode_in->crtc_hsync_end - mode_in->crtc_hsync_start;
+		timing_out->h_front_porch = mode_in->crtc_hsync_start - mode_in->crtc_hdisplay;
+		timing_out->v_total = mode_in->crtc_vtotal;
+		timing_out->v_addressable = mode_in->crtc_vdisplay;
+		timing_out->v_front_porch = mode_in->crtc_vsync_start - mode_in->crtc_vdisplay;
+		timing_out->v_sync_width = mode_in->crtc_vsync_end - mode_in->crtc_vsync_start;
+		timing_out->pix_clk_100hz = mode_in->crtc_clock * 10;
+	}
+
timing_out->aspect_ratio = get_aspect_ratio(mode_in);
  
  	stream->output_color_space = get_output_color_space(timing_out);

@@ -5227,6 +5240,33 @@ get_highest_refresh_rate_mode(struct amdgpu_dm_connector *aconnector,
return m_pref;
  }
  
+static bool is_freesync_video_mode(struct drm_display_mode *mode,
+				   struct amdgpu_dm_connector *aconnector)
+{
+   struct drm_display_mode *high_mode;
+   int timing_diff;
+
+   high_mode = get_highest_refresh_rate_mode(aconnector, false);
+   if (!high_mode || !mode)
+   return false;
+
+   timing_diff = high_mode->vtotal - mode->vtotal;
+
+   if (high_mode->clock == 0 || high_mode->clock != mode->clock ||
+   high_mode->hdisplay != mode->hdisplay ||
+   high_mode->vdisplay != mode->vdisplay ||
+   high_mode->hsync_start != mode->hsync_start ||
+   high_mode->hsync_end != mode->hsync_end ||
+   high_mode->htotal != mode->htotal ||
+

Re: [PATCH AUTOSEL 5.4 006/130] drm/amd/display: Do not silently accept DCC for multiplane formats.

2021-01-04 Thread Kazlauskas, Nicholas

On 2020-12-29 9:54 a.m., Deucher, Alexander wrote:

[AMD Public Use]


I don't know if these fixes related to modifiers make sense in the 
pre-modifier code base.  Bas, Nick?


Alex


Mesa should be the only userspace trying to make use of DCC and it 
doesn't do it for video formats. From the kernel side of things we've 
also never supported this and you'd get corruption on the screen if you 
tried.


It's a "fix" for both pre-modifiers and post-modifiers code.

Regards,
Nicholas Kazlauskas



*From:* amd-gfx  on behalf of 
Sasha Levin 

*Sent:* Tuesday, December 22, 2020 9:16 PM
*To:* linux-ker...@vger.kernel.org ; 
sta...@vger.kernel.org 
*Cc:* Sasha Levin ; dri-de...@lists.freedesktop.org 
; amd-gfx@lists.freedesktop.org 
; Bas Nieuwenhuizen 
; Deucher, Alexander 
; Kazlauskas, Nicholas 

*Subject:* [PATCH AUTOSEL 5.4 006/130] drm/amd/display: Do not silently 
accept DCC for multiplane formats.

From: Bas Nieuwenhuizen 

[ Upstream commit b35ce7b364ec80b54f48a8fdf9fb74667774d2da ]

Silently accepting it could result in corruption.

Signed-off-by: Bas Nieuwenhuizen 
Reviewed-by: Alex Deucher 
Reviewed-by: Nicholas Kazlauskas 
Signed-off-by: Alex Deucher 
Signed-off-by: Sasha Levin 
---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c

index d2dd387c95d86..ce70c42a2c3ec 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -2734,7 +2734,7 @@ fill_plane_dcc_attributes(struct amdgpu_device *adev,
  return 0;

  if (format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN)
-   return 0;
+   return -EINVAL;

  if (!dc->cap_funcs.get_dcc_compression_cap)
  return -EINVAL;
--
2.27.0



Re: [PATCH 2/2] drm/amd/display: Enable fp16 also on DCE-8/10/11.

2021-01-04 Thread Kazlauskas, Nicholas

On 2020-12-28 1:50 p.m., Mario Kleiner wrote:

The hw supports fp16. This is not only useful for HDR, but also
for standard dynamic range displays, because it allows for more
precise color reproduction, with about 11 - 12 bpc of linear
precision in the unorm range 0.0 - 1.0.

Working fp16 scanout+display (and HDR over HDMI) was
verified on a DCE-8 asic, so I assume that the more
recent DCE-10/11 will work equally well, now that
format-specific plane scaling constraints are properly
enforced, e.g., the inability of older hw like DCE-8
through DCE-11 to scale fp16.
Signed-off-by: Mario Kleiner 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c | 2 +-
  drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c | 2 +-
  drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c   | 2 +-
  3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c 
b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
index 8ab9d6c79808..f20ed05a5050 100644
--- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
@@ -385,7 +385,7 @@ static const struct dc_plane_cap plane_cap = {
.pixel_format_support = {
.argb = true,
.nv12 = false,
-   .fp16 = false
+   .fp16 = true
},
  
  	.max_upscale_factor = {

diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c 
b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
index 3f63822b8e28..af208f9bd03b 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
@@ -410,7 +410,7 @@ static const struct dc_plane_cap plane_cap = {
.pixel_format_support = {
.argb = true,
.nv12 = false,
-   .fp16 = false
+   .fp16 = true
},
  
  		.max_upscale_factor = {

diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c 
b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
index 390a0fa37239..26fe25caa281 100644
--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
@@ -402,7 +402,7 @@ static const struct dc_plane_cap plane_cap = {
.pixel_format_support = {
.argb = true,
.nv12 = false,
-   .fp16 = false
+   .fp16 = true
},
  
  	.max_upscale_factor = {






Re: [PATCH 1/2] drm/amd/display: Check plane scaling against format specific hw plane caps.

2021-01-04 Thread Kazlauskas, Nicholas

On 2020-12-28 1:50 p.m., Mario Kleiner wrote:

This takes hw constraints specific to pixel formats into account,
e.g., the inability of older hw to scale fp16 format framebuffers.

It should now be safe to enable fp16 formats also on DCE-8,
DCE-10, and DCE-11.0.

Signed-off-by: Mario Kleiner 


Reviewed-by: Nicholas Kazlauskas 

I think we're fine with equating all the planes as equal since we don't 
expose underlay support on DCE.


Regards,
Nicholas Kazlauskas


---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 81 +--
  1 file changed, 73 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 2c4dbdeec46a..a3745cd8a459 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3759,10 +3759,53 @@ static const struct drm_encoder_funcs 
amdgpu_dm_encoder_funcs = {
  };
  
  
+static void get_min_max_dc_plane_scaling(struct drm_device *dev,

+struct drm_framebuffer *fb,
+int *min_downscale, int *max_upscale)
+{
+   struct amdgpu_device *adev = drm_to_adev(dev);
+   struct dc *dc = adev->dm.dc;
+   /* Caps for all supported planes are the same on DCE and DCN 1 - 3 */
+   struct dc_plane_cap *plane_cap = &dc->caps.planes[0];
+
+   switch (fb->format->format) {
+   case DRM_FORMAT_P010:
+   case DRM_FORMAT_NV12:
+   case DRM_FORMAT_NV21:
+   *max_upscale = plane_cap->max_upscale_factor.nv12;
+   *min_downscale = plane_cap->max_downscale_factor.nv12;
+   break;
+
+   case DRM_FORMAT_XRGB16161616F:
+   case DRM_FORMAT_ARGB16161616F:
+   case DRM_FORMAT_XBGR16161616F:
+   case DRM_FORMAT_ABGR16161616F:
+   *max_upscale = plane_cap->max_upscale_factor.fp16;
+   *min_downscale = plane_cap->max_downscale_factor.fp16;
+   break;
+
+   default:
+   *max_upscale = plane_cap->max_upscale_factor.argb;
+   *min_downscale = plane_cap->max_downscale_factor.argb;
+   break;
+   }
+
+   /*
+* A factor of 1 in the plane_cap means to not allow scaling, ie. use a
+* scaling factor of 1.0 == 1000 units.
+*/
+   if (*max_upscale == 1)
+   *max_upscale = 1000;
+
+   if (*min_downscale == 1)
+   *min_downscale = 1000;
+}
+
+
  static int fill_dc_scaling_info(const struct drm_plane_state *state,
struct dc_scaling_info *scaling_info)
  {
-   int scale_w, scale_h;
+   int scale_w, scale_h, min_downscale, max_upscale;
  
  	memset(scaling_info, 0, sizeof(*scaling_info));
  
@@ -3794,17 +3837,25 @@ static int fill_dc_scaling_info(const struct drm_plane_state *state,

/* DRM doesn't specify clipping on destination output. */
scaling_info->clip_rect = scaling_info->dst_rect;
  
-	/* TODO: Validate scaling per-format with DC plane caps */

+   /* Validate scaling per-format with DC plane caps */
+   if (state->plane && state->plane->dev && state->fb) {
+   get_min_max_dc_plane_scaling(state->plane->dev, state->fb,
+&min_downscale, &max_upscale);
+   } else {
+   min_downscale = 250;
+   max_upscale = 16000;
+   }
+
scale_w = scaling_info->dst_rect.width * 1000 /
  scaling_info->src_rect.width;
  
-	if (scale_w < 250 || scale_w > 16000)

+   if (scale_w < min_downscale || scale_w > max_upscale)
return -EINVAL;
  
  	scale_h = scaling_info->dst_rect.height * 1000 /

  scaling_info->src_rect.height;
  
-	if (scale_h < 250 || scale_h > 16000)

+   if (scale_h < min_downscale || scale_h > max_upscale)
return -EINVAL;
  
  	/*

@@ -6424,12 +6475,26 @@ static void dm_plane_helper_cleanup_fb(struct drm_plane 
*plane,
  static int dm_plane_helper_check_state(struct drm_plane_state *state,
   struct drm_crtc_state *new_crtc_state)
  {
-   int max_downscale = 0;
-   int max_upscale = INT_MAX;
+   struct drm_framebuffer *fb = state->fb;
+   int min_downscale, max_upscale;
+   int min_scale = 0;
+   int max_scale = INT_MAX;
+
+   /* Plane enabled? Get min/max allowed scaling factors from plane caps. 
*/
+   if (fb && state->crtc) {
+   get_min_max_dc_plane_scaling(state->crtc->dev, fb,
+&min_downscale, &max_upscale);
+   /*
+* Convert to drm convention: 16.16 fixed point, instead of dc's
+* 1.0 == 1000. Also drm scaling is src/dst instead of dc's
+* dst/src, so min_scale = 1.0 / max_upscale, etc.
+*/
+   min_scale = (1000 << 16) / m

Re: [PATCH 3/3] drm/amd/display: Skip modeset for front porch change

2021-01-04 Thread Kazlauskas, Nicholas

On 2020-12-09 9:45 p.m., Aurabindo Pillai wrote:

[Why&How]
In order to enable freesync video mode, the driver adds extra
modes based on preferred modes for common freesync frame rates.
When committing these mode changes, a full modeset is not needed.
If the change is only in the front porch timing value, skip the full
modeset and continue using the same stream.

Signed-off-by: Aurabindo Pillai 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 169 --
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |   1 +
  2 files changed, 153 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index f699a3d41cad..c8c72887906a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -217,6 +217,9 @@ static bool amdgpu_dm_psr_disable_all(struct 
amdgpu_display_manager *dm);
  static const struct drm_format_info *
  amd_get_format_info(const struct drm_mode_fb_cmd2 *cmd);
  
+static bool

+is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state,
+struct drm_crtc_state *new_crtc_state);
  /*
   * dm_vblank_get_counter
   *
@@ -5096,8 +5099,11 @@ copy_crtc_timing_for_drm_display_mode(const struct 
drm_display_mode *src_mode,
  static void
  decide_crtc_timing_for_drm_display_mode(struct drm_display_mode *drm_mode,
const struct drm_display_mode 
*native_mode,
-   bool scale_enabled)
+   bool scale_enabled, bool fs_mode)
  {
+   if (fs_mode)
+   return;
+
if (scale_enabled) {
copy_crtc_timing_for_drm_display_mode(native_mode, drm_mode);
} else if (native_mode->clock == drm_mode->clock &&
@@ -5241,6 +5247,24 @@ get_highest_freesync_mode(struct amdgpu_dm_connector 
*aconnector,
return m_high;
  }
  
+static bool is_freesync_video_mode(struct drm_display_mode *mode,

+  struct amdgpu_dm_connector *aconnector)
+{
+   struct drm_display_mode *high_mode;
+
+   high_mode = get_highest_freesync_mode(aconnector, false);
+   if (!high_mode)
+   return false;
+
+   if (high_mode->clock == 0 ||
+   high_mode->hdisplay != mode->hdisplay ||
+   high_mode->clock != mode->clock ||
+   !mode)
+   return false;
+   else
+   return true;
+}
+


Need to check that the other parameters are the same:
- hsync_start
- hsync_end
- htotal
- hskew
- vdisplay
- vscan

etc.


  static struct dc_stream_state *
  create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
   const struct drm_display_mode *drm_mode,
@@ -5253,17 +5277,21 @@ create_stream_for_sink(struct amdgpu_dm_connector 
*aconnector,
const struct drm_connector_state *con_state =
dm_state ? &dm_state->base : NULL;
struct dc_stream_state *stream = NULL;
-   struct drm_display_mode mode = *drm_mode;
+   struct drm_display_mode saved_mode, mode = *drm_mode;
+   struct drm_display_mode *freesync_mode = NULL;
bool native_mode_found = false;
bool scale = dm_state ? (dm_state->scaling != RMX_OFF) : false;
int mode_refresh;
int preferred_refresh = 0;
+   bool is_fs_vid_mode = 0;
  #if defined(CONFIG_DRM_AMD_DC_DCN)
struct dsc_dec_dpcd_caps dsc_caps;
  #endif
uint32_t link_bandwidth_kbps;
-
struct dc_sink *sink = NULL;
+
+   memset(&saved_mode, 0, sizeof(struct drm_display_mode));
+
if (aconnector == NULL) {
DRM_ERROR("aconnector is NULL!\n");
return stream;
@@ -5316,20 +5344,33 @@ create_stream_for_sink(struct amdgpu_dm_connector 
*aconnector,
 */
DRM_DEBUG_DRIVER("No preferred mode found\n");
} else {
+   is_fs_vid_mode = is_freesync_video_mode(&mode, aconnector);
+   if (is_fs_vid_mode) {
+   freesync_mode = get_highest_freesync_mode(aconnector, 
false);
+   if (freesync_mode) {
+   saved_mode = mode;
+   mode = *freesync_mode;
+   }
+   }
+
decide_crtc_timing_for_drm_display_mode(
&mode, preferred_mode,
-   dm_state ? (dm_state->scaling != RMX_OFF) : 
false);
+   dm_state ? (dm_state->scaling != RMX_OFF) : 
false,
+   freesync_mode ? true : false);


I don't think we need an additional flag here - scaling/freesync behave 
the same, maybe just rename the variable in the function.


Regards,
Nicholas Kazlauskas


preferred_refresh = drm_mode_vrefresh(preferred_mode);
}
  
  	if (!dm_state)


Re: [PATCH v2 2/3] drm/amd/display: Add freesync video modes based on preferred modes

2021-01-04 Thread Kazlauskas, Nicholas

On 2020-12-11 12:54 a.m., Shashank Sharma wrote:


On 11/12/20 12:18 am, Aurabindo Pillai wrote:

[Why&How]
If the experimental freesync video mode module parameter is enabled, add a
few extra display modes to the driver's mode list corresponding to common
video frame rates. When userspace sets these modes, no modeset will be
performed (if the current mode was one of the freesync modes, or the base
freesync mode from which timings have been generated for the rest of the
freesync modes), since these modes only differ from the base mode in front
porch timing.

Signed-off-by: Aurabindo Pillai 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 167 ++
  1 file changed, 167 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index fbff8d693e03..d15453b400d2 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5178,6 +5178,54 @@ static void dm_enable_per_frame_crtc_master_sync(struct 
dc_state *context)
set_master_stream(context->streams, context->stream_count);
  }
  
+static struct drm_display_mode *

+get_highest_refresh_rate_mode(struct amdgpu_dm_connector *aconnector,
+ bool use_probed_modes)
+{
+   struct drm_display_mode *m, *m_high = NULL;

I would prefer m_high to be renamed as m_pref, indicating it's the preferred 
mode

+   u16 current_refresh, highest_refresh;
+   struct list_head *list_head = use_probed_modes ?
+   
&aconnector->base.probed_modes :
+   &aconnector->base.modes;
+   /* Find the preferred mode */
+   list_for_each_entry (m, list_head, head) {
+   if (!(m->type & DRM_MODE_TYPE_PREFERRED))
+   continue;
+
+   m_high = m;
+   highest_refresh = drm_mode_vrefresh(m_high);
+   break;
+   }
+
+   if (!m_high) {
+   /* Probably an EDID with no preferred mode. Fallback to first 
entry */
+   m_high = list_first_entry_or_null(&aconnector->base.modes,
+ struct drm_display_mode, 
head);
+   if (!m_high)
+   return NULL;
+   else
+   highest_refresh = drm_mode_vrefresh(m_high);
+   }
+


Optional cleanup suggested below makes code more readable:


/* Find the preferred mode */

list_for_each_entry (m, list_head, head) {
     if (m->type & DRM_MODE_TYPE_PREFERRED) {
         m_pref = m;
         break;
     }
}

if (!m_pref) {
     /* Probably an EDID with no preferred mode. Fallback to first entry */
     m_pref = list_first_entry_or_null(&aconnector->base.modes,
                                       struct drm_display_mode, head);
     if (!m_pref) {
         DRM_DEBUG_DRIVER("No preferred mode found in EDID\n");
         return NULL;
     }
}

highest_refresh = drm_mode_vrefresh(m_pref);


Agreed with this cleanup - naming is confusing as is.


+   /*
+* Find the mode with highest refresh rate with same resolution.
+* For some monitors, preferred mode is not the mode with highest
+* supported refresh rate.
+*/
+   list_for_each_entry (m, list_head, head) {
+   current_refresh  = drm_mode_vrefresh(m);
+
+   if (m->hdisplay == m_high->hdisplay &&
+   m->vdisplay == m_high->vdisplay &&
+   highest_refresh < current_refresh) {
+   highest_refresh = current_refresh;
+   m_high = m;
+   }
+   }
+
+   return m_high;
+}
+
  static struct dc_stream_state *
  create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
   const struct drm_display_mode *drm_mode,
@@ -7006,6 +7054,124 @@ static void amdgpu_dm_connector_ddc_get_modes(struct 
drm_connector *connector,
}
  }
  
+static bool is_duplicate_mode(struct amdgpu_dm_connector *aconnector,

+ struct drm_display_mode *mode)
+{
+   struct drm_display_mode *m;
+
+   list_for_each_entry (m, &aconnector->base.probed_modes, head) {
+   if (drm_mode_equal(m, mode))
+   return true;
+   }
+
+   return false;
+}
+
+static uint add_fs_modes(struct amdgpu_dm_connector *aconnector,
+struct detailed_data_monitor_range *range)
+{
+   const struct drm_display_mode *m, *m_save;
+   struct drm_display_mode *new_mode;
+   uint i;
+   uint64_t target_vtotal, target_vtotal_diff;
+   uint32_t new_modes_count = 0;
+   uint64_t num, den;

num, den, target_vtotal, target_vtotal_diff can go inside the 
list_for_each_entry() loop;

+
+   /* Standard FPS values
+*
+* 23.976 - TV/NTSC
+* 24 - Cinema
+* 25 - TV/PAL
+ 

Re: [PATCH 1/3] drm/amd/display: Add module parameter for freesync video mode

2021-01-04 Thread Kazlauskas, Nicholas

On 2020-12-09 9:45 p.m., Aurabindo Pillai wrote:

[Why&How]
Adds a module parameter to enable the experimental freesync video mode modeset
optimization. Enabling this mode allows the driver to skip a full modeset when
freesync compatible modes are requested by userspace. This parameter also
adds some standard modes based on the connected monitor's VRR range.

Signed-off-by: Aurabindo Pillai 


Reviewed-by: Nicholas Kazlauskas 


---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h |  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 12 
  2 files changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 83ac06a3ec05..efbfee93c359 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -177,6 +177,7 @@ extern int amdgpu_gpu_recovery;
  extern int amdgpu_emu_mode;
  extern uint amdgpu_smu_memory_pool_size;
  extern uint amdgpu_dc_feature_mask;
+extern uint amdgpu_exp_freesync_vid_mode;
  extern uint amdgpu_dc_debug_mask;
  extern uint amdgpu_dm_abm_level;
  extern struct amdgpu_mgpu_info mgpu_info;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index b2a1dd7581bf..ece51ecd53d1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -158,6 +158,7 @@ int amdgpu_mes;
  int amdgpu_noretry = -1;
  int amdgpu_force_asic_type = -1;
  int amdgpu_tmz;
+uint amdgpu_exp_freesync_vid_mode;
  int amdgpu_reset_method = -1; /* auto */
  int amdgpu_num_kcq = -1;
  
@@ -786,6 +787,17 @@ module_param_named(abmlevel, amdgpu_dm_abm_level, uint, 0444);

  MODULE_PARM_DESC(tmz, "Enable TMZ feature (-1 = auto, 0 = off (default), 1 = 
on)");
  module_param_named(tmz, amdgpu_tmz, int, 0444);
  
+/**

+ * DOC: experimental_freesync_video (uint)
+ * Enables the optimization to adjust front porch timing to achieve a
+ * seamless mode change experience when setting a freesync supported mode
+ * for which a full modeset is not needed.
+ * The default value: 0 (off).
+ */
+MODULE_PARM_DESC(
+   experimental_freesync_video,
+   "Enable freesync modesetting optimization feature (0 = off (default), 1 = 
on)");
+module_param_named(experimental_freesync_video, amdgpu_exp_freesync_vid_mode, 
uint, 0444);
+
  /**
   * DOC: reset_method (int)
   * GPU reset method (-1 = auto (default), 0 = legacy, 1 = mode0, 2 = mode1, 3 
= mode2, 4 = baco)





Re: [PATCH] drm/amdgpu: Do not change amdgpu framebuffer format during page flip

2020-12-22 Thread Kazlauskas, Nicholas

On 2020-12-21 10:18 p.m., Zhan Liu wrote:

[Why]
The driver cannot change the amdgpu framebuffer (afb) format while doing a
page flip. Forcing the system to do so will cause an ioctl error and break
several functionalities, including FreeSync.

If the afb format is forced to change during a page flip, the following
message appears in dmesg:

"[drm:drm_mode_page_flip_ioctl [drm]]
Page flip is not allowed to change frame buffer format."

[How]
Do not change the afb format while doing a page flip. It is okay to check
whether the afb format is valid here, but forcing an afb format change
shouldn't happen here.

Signed-off-by: Zhan Liu 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 2 --
  1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index a638709e9c92..0efebd592b65 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -831,8 +831,6 @@ static int convert_tiling_flags_to_modifier(struct 
amdgpu_framebuffer *afb)
modifier);
if (!format_info)
return -EINVAL;
-
-   afb->base.format = format_info;


Adding Bas for comment since he added the modifiers conversion, but I'll 
leave my own thoughts below.


This patch is a NAK from me - the framebuffer is still expected to be in 
a specific format/tiling configuration and ignoring the incoming format 
doesn't resolve the problem.


The problem is that the legacy page IOCTL has this check in the first 
place expecting that no driver is capable of performing this action.


This is not the case for amdgpu (be it DC enabled or not), so I think 
it's best to have a driver cap here or some new driver hook to validate 
that the flip is valid.


This is legacy code, and in the shared path, so I don't know how others 
in the list feel about modifying this but I think we do expect that 
legacy userspace can do this from the X side of things.


I recall seeing this happen going from DCC disabled to non DCC enabled 
buffers and some of this functionality being behind a version check in 
xf86-video-amdgpu.


Regards,
Nicholas Kazlauskas


}
}
  





Re: [PATCH] drm/amd/display: Fix memory leaks in S3 resume

2020-12-21 Thread Kazlauskas, Nicholas

Reviewed-by: Nicholas Kazlauskas 

Feel free to merge.

Regards,
Nicholas Kazlauskas

On 2020-12-21 11:45 a.m., Deucher, Alexander wrote:

[AMD Official Use Only - Internal Distribution Only]


This leak is still present.  Can we get this applied?

Acked-by: Alex Deucher 

*From:* Wang, Chao-kai (Stylon) 
*Sent:* Tuesday, November 10, 2020 2:49 AM
*To:* amd-gfx@lists.freedesktop.org 
*Cc:* Wang, Chao-kai (Stylon) ; Kazlauskas, 
Nicholas ; Deucher, Alexander 
; Wentland, Harry 

*Subject:* [PATCH] drm/amd/display: Fix memory leaks in S3 resume
EDID parsing in S3 resume pushes new display modes
to the probed_modes list but doesn't consolidate them into the actual
mode list. This creates a race condition when
amdgpu_dm_connector_ddc_get_modes() re-initializes the
list head without walking the list, and results in a memory leak.

Signed-off-by: Stylon Wang 
---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c

index 0b6adf23d316..715e0bd489f8 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -2337,7 +2337,8 @@ void amdgpu_dm_update_connector_after_detect(

  drm_connector_update_edid_property(connector,
 
aconnector->edid);

-   drm_add_edid_modes(connector, aconnector->edid);
+   aconnector->num_modes = 
drm_add_edid_modes(connector, aconnector->edid);

+   drm_connector_list_update(connector);

  if (aconnector->dc_link->aux_mode)
  
drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,

--
2.25.1





Re: [PATCH 3/3] drm/amd/display: Skip modeset for front porch change

2020-12-11 Thread Kazlauskas, Nicholas

On 2020-12-11 10:35 a.m., Shashank Sharma wrote:


On 11/12/20 8:19 pm, Kazlauskas, Nicholas wrote:

On 2020-12-11 12:08 a.m., Shashank Sharma wrote:

On 10/12/20 11:20 pm, Aurabindo Pillai wrote:

On Thu, 2020-12-10 at 18:29 +0530, Shashank Sharma wrote:

On 10/12/20 8:15 am, Aurabindo Pillai wrote:

[Why&How]
In order to enable freesync video mode, the driver adds extra
modes based on preferred modes for common freesync frame rates.
When committing these mode changes, a full modeset is not needed.
If the change is only in the front porch timing value, skip the full
modeset and continue using the same stream.

Signed-off-by: Aurabindo Pillai <
aurabindo.pil...@amd.com
---
   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 169
--
   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |   1 +
   2 files changed, 153 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index f699a3d41cad..c8c72887906a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -217,6 +217,9 @@ static bool amdgpu_dm_psr_disable_all(struct
amdgpu_display_manager *dm);
   static const struct drm_format_info *
   amd_get_format_info(const struct drm_mode_fb_cmd2 *cmd);
   
+static bool

+is_timing_unchanged_for_freesync(struct drm_crtc_state
*old_crtc_state,
+struct drm_crtc_state
*new_crtc_state);
   /*
* dm_vblank_get_counter
*
@@ -5096,8 +5099,11 @@ copy_crtc_timing_for_drm_display_mode(const
struct drm_display_mode *src_mode,
   static void
   decide_crtc_timing_for_drm_display_mode(struct drm_display_mode
*drm_mode,
const struct drm_display_mode
*native_mode,
-   bool scale_enabled)
+   bool scale_enabled, bool
fs_mode)
   {
+   if (fs_mode)
+   return;

so we are adding an input flag just so that we can return from the
function at top ? How about adding this check at the caller without
changing the function parameters ?

Will fix this.


+
if (scale_enabled) {
copy_crtc_timing_for_drm_display_mode(native_mode,
drm_mode);
} else if (native_mode->clock == drm_mode->clock &&
@@ -5241,6 +5247,24 @@ get_highest_freesync_mode(struct
amdgpu_dm_connector *aconnector,
return m_high;
   }
   
+static bool is_freesync_video_mode(struct drm_display_mode *mode,

+  struct amdgpu_dm_connector
*aconnector)
+{
+   struct drm_display_mode *high_mode;
+

I thought we were adding a string "_FSV" at the end of the mode name; why can't we check that instead of going through the whole list of modes again?

Actually I only added _FSV to distinguish the newly added modes easily.
On second thought, I'm not sure whether any userspace applications
depend on parsing the mode name, maybe to print the resolution. I think
it's better not to break any such assumptions if they do exist. I think
I'll just remove _FSV from the mode name. We already set
DRM_MODE_TYPE_DRIVER for userspace to recognize these additional modes,
so it shouldn't be a problem.

Actually, I am rather happy with this: when we want to test out this 
feature with an IGT-style test, or if userspace wants to utilize this option 
in any way, this method of differentiation would be useful. DRM_MODE_TYPE_DRIVER is 
used in other places apart from freesync, so it might not be a 
unique identifier. So my recommendation would be to keep this.

My comment was: if we have already parsed the whole connector list once and added the 
mode, there should be a better way of doing it instead of checking again by calling 
"get_highest_freesync_mod"

Some things I can think on top of my mind would be:

- Add a read-only amdgpu driver private flag (not DRM flag), while adding a new 
freesync mode, which will uniquely identify if a mode is FS mode. On modeset, 
you have to just check that flag.

- As we are not handling a lot of modes, cache the FS modes locally and check 
only from that DB (instead of the whole modelist)

- Cache the VIC of the mode (if available) and then look into the VIC table 
(not sure if detailed modes provide VIC, like CEA-861 modes)

or something better than this.

- Shashank

I'd rather we not make mode name part of a UAPI or to identify a
"FreeSync mode". This is already behind a module option and from the
driver's perspective we only need the timing to understand whether or
not we can do an optimized modeset using FreeSync into it. Driver
private flags can optimize the check away but it's only a few
comparisons so I don't see much benefit.

The module parameter is just to control the addition of freesync modes or not, 
but that doesn&

Re: [PATCH 3/3] drm/amd/display: Skip modeset for front porch change

2020-12-11 Thread Kazlauskas, Nicholas

On 2020-12-11 12:08 a.m., Shashank Sharma wrote:


On 10/12/20 11:20 pm, Aurabindo Pillai wrote:

On Thu, 2020-12-10 at 18:29 +0530, Shashank Sharma wrote:

On 10/12/20 8:15 am, Aurabindo Pillai wrote:

[Why&How]
In order to enable freesync video mode, the driver adds extra
modes based on preferred modes for common freesync frame rates.
When committing these mode changes, a full modeset is not needed.
If the change is only in the front porch timing value, skip the full
modeset and continue using the same stream.

Signed-off-by: Aurabindo Pillai <
aurabindo.pil...@amd.com
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 169
--
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |   1 +
  2 files changed, 153 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index f699a3d41cad..c8c72887906a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -217,6 +217,9 @@ static bool amdgpu_dm_psr_disable_all(struct
amdgpu_display_manager *dm);
  static const struct drm_format_info *
  amd_get_format_info(const struct drm_mode_fb_cmd2 *cmd);
  
+static bool

+is_timing_unchanged_for_freesync(struct drm_crtc_state
*old_crtc_state,
+struct drm_crtc_state
*new_crtc_state);
  /*
   * dm_vblank_get_counter
   *
@@ -5096,8 +5099,11 @@ copy_crtc_timing_for_drm_display_mode(const
struct drm_display_mode *src_mode,
  static void
  decide_crtc_timing_for_drm_display_mode(struct drm_display_mode
*drm_mode,
const struct drm_display_mode
*native_mode,
-   bool scale_enabled)
+   bool scale_enabled, bool
fs_mode)
  {
+   if (fs_mode)
+   return;

so we are adding an input flag just so that we can return from the
function at top ? How about adding this check at the caller without
changing the function parameters ?

Will fix this.


+
if (scale_enabled) {
copy_crtc_timing_for_drm_display_mode(native_mode,
drm_mode);
} else if (native_mode->clock == drm_mode->clock &&
@@ -5241,6 +5247,24 @@ get_highest_freesync_mode(struct
amdgpu_dm_connector *aconnector,
return m_high;
  }
  
+static bool is_freesync_video_mode(struct drm_display_mode *mode,

+  struct amdgpu_dm_connector
*aconnector)
+{
+   struct drm_display_mode *high_mode;
+

I thought we were adding a string "_FSV" at the end of the mode name; why can't we check that instead of going through the whole list of modes again?

Actually I only added _FSV to distinguish the newly added modes easily.
On second thought, I'm not sure whether any userspace applications
depend on parsing the mode name, maybe to print the resolution. I think
it's better not to break any such assumptions if they do exist. I think
I'll just remove _FSV from the mode name. We already set
DRM_MODE_TYPE_DRIVER for userspace to recognize these additional modes,
so it shouldn't be a problem.


Actually, I am rather happy with this: when we want to test out this 
feature with an IGT-style test, or if userspace wants to utilize this option 
in any way, this method of differentiation would be useful. DRM_MODE_TYPE_DRIVER is 
used in other places apart from freesync, so it might not be a 
unique identifier. So my recommendation would be to keep this.

My comment was: if we have already parsed the whole connector list once and added the 
mode, there should be a better way of doing it instead of checking again by calling 
"get_highest_freesync_mod"

Some things I can think on top of my mind would be:

- Add a read-only amdgpu driver private flag (not DRM flag), while adding a new 
freesync mode, which will uniquely identify if a mode is FS mode. On modeset, 
you have to just check that flag.

- As we are not handling a lot of modes, cache the FS modes locally and check 
only from that DB (instead of the whole modelist)

- Cache the VIC of the mode (if available) and then look into the VIC table 
(not sure if detailed modes provide VIC, like CEA-861 modes)

or something better than this.

- Shashank


I'd rather we not make mode name part of a UAPI or to identify a 
"FreeSync mode". This is already behind a module option and from the 
driver's perspective we only need the timing to understand whether or 
not we can do an optimized modeset using FreeSync into it. Driver 
private flags can optimize the check away but it's only a few 
comparisons so I don't see much benefit.


We will always need to reference the original preferred mode regardless 
of how the FreeSync mode is identified since there could be a case where 
we're enabling the CRTC from disabled -> enabled. The display was 
previously blank and we need to reprogram the OTG timing to the mode 
that doesn't ha

Re: [PATCH 2/2] drm/amd/display: add S/G support for Vangogh

2020-12-07 Thread Kazlauskas, Nicholas

On 2020-12-07 3:03 p.m., roman...@amd.com wrote:

From: Roman Li 

[Why]
Scatter/gather feature is supported on Vangogh.

[How]
Add GTT domain support for Vangogh to enable
display buffers in system memory.

Signed-off-by: Roman Li 


Series is:

Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index 63401bc8f37b..a638709e9c92 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -526,6 +526,7 @@ uint32_t amdgpu_display_supported_domains(struct 
amdgpu_device *adev,
domain |= AMDGPU_GEM_DOMAIN_GTT;
break;
case CHIP_RENOIR:
+   case CHIP_VANGOGH:
domain |= AMDGPU_GEM_DOMAIN_GTT;
break;
  





Re: [PATCH] drm/amdgpu/disply: set num_crtc earlier

2020-12-04 Thread Kazlauskas, Nicholas

On 2020-12-04 9:30 a.m., Alex Deucher wrote:

To avoid a recently added warning:
  Bogus possible_crtcs: [ENCODER:65:TMDS-65] possible_crtcs=0xf (full crtc 
mask=0x7)
  WARNING: CPU: 3 PID: 439 at drivers/gpu/drm/drm_mode_config.c:617 
drm_mode_config_validate+0x178/0x200 [drm]
In this case the warning is harmless, but confusing to users.

Bug: https://bugzilla.kernel.org/show_bug.cgi?id=209123
Signed-off-by: Alex Deucher 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 9 -
  1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 313501cc39fc..1ec57bc798e2 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1130,9 +1130,6 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
goto error;
}
  
-	/* Update the actual used number of crtc */

-   adev->mode_info.num_crtc = adev->dm.display_indexes_num;
-
/* create fake encoders for MST */
dm_dp_create_fake_mst_encoders(adev);
  
@@ -3364,6 +3361,10 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)

enum dc_connection_type new_connection_type = dc_connection_none;
const struct dc_plane_cap *plane;
  
+	dm->display_indexes_num = dm->dc->caps.max_streams;

+   /* Update the actual used number of crtc */
+   adev->mode_info.num_crtc = adev->dm.display_indexes_num;
+
link_cnt = dm->dc->caps.max_links;
if (amdgpu_dm_mode_config_init(dm->adev)) {
DRM_ERROR("DM: Failed to initialize mode config\n");
@@ -3425,8 +3426,6 @@ static int amdgpu_dm_initialize_drm_device(struct 
amdgpu_device *adev)
goto fail;
}
  
-	dm->display_indexes_num = dm->dc->caps.max_streams;

-
/* loops over all connectors on the board */
for (i = 0; i < link_cnt; i++) {
struct dc_link *link = NULL;





Re: [PATCH 2/2] drm/amd/display: check cursor FB is linear

2020-12-04 Thread Kazlauskas, Nicholas

On 2020-12-03 3:19 p.m., Simon Ser wrote:

Previously we accepted non-linear buffers for the cursor plane. This
results in bad output, DC validation failures, and oopses.

Make sure the FB uses a linear layout in the atomic check function.

The GFX8- check is inspired by ac_surface_set_bo_metadata in Mesa.
The GFX9+ check comes from convert_tiling_flags_to_modifier.

Signed-off-by: Simon Ser 
References: https://gitlab.freedesktop.org/drm/amd/-/issues/1390
Cc: Bas Nieuwenhuizen 
Cc: Alex Deucher 
Cc: Harry Wentland 
Cc: Nicholas Kazlauskas 
---


Looks good to me, series is:

Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 19 +++
  1 file changed, 19 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 070bb55ec4e1..b46b188588b4 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -8978,7 +8978,10 @@ static int dm_check_cursor_fb(struct amdgpu_crtc 
*new_acrtc,
  struct drm_plane_state *new_plane_state,
  struct drm_framebuffer *fb)
  {
+   struct amdgpu_device *adev = drm_to_adev(new_acrtc->base.dev);
+   struct amdgpu_framebuffer *afb = to_amdgpu_framebuffer(fb);
unsigned int pitch;
+   bool linear;
  
  	if (fb->width > new_acrtc->max_cursor_width ||

fb->height > new_acrtc->max_cursor_height) {
@@ -9013,6 +9016,22 @@ static int dm_check_cursor_fb(struct amdgpu_crtc 
*new_acrtc,
return -EINVAL;
}
  
+	/* Core DRM takes care of checking FB modifiers, so we only need to
+	 * check tiling flags when the FB doesn't have a modifier. */
+	if (!(fb->flags & DRM_MODE_FB_MODIFIERS)) {
+		if (adev->family < AMDGPU_FAMILY_AI) {
+			linear = AMDGPU_TILING_GET(afb->tiling_flags, ARRAY_MODE) != DC_ARRAY_2D_TILED_THIN1 &&
+				 AMDGPU_TILING_GET(afb->tiling_flags, ARRAY_MODE) != DC_ARRAY_1D_TILED_THIN1 &&
+				 AMDGPU_TILING_GET(afb->tiling_flags, MICRO_TILE_MODE) == 0;
+		} else {
+			linear = AMDGPU_TILING_GET(afb->tiling_flags, SWIZZLE_MODE) == 0;
+		}
+		if (!linear) {
+			DRM_DEBUG_ATOMIC("Cursor FB not linear");
+			return -EINVAL;
+		}
+	}
+
return 0;
  }
  





Re: [PATCH 2/2] drm/amd/display: add cursor pitch check

2020-12-02 Thread Kazlauskas, Nicholas

On 2020-12-02 4:09 p.m., Simon Ser wrote:

Replace the width check with a pitch check, which matches DM internals.
Add a new check to make sure the pitch (in pixels) matches the width.

Signed-off-by: Simon Ser 
Cc: Alex Deucher 
Cc: Harry Wentland 
Cc: Nicholas Kazlauskas 


Series is:

Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 19 +++
  1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 9e328101187e..862a59703060 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -8988,6 +8988,7 @@ static int dm_update_plane_state(struct dc *dc,
struct amdgpu_crtc *new_acrtc;
bool needs_reset;
int ret = 0;
+   unsigned int pitch;
  
  
  	new_plane_crtc = new_plane_state->crtc;

@@ -9021,15 +9022,25 @@ static int dm_update_plane_state(struct dc *dc,
return -EINVAL;
}
  
-			switch (new_plane_state->fb->width) {

+			/* Pitch in pixels */
+			pitch = new_plane_state->fb->pitches[0] / new_plane_state->fb->format->cpp[0];
+
+			if (new_plane_state->fb->width != pitch) {
+				DRM_DEBUG_ATOMIC("Cursor FB width %d doesn't match pitch %d",
+						 new_plane_state->fb->width,
+						 pitch);
+				return -EINVAL;
+			}
+
+   switch (pitch) {
case 64:
case 128:
case 256:
-   /* FB width is supported by cursor plane */
+   /* FB pitch is supported by cursor plane */
break;
default:
-			DRM_DEBUG_ATOMIC("Bad cursor FB width %d\n",
-					 new_plane_state->fb->width);
+			DRM_DEBUG_ATOMIC("Bad cursor FB pitch %d px\n",
+					 pitch);
return -EINVAL;
}
}





Re: [PATCH v3] drm/amd/display: turn DPMS off on mst connector unplug

2020-11-27 Thread Kazlauskas, Nicholas

On 2020-11-26 7:18 p.m., Aurabindo Pillai wrote:

[Why&How]

Set DPMS off on the MST connector that was unplugged, so that the
references held through the MST payload allocation are released.


Applies to non-MST now too, so the description and title should be updated.



Signed-off-by: Aurabindo Pillai 
Signed-off-by: Eryk Brol 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 31 ++-
  drivers/gpu/drm/amd/display/dc/core/dc.c  | 13 
  drivers/gpu/drm/amd/display/dc/dc_stream.h|  1 +
  3 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index e213246e3f04..9966679d29e7 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1984,6 +1984,32 @@ static void dm_gpureset_commit_state(struct dc_state 
*dc_state,
return;
  }
  
+static void dm_set_dpms_off(struct dc_link *link)

+{
+   struct dc_stream_state *stream_state;
+   struct amdgpu_dm_connector *aconnector = link->priv;
+   struct amdgpu_device *adev = drm_to_adev(aconnector->base.dev);
+   struct dc_stream_update stream_update = {0};


Please use a memset instead of a zero initializer here. Some compilers 
complain about that.



+   bool dpms_off = true;
+
+   stream_update.dpms_off = &dpms_off;
+
+   mutex_lock(&adev->dm.dc_lock);
+   stream_state = dc_stream_find_from_link(link);
+
+   if (stream_state == NULL) {
+   dm_error("Error finding stream state associated with link!\n");


This shouldn't be using a dm_error print here. a DRM_DEBUG_DRIVER would 
be better suited.


With these three items fixed the v4 of this patch will be:

Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


+   mutex_unlock(&adev->dm.dc_lock);
+   return;
+   }
+
+   stream_update.stream = stream_state;
+   dc_commit_updates_for_stream(stream_state->ctx->dc, NULL, 0,
+stream_state, &stream_update,
+stream_state->ctx->dc->current_state);
+   mutex_unlock(&adev->dm.dc_lock);
+}
+
  static int dm_resume(void *handle)
  {
struct amdgpu_device *adev = handle;
@@ -2434,8 +2460,11 @@ static void handle_hpd_irq(void *param)
drm_kms_helper_hotplug_event(dev);
  
  	} else if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {

-   amdgpu_dm_update_connector_after_detect(aconnector);
+   if (new_connection_type == dc_connection_none &&
+   aconnector->dc_link->type == dc_connection_none)
+   dm_set_dpms_off(aconnector->dc_link);
  
+		amdgpu_dm_update_connector_after_detect(aconnector);
  
  		drm_modeset_lock_all(dev);

dm_restore_drm_connector_state(dev, connector);
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 903353389edb..58eb0d69873a 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -2798,6 +2798,19 @@ struct dc_stream_state *dc_get_stream_at_index(struct dc 
*dc, uint8_t i)
return NULL;
  }
  
+struct dc_stream_state *dc_stream_find_from_link(const struct dc_link *link)

+{
+   uint8_t i;
+   struct dc_context *ctx = link->ctx;
+
+   for (i = 0; i < ctx->dc->current_state->stream_count; i++) {
+   if (ctx->dc->current_state->streams[i]->link == link)
+   return ctx->dc->current_state->streams[i];
+   }
+
+   return NULL;
+}
+
  enum dc_irq_source dc_interrupt_to_irq_source(
struct dc *dc,
uint32_t src_id,
diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h 
b/drivers/gpu/drm/amd/display/dc/dc_stream.h
index bf090afc2f70..b7910976b81a 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
@@ -292,6 +292,7 @@ void dc_stream_log(const struct dc *dc, const struct 
dc_stream_state *stream);
  
  uint8_t dc_get_current_stream_count(struct dc *dc);

  struct dc_stream_state *dc_get_stream_at_index(struct dc *dc, uint8_t i);
+struct dc_stream_state *dc_stream_find_from_link(const struct dc_link *link);
  
  /*

   * Return the current frame counter.





Re: [PATCH v2] drm/amd/display: turn DPMS off on mst connector unplug

2020-11-26 Thread Kazlauskas, Nicholas

On 2020-11-26 4:45 p.m., Aurabindo Pillai wrote:

[Why&How]

Set DPMS off on the MST connector that was unplugged, so that the
references held through the MST payload allocation are released.

Signed-off-by: Aurabindo Pillai 
Signed-off-by: Eryk Brol 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 33 ++-
  drivers/gpu/drm/amd/display/dc/core/dc.c  | 17 ++
  drivers/gpu/drm/amd/display/dc/dc_stream.h|  1 +
  3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index e213246e3f04..6203cbf3ee33 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1984,6 +1984,34 @@ static void dm_gpureset_commit_state(struct dc_state 
*dc_state,
return;
  }
  
+static void dm_set_dpms_off(struct dc_link *link)

+{
+   struct dc_stream_state *stream_state;
+   struct amdgpu_dm_connector *aconnector = link->priv;
+   struct amdgpu_device *adev = drm_to_adev(aconnector->base.dev);
+   struct {
+   struct dc_surface_update surface_updates[MAX_SURFACES];


Let's remove the bundle and drop the surface_updates here. A 
surface_count of 0 should be sufficient to guard against NULL 
surface_updates array.



+   struct dc_stream_update stream_update;
+   } bundle = {0};
+   bool dpms_off = true;
+
+   bundle.stream_update.dpms_off = &dpms_off;
+
+   mutex_lock(&adev->dm.dc_lock);
+   stream_state = dc_stream_find_from_link(link);
+   mutex_unlock(&adev->dm.dc_lock);


This needs to be moved below dc_commit_updates_for_stream(). It's not
safe to call dc_commit_updates_for_stream() while unlocked, since it
modifies global state.



+
+   if (stream_state == NULL) {
+   dm_error("Error finding stream state associated with link!\n");
+   return;
+   }
+
+   bundle.stream_update.stream = stream_state;
+   dc_commit_updates_for_stream(stream_state->ctx->dc, 
bundle.surface_updates, 0,
+stream_state, &bundle.stream_update,
+stream_state->ctx->dc->current_state);
+}
+
  static int dm_resume(void *handle)
  {
struct amdgpu_device *adev = handle;
@@ -2434,8 +2462,11 @@ static void handle_hpd_irq(void *param)
drm_kms_helper_hotplug_event(dev);
  
  	} else if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {

-   amdgpu_dm_update_connector_after_detect(aconnector);
+   if (new_connection_type == dc_connection_none &&
+   aconnector->dc_link->type == dc_connection_none)
+   dm_set_dpms_off(aconnector->dc_link);
  
+		amdgpu_dm_update_connector_after_detect(aconnector);
  
  		drm_modeset_lock_all(dev);

dm_restore_drm_connector_state(dev, connector);
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 903353389edb..7a2b2802faa2 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -2798,6 +2798,23 @@ struct dc_stream_state *dc_get_stream_at_index(struct dc 
*dc, uint8_t i)
return NULL;
  }
  
+struct dc_stream_state *dc_stream_find_from_link(const struct dc_link *link)

+{
+   uint8_t i;
+   struct dc_context *ctx = link->ctx;
+
+   for (i = 0; i< MAX_PIPES; i++) {
+   if (ctx->dc->current_state->streams[i] == NULL)
+   continue;


Drop the NULL check above and change MAX_PIPES to 
dc->current_state->stream_count.


Regards,
Nicholas Kazlauskas


+
+   if (ctx->dc->current_state->streams[i]->link == link) {
+   return ctx->dc->current_state->streams[i];
+   }
+   }
+
+   return NULL;
+}
+
  enum dc_irq_source dc_interrupt_to_irq_source(
struct dc *dc,
uint32_t src_id,
diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h 
b/drivers/gpu/drm/amd/display/dc/dc_stream.h
index bf090afc2f70..b7910976b81a 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
@@ -292,6 +292,7 @@ void dc_stream_log(const struct dc *dc, const struct 
dc_stream_state *stream);
  
  uint8_t dc_get_current_stream_count(struct dc *dc);

  struct dc_stream_state *dc_get_stream_at_index(struct dc *dc, uint8_t i);
+struct dc_stream_state *dc_stream_find_from_link(const struct dc_link *link);
  
  /*

   * Return the current frame counter.





Re: [PATCH] drm/amd/display: turn DPMS off on mst connector unplug

2020-11-26 Thread Kazlauskas, Nicholas

On 2020-11-26 2:50 p.m., Aurabindo Pillai wrote:

[Why&How]

Set DPMS off on the MST connector that was unplugged, so that the
references held through the MST payload allocation are released.

Signed-off-by: Aurabindo Pillai 
Signed-off-by: Eryk Brol 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 63 ++-
  1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index e213246e3f04..fc984cf6e316 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1984,6 +1984,64 @@ static void dm_gpureset_commit_state(struct dc_state 
*dc_state,
return;
  }
  
+static void dm_set_dpms_off(struct dc_link *link)

+{
+   struct {
+   struct dc_surface_update surface_updates[MAX_SURFACES];
+   struct dc_stream_update stream_update;
+   } * bundle;
+   struct dc_stream_state *stream_state;
+   struct dc_context *dc_ctx = link->ctx;
+   int i;
+
+   bundle = kzalloc(sizeof(*bundle), GFP_KERNEL);
+
+   if (!bundle) {
+   dm_error("Failed to allocate update bundle\n");
+   return;
+   }
+
+   bundle->stream_update.dpms_off = kzalloc(sizeof(bool), GFP_KERNEL);


You don't need to allocate memory for the bool. You can just use a local 
here, DC doesn't need it after the call ends.


I think the same should apply to the surface updates as well. I'm not 
entirely sure how much stack the bundle takes up for a single stream 
here but it should be small enough that we can leave it on the stack.



+
+   if (!bundle->stream_update.dpms_off) {
+   dm_error("Failed to allocate update bundle\n");
+   goto cleanup;
+   }
+
+   *bundle->stream_update.dpms_off = true;
+
+   for (i = 0; i< MAX_PIPES; i++) {
+   if (dc_ctx->dc->current_state->streams[i] == NULL)
+   continue;
+
+   if (dc_ctx->dc->current_state->streams[i]->link == link) {
+   stream_state = dc_ctx->dc->current_state->streams[i];
+   goto link_found;
+   }
+   }


We shouldn't be reading from dc->current_state directly in DM, it's 
essentially private state.


I think we should actually have a new helper here in dc_stream.h that's 
like:


struct dc_stream_state *dc_stream_find_from_link(const struct dc_link 
*link);


to replace this block of code.

Note that any time we touch current_state we also need to be locking - 
it looks like this function is missing the appropriate calls to:


mutex_lock(&adev->dm.dc_lock);
mutex_unlock(&adev->dm.dc_lock);

Regards,
Nicholas Kazlauskas



+
+   dm_error("Cannot find link associated with the stream to be disabled\n");
+   return;
+
+link_found:
+   if (stream_state == NULL) {
+   dm_error("Stream state associated with the link is NULL\n");
+   return;
+   }
+
+   bundle->stream_update.stream = stream_state;
+
+   dc_commit_updates_for_stream(stream_state->ctx->dc, 
bundle->surface_updates, 0,
+stream_state, &bundle->stream_update,
+stream_state->ctx->dc->current_state);
+
+   kfree(bundle->stream_update.dpms_off);
+cleanup:
+   kfree(bundle);
+
+   return;
+}
+
  static int dm_resume(void *handle)
  {
struct amdgpu_device *adev = handle;
@@ -2434,8 +2492,11 @@ static void handle_hpd_irq(void *param)
drm_kms_helper_hotplug_event(dev);
  
  	} else if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {

-   amdgpu_dm_update_connector_after_detect(aconnector);
+   if (new_connection_type == dc_connection_none &&
+   aconnector->dc_link->type == dc_connection_none)
+   dm_set_dpms_off(aconnector->dc_link);
  
+		amdgpu_dm_update_connector_after_detect(aconnector);
  
  		drm_modeset_lock_all(dev);

dm_restore_drm_connector_state(dev, connector);





Re: [PATCH] drm/amd/display: Clear dc remote sinks on MST disconnect

2020-11-26 Thread Kazlauskas, Nicholas

On 2020-11-26 9:31 a.m., Aurabindo Pillai wrote:

[Why&How]
Recent changes to the upstream MST code removed the callback that
cleared the internal MST state. Move that missing functionality, which
was previously invoked through the MST connector's destroy callback,
into dm_helpers_dp_mst_stop_top_mgr().

Signed-off-by: Aurabindo Pillai 
Signed-off-by: Eryk Brol 
---
  .../amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 22 +--
  drivers/gpu/drm/amd/display/dc/dm_helpers.h   |  2 +-
  2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
index b7d7ec3ba00d..d8b0f07deaf2 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
@@ -418,9 +418,10 @@ bool dm_helpers_dp_mst_start_top_mgr(
  
  void dm_helpers_dp_mst_stop_top_mgr(

struct dc_context *ctx,
-   const struct dc_link *link)
+   struct dc_link *link)
  {
struct amdgpu_dm_connector *aconnector = link->priv;
+   uint8_t i;
  
  	if (!aconnector) {

DRM_ERROR("Failed to find connector for link!");
@@ -430,8 +431,25 @@ void dm_helpers_dp_mst_stop_top_mgr(
DRM_INFO("DM_MST: stopping TM on aconnector: %p [id: %d]\n",
aconnector, aconnector->base.base.id);
  
-	if (aconnector->mst_mgr.mst_state == true)

+   if (aconnector->mst_mgr.mst_state == true) {
drm_dp_mst_topology_mgr_set_mst(&aconnector->mst_mgr, false);
+
+   for (i = 0; i < MAX_SINKS_PER_LINK; i++) {
+   if (link->remote_sinks[i] == NULL)
+   continue;
+
+   if (link->remote_sinks[i]->sink_signal ==
+   SIGNAL_TYPE_DISPLAY_PORT_MST) {
+   dc_link_remove_remote_sink(link, link->remote_sinks[i]);


In general I think this patch looks fine, and you can have the:

Reviewed-by: Nicholas Kazlauskas 

But I think that this loop is redundant - dc_link_remove_remote_sink 
should be removing all the remote sinks. Not sure if remote_sinks can 
start at an index other than 0 though.


Regards,
Nicholas Kazlauskas


+
+   if (aconnector->dc_sink) {
+   dc_sink_release(aconnector->dc_sink);
+   aconnector->dc_sink = NULL;
+   aconnector->dc_link->cur_link_settings.lane_count = 0;
+   }
+   }
+   }
+   }
 }
  
  bool dm_helpers_dp_read_dpcd(

diff --git a/drivers/gpu/drm/amd/display/dc/dm_helpers.h 
b/drivers/gpu/drm/amd/display/dc/dm_helpers.h
index b2cd8491c707..07e349b1067b 100644
--- a/drivers/gpu/drm/amd/display/dc/dm_helpers.h
+++ b/drivers/gpu/drm/amd/display/dc/dm_helpers.h
@@ -113,7 +113,7 @@ bool dm_helpers_dp_mst_start_top_mgr(
  
  void dm_helpers_dp_mst_stop_top_mgr(

struct dc_context *ctx,
-   const struct dc_link *link);
+   struct dc_link *link);
  /**
   * OS specific aux read callback.
   */





Re: [PATCH] drm/amd/display: Update dmub code

2020-11-13 Thread Kazlauskas, Nicholas

On 2020-11-13 3:27 p.m., Bhawanpreet Lakha wrote:

There is a delta in the dmub code
- add boot options
- add boot status
- remove unused auto_load_is_done func pointer

Signed-off-by: Bhawanpreet Lakha 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dmub/dmub_srv.h   | 20 +-
  .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |  3 ++-
  .../gpu/drm/amd/display/dmub/src/dmub_dcn20.c | 23 
  .../gpu/drm/amd/display/dmub/src/dmub_dcn20.h |  6 +
  .../gpu/drm/amd/display/dmub/src/dmub_dcn21.c |  5 
  .../gpu/drm/amd/display/dmub/src/dmub_dcn21.h |  2 --
  .../gpu/drm/amd/display/dmub/src/dmub_dcn30.c |  5 
  .../gpu/drm/amd/display/dmub/src/dmub_dcn30.h |  1 -
  .../gpu/drm/amd/display/dmub/src/dmub_srv.c   | 26 ++-
  9 files changed, 70 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h 
b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
index ac41ae2d261b..b82a46890846 100644
--- a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+++ b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
@@ -265,8 +265,12 @@ struct dmub_srv_hw_funcs {
bool (*is_hw_init)(struct dmub_srv *dmub);
  
  	bool (*is_phy_init)(struct dmub_srv *dmub);

+   void (*enable_dmub_boot_options)(struct dmub_srv *dmub);
+
+   void (*skip_dmub_panel_power_sequence)(struct dmub_srv *dmub, bool skip);
+
+   union dmub_fw_boot_status (*get_fw_status)(struct dmub_srv *dmub);
  
-	bool (*is_auto_load_done)(struct dmub_srv *dmub);
  
  	void (*set_gpint)(struct dmub_srv *dmub,

  union dmub_gpint_data_register reg);
@@ -309,6 +313,7 @@ struct dmub_srv_hw_params {
uint64_t fb_offset;
uint32_t psp_version;
bool load_inst_const;
+   bool skip_panel_power_sequence;
  };
  
  /**

@@ -590,6 +595,19 @@ enum dmub_status dmub_srv_get_gpint_response(struct 
dmub_srv *dmub,
   */
  void dmub_flush_buffer_mem(const struct dmub_fb *fb);
  
+/**

+ * dmub_srv_get_fw_boot_status() - Returns the DMUB boot status bits.
+ *
+ * @dmub: the dmub service
+ * @status: out pointer for firmware status
+ *
+ * Return:
+ *   DMUB_STATUS_OK - success
+ *   DMUB_STATUS_INVALID - unspecified error, unsupported
+ */
+enum dmub_status dmub_srv_get_fw_boot_status(struct dmub_srv *dmub,
+union dmub_fw_boot_status *status);
+
  #if defined(__cplusplus)
  }
  #endif
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h 
b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
index b0d1347d13f0..9fd24f93a216 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
@@ -191,7 +191,8 @@ union dmub_fw_boot_options {
uint32_t optimized_init : 1;
uint32_t skip_phy_access : 1;
uint32_t disable_clk_gate: 1;
-   uint32_t reserved : 27;
+   uint32_t skip_phy_init_panel_sequence: 1;
+   uint32_t reserved : 26;
} bits;
uint32_t all;
  };
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c 
b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
index 2c4a2fe9311d..cafba1d23c6a 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
@@ -312,3 +312,26 @@ uint32_t dmub_dcn20_get_gpint_response(struct dmub_srv 
*dmub)
  {
return REG_READ(DMCUB_SCRATCH7);
  }
+
+union dmub_fw_boot_status dmub_dcn20_get_fw_boot_status(struct dmub_srv *dmub)
+{
+   union dmub_fw_boot_status status;
+
+   status.all = REG_READ(DMCUB_SCRATCH0);
+   return status;
+}
+
+void dmub_dcn20_enable_dmub_boot_options(struct dmub_srv *dmub)
+{
+   union dmub_fw_boot_options boot_options = {0};
+
+   REG_WRITE(DMCUB_SCRATCH14, boot_options.all);
+}
+
+void dmub_dcn20_skip_dmub_panel_power_sequence(struct dmub_srv *dmub, bool skip)
+{
+   union dmub_fw_boot_options boot_options;
+   boot_options.all = REG_READ(DMCUB_SCRATCH14);
+   boot_options.bits.skip_phy_init_panel_sequence = skip;
+   REG_WRITE(DMCUB_SCRATCH14, boot_options.all);
+}
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h 
b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h
index a316f260f6ac..d438f365cbb0 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h
@@ -192,4 +192,10 @@ bool dmub_dcn20_is_gpint_acked(struct dmub_srv *dmub,
  
  uint32_t dmub_dcn20_get_gpint_response(struct dmub_srv *dmub);
  
+void dmub_dcn20_enable_dmub_boot_options(struct dmub_srv *dmub);

+
+void dmub_dcn20_skip_dmub_panel_power_sequence(struct dmub_srv *dmub, bool skip);
+
+union dmub_fw_boot_status dmub_dcn20_get_fw_boot_status(struct dmub_srv *dmub);
+
  #endif /* _DMUB_DCN20_H_ */
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c 
b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c

Re: [PATCH] drm/amdgpu/display: fix FP handling in DCN30

2020-11-13 Thread Kazlauskas, Nicholas

On 2020-11-12 5:06 p.m., Bhawanpreet Lakha wrote:

From: Alex Deucher 

Adjust the FP handling to avoid nested calls.

The nested calls cause the warning below
WARNING: CPU: 3 PID: 384 at arch/x86/kernel/fpu/core.c:129 kernel_fpu_begin

Fixes: 26803606c5d6 ("drm/amdgpu/display: FP fixes for DCN3.x (v4)")
Signed-off-by: Alex Deucher 
Signed-off-by: Bhawanpreet Lakha 


Reviewed-by: Nicholas Kazlauskas 

I guess dropping the noinline is fine if we're just calling it via the 
function pointer.


Regards,
Nicholas Kazlauskas


---
  .../drm/amd/display/dc/dcn30/dcn30_resource.c | 43 +++
  1 file changed, 6 insertions(+), 37 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
index b379057e669c..d5c81ad55045 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
@@ -1470,20 +1470,8 @@ int dcn30_populate_dml_pipes_from_context(
return pipe_cnt;
  }
  
-/*

- * This must be noinline to ensure anything that deals with FP registers
- * is contained within this call; previously our compiling with hard-float
- * would result in fp instructions being emitted outside of the boundaries
- * of the DC_FP_START/END macros, which makes sense as the compiler has no
- * idea about what is wrapped and what is not
- *
- * This is largely just a workaround to avoid breakage introduced with 5.6,
- * ideally all fp-using code should be moved into its own file, only that
- * should be compiled with hard-float, and all code exported from there
- * should be strictly wrapped with DC_FP_START/END
- */
-static noinline void dcn30_populate_dml_writeback_from_context_fp(
-   struct dc *dc, struct resource_context *res_ctx, display_e2e_pipe_params_st *pipes)
+void dcn30_populate_dml_writeback_from_context(
+   struct dc *dc, struct resource_context *res_ctx, display_e2e_pipe_params_st *pipes)
  {
int pipe_cnt, i, j;
double max_calc_writeback_dispclk;
@@ -1571,14 +1559,6 @@ static noinline void dcn30_populate_dml_writeback_from_context_fp(
  
  }
  
-void dcn30_populate_dml_writeback_from_context(

-   struct dc *dc, struct resource_context *res_ctx, display_e2e_pipe_params_st *pipes)
-{
-   DC_FP_START();
-   dcn30_populate_dml_writeback_from_context_fp(dc, res_ctx, pipes);
-   DC_FP_END();
-}
-
  unsigned int dcn30_calc_max_scaled_time(
unsigned int time_per_pixel,
enum mmhubbub_wbif_mode mode,
@@ -1977,7 +1957,7 @@ static struct pipe_ctx *dcn30_find_split_pipe(
return pipe;
  }
  
-static bool dcn30_internal_validate_bw(

+static noinline bool dcn30_internal_validate_bw(
struct dc *dc,
struct dc_state *context,
display_e2e_pipe_params_st *pipes,
@@ -1999,6 +1979,7 @@ static bool dcn30_internal_validate_bw(
  
  	pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, fast_validate);
  
+	DC_FP_START();

if (!pipe_cnt) {
out = true;
goto validate_out;
@@ -,6 +2203,7 @@ static bool dcn30_internal_validate_bw(
out = false;
  
  validate_out:

+   DC_FP_END();
return out;
  }
  
@@ -2404,7 +2386,7 @@ void dcn30_calculate_wm_and_dlg(

DC_FP_END();
  }
  
-static noinline bool dcn30_validate_bandwidth_fp(struct dc *dc,
+bool dcn30_validate_bandwidth(struct dc *dc,
struct dc_state *context,
bool fast_validate)
  {
@@ -2455,19 +2437,6 @@ static noinline bool dcn30_validate_bandwidth_fp(struct dc *dc,
return out;
  }
  
-bool dcn30_validate_bandwidth(struct dc *dc,

-   struct dc_state *context,
-   bool fast_validate)
-{
-   bool out;
-
-   DC_FP_START();
-   out = dcn30_validate_bandwidth_fp(dc, context, fast_validate);
-   DC_FP_END();
-
-   return out;
-}
-
  static noinline void get_optimal_dcfclk_fclk_for_uclk(unsigned int uclk_mts,
						unsigned int *optimal_dcfclk,
						unsigned int *optimal_fclk)





Re: [PATCH] drm/amd/display: Add missing pflip irq for dcn2.0

2020-11-13 Thread Kazlauskas, Nicholas

On 2020-11-13 2:23 a.m., Alex Deucher wrote:

If we have more than 4 displays, we will run
into dummy IRQ calls or flip timeout issues.

Signed-off-by: Alex Deucher 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c 
b/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c
index 2a1fea501f8c..3f1e7a196a23 100644
--- a/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c
+++ b/drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c
@@ -299,8 +299,8 @@ irq_source_info_dcn20[DAL_IRQ_SOURCES_NUMBER] = {
pflip_int_entry(1),
pflip_int_entry(2),
pflip_int_entry(3),
-   [DC_IRQ_SOURCE_PFLIP5] = dummy_irq_entry(),
-   [DC_IRQ_SOURCE_PFLIP6] = dummy_irq_entry(),
+   pflip_int_entry(4),
+   pflip_int_entry(5),
[DC_IRQ_SOURCE_PFLIP_UNDERLAY0] = dummy_irq_entry(),
gpio_pad_int_entry(0),
gpio_pad_int_entry(1),





Re: [PATCH] drm/amd/display: add cursor pitch check

2020-11-12 Thread Kazlauskas, Nicholas

On 2020-11-12 3:56 p.m., Alex Deucher wrote:

On Thu, Nov 12, 2020 at 3:07 PM Simon Ser  wrote:


CC Daniel Vetter and Bas, see below…

On Thursday, November 12, 2020 8:56 PM, Kazlauskas, Nicholas 
 wrote:


Reviewed-by: Nicholas Kazlauskas 


Thanks for the review!


Couple questions:

- This implements a single check for all GPU generations. Is my
   assumption correct here? It seems like this check is OK for at least
   DCN 1.0 and DCN 2.0.

- We should really implement better checks. What features are supported
   on the cursor plane? Is scaling supported? Is cropping supported? Is
   rotation always supported?



On DCE and DCN there is no dedicated hardware cursor plane. You get a
cursor per pipe but it's going to inherit the scaling and positioning
from the underlying pipe.

There's software logic to ensure we position the cursor in the correct
location in CRTC space independent on the underlying DRM plane's scaling
and positioning but there's no way for us to correct the scaling. Cursor
will always be 64, 128, or 256 in the pipe's destination space.


Interesting.

Daniel Vetter: what would be the best way to expose this to user-space?
Maybe we should just make atomic commits with a cursor plane fail when
scaling is used on the primary plane?

Disabling the cursor plane sounds better than displaying the wrong
image.


Cursor can be independently rotated in hardware but this isn't something
we expose support for to userspace.


Hmm, I see that cursor planes have the "rotation" property exposed:

 "rotation": bitmask {rotate-0, rotate-90, rotate-180, rotate-270}

In fact all planes have it. It's done in amdgpu_dm_plane_init (behind a
`dm->adev->asic_type >= CHIP_BONAIRE` condition).

Is this an oversight?


The pitch check of 64/128/256 is OK but we don't support 256 on DCE.


Yeah, I've noticed that. The size check right above should catch it
in most cases I think, because max_cursor_size is 128 on DCE. Side
note, max_cursor_size is 64 on DCE 6.0.


Maybe something like:
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 77c06f999040..a1ea195a7041 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -8999,6 +8999,43 @@ static int dm_update_plane_state(struct dc *dc,
 return -EINVAL;
 }

+   switch (adev->family) {
+   case AMDGPU_FAMILY_SI:
+   /* DCE6 only supports 64x64 cursors */
+   if (new_plane_state->fb->pitches[0] == 64) {
+   break;
+   } else {
+			DRM_DEBUG_ATOMIC("Bad cursor pitch %d\n",
+					 new_plane_state->fb->pitches[0]);
+   return -EINVAL;
+   }
+   case AMDGPU_FAMILY_KV:
+   case AMDGPU_FAMILY_CI:
+   case AMDGPU_FAMILY_CZ:
+   case AMDGPU_FAMILY_VI:
+   case AMDGPU_FAMILY_AI:
+   /* DCE8-12 only supports 64x64 and 128x128 cursors */
+   if ((new_plane_state->fb->pitches[0] == 64) ||
+   (new_plane_state->fb->pitches[0] == 128)) {
+   break;
+   } else {
+			DRM_DEBUG_ATOMIC("Bad cursor pitch %d\n",
+					 new_plane_state->fb->pitches[0]);
+   return -EINVAL;
+   }
+   default:
+   /* DCN supports 64x64, 128x128, and 256x256 cursors */
+   if ((new_plane_state->fb->pitches[0] == 64) ||
+   (new_plane_state->fb->pitches[0] == 128) ||
+   (new_plane_state->fb->pitches[0] == 256)) {
+   break;
+   } else {
+			DRM_DEBUG_ATOMIC("Bad cursor pitch %d\n",
+					 new_plane_state->fb->pitches[0]);
+   return -EINVAL;
+   }
+   }
+


If we're going to extend it to something like this, I think it should be
extracted into its own function to reduce some of this indenting.


I think the simpler approach here is to just block 256 if the 
max_cursor_size in amdgpu is 128.


Regards,
Nicholas Kazlauskas


 return 0;
 }




___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH] drm/amd/display: add cursor pitch check

2020-11-12 Thread Kazlauskas, Nicholas

On 2020-11-12 12:37 p.m., Simon Ser wrote:

This patch expands the cursor checks added in "drm/amd/display: add basic
atomic check for cursor plane" to also include a pitch check. Without
this patch, setting a FB smaller than max_cursor_size with an invalid
pitch would result in amdgpu error messages and a fallback to a 64-byte
pitch:

 [drm:hubp1_cursor_set_attributes [amdgpu]] *ERROR* Invalid cursor pitch of 100. Only 64/128/256 is supported on DCN.

Signed-off-by: Simon Ser 
Reported-by: Pierre-Loup A. Griffais 
Cc: Alex Deucher 
Cc: Harry Wentland 
Cc: Nicholas Kazlauskas 


Reviewed-by: Nicholas Kazlauskas 

But with some comments below:


---

Couple questions:

- This implements a single check for all GPU generations. Is my
   assumption correct here? It seems like this check is OK for at least
   DCN 1.0 and DCN 2.0.
- We should really implement better checks. What features are supported
   on the cursor plane? Is scaling supported? Is cropping supported? Is
   rotation always supported?


On DCE and DCN there is no dedicated hardware cursor plane. You get a 
cursor per pipe but it's going to inherit the scaling and positioning 
from the underlying pipe.


There's software logic to ensure we position the cursor in the correct 
location in CRTC space independent of the underlying DRM plane's scaling 
and positioning, but there's no way for us to correct the scaling. The 
cursor will always be 64, 128, or 256 pixels in the pipe's destination space.


Cursor can be independently rotated in hardware but this isn't something 
we expose support for to userspace.


The pitch check of 64/128/256 is OK but we don't support 256 on DCE.

Regards,
Nicholas Kazlauskas



  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 14 ++
  1 file changed, 14 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 2855bb918535..42b0ade7de39 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -8902,6 +8902,20 @@ static int dm_update_plane_state(struct dc *dc,
return -EINVAL;
}
  
+		if (new_plane_state->fb) {

+   switch (new_plane_state->fb->pitches[0]) {
+   case 64:
+   case 128:
+   case 256:
+   /* Pitch is supported by cursor plane */
+   break;
+   default:
+			DRM_DEBUG_ATOMIC("Bad cursor pitch %d\n",
+					 new_plane_state->fb->pitches[0]);
+   return -EINVAL;
+   }
+   }
+
return 0;
}
  





Re: [PATCH] drm/amdgpu/display: FP fixes for DCN3.x

2020-11-03 Thread Kazlauskas, Nicholas

On 2020-11-02 5:28 p.m., Alex Deucher wrote:

Add proper FP_START/END handling and adjust Makefiles per
previous asics.

Signed-off-by: Alex Deucher 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  .../gpu/drm/amd/display/dc/clk_mgr/Makefile   | 13 
  .../drm/amd/display/dc/dcn30/dcn30_resource.c | 71 +--
  drivers/gpu/drm/amd/display/dc/dml/Makefile   |  6 +-
  3 files changed, 84 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
index facc8b970300..9f9137562cab 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
@@ -127,6 +127,19 @@ AMD_DISPLAY_FILES += $(AMD_DAL_CLK_MGR_DCN30)
  
###
  CLK_MGR_DCN301 = vg_clk_mgr.o dcn301_smu.o
  
+# prevent build errors regarding soft-float vs hard-float FP ABI tags

+# this code is currently unused on ppc64, as it applies to VanGogh APUs only
+ifdef CONFIG_PPC64
+CFLAGS_$(AMDDALPATH)/dc/clk_mgr/dcn301/vg_clk_mgr.o := $(call cc-option,-mno-gnu-attribute)
+endif
+
+# prevent build errors:
+# ...: '-mgeneral-regs-only' is incompatible with the use of floating-point types
+# this file is unused on arm64, just like on ppc64
+ifdef CONFIG_ARM64
+CFLAGS_REMOVE_$(AMDDALPATH)/dc/clk_mgr/dcn301/vg_clk_mgr.o := -mgeneral-regs-only
+endif
+
  AMD_DAL_CLK_MGR_DCN301 = $(addprefix 
$(AMDDALPATH)/dc/clk_mgr/dcn301/,$(CLK_MGR_DCN301))
  
  AMD_DISPLAY_FILES += $(AMD_DAL_CLK_MGR_DCN301)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
index d65496917e93..01ac8b2921c6 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
@@ -1469,7 +1469,19 @@ int dcn30_populate_dml_pipes_from_context(
return pipe_cnt;
  }
  
-void dcn30_populate_dml_writeback_from_context(

+/*
+ * This must be noinline to ensure anything that deals with FP registers
+ * is contained within this call; previously our compiling with hard-float
+ * would result in fp instructions being emitted outside of the boundaries
+ * of the DC_FP_START/END macros, which makes sense as the compiler has no
+ * idea about what is wrapped and what is not
+ *
+ * This is largely just a workaround to avoid breakage introduced with 5.6,
+ * ideally all fp-using code should be moved into its own file, only that
+ * should be compiled with hard-float, and all code exported from there
+ * should be strictly wrapped with DC_FP_START/END
+ */
+static noinline void dcn30_populate_dml_writeback_from_context_fp(
struct dc *dc, struct resource_context *res_ctx, 
display_e2e_pipe_params_st *pipes)
  {
int pipe_cnt, i, j;
@@ -1558,6 +1570,14 @@ void dcn30_populate_dml_writeback_from_context(
  
  }
  
+void dcn30_populate_dml_writeback_from_context(

+   struct dc *dc, struct resource_context *res_ctx, 
display_e2e_pipe_params_st *pipes)
+{
+   DC_FP_START();
+   dcn30_populate_dml_writeback_from_context_fp(dc, res_ctx, pipes);
+   DC_FP_END();
+}
+
  unsigned int dcn30_calc_max_scaled_time(
unsigned int time_per_pixel,
enum mmhubbub_wbif_mode mode,
@@ -2204,7 +2224,19 @@ static bool dcn30_internal_validate_bw(
return out;
  }
  
-void dcn30_calculate_wm_and_dlg(

+/*
+ * This must be noinline to ensure anything that deals with FP registers
+ * is contained within this call; previously our compiling with hard-float
+ * would result in fp instructions being emitted outside of the boundaries
+ * of the DC_FP_START/END macros, which makes sense as the compiler has no
+ * idea about what is wrapped and what is not
+ *
+ * This is largely just a workaround to avoid breakage introduced with 5.6,
+ * ideally all fp-using code should be moved into its own file, only that
+ * should be compiled with hard-float, and all code exported from there
+ * should be strictly wrapped with DC_FP_START/END
+ */
+static noinline void dcn30_calculate_wm_and_dlg_fp(
struct dc *dc, struct dc_state *context,
display_e2e_pipe_params_st *pipes,
int pipe_cnt,
@@ -2360,7 +2392,18 @@ void dcn30_calculate_wm_and_dlg(

dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us;
  }
  
-bool dcn30_validate_bandwidth(struct dc *dc,

+void dcn30_calculate_wm_and_dlg(
+   struct dc *dc, struct dc_state *context,
+   display_e2e_pipe_params_st *pipes,
+   int pipe_cnt,
+   int vlevel)
+{
+   DC_FP_START();
+   dcn30_calculate_wm_and_dlg_fp(dc, context, pipes, pipe_cnt, vlevel);
+   DC_FP_END();
+}
+
+static noinline bool dcn30_validate_bandwidth_fp(struct dc *dc,
struct dc_state *context,
 

Re: [PATCH] drm/amdgpu/display: fix warnings when CONFIG_DRM_AMD_DC_DCN is not set

2020-11-02 Thread Kazlauskas, Nicholas

On 2020-11-02 1:49 p.m., Alex Deucher wrote:

Ping?

Alex

On Tue, Oct 27, 2020 at 11:04 AM Alex Deucher  wrote:


Properly protect the relevant code with CONFIG_DRM_AMD_DC_DCN.

Fixes: 0b08c54bb7a3 ("drm/amd/display: Fix the display corruption issue on Navi10")
Signed-off-by: Alex Deucher 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 -
  1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index fdb1fa72061a..843080e4c39e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -893,6 +893,7 @@ static int dm_dmub_hw_init(struct amdgpu_device *adev)
 return 0;
  }

+#if defined(CONFIG_DRM_AMD_DC_DCN)
  static void mmhub_read_system_context(struct amdgpu_device *adev, struct 
dc_phy_addr_space_config *pa_config)
  {
 uint64_t pt_base;
@@ -945,6 +946,7 @@ static void mmhub_read_system_context(struct amdgpu_device 
*adev, struct dc_phy_
 pa_config->is_hvm_enabled = 0;

  }
+#endif

  static int amdgpu_dm_init(struct amdgpu_device *adev)
  {
@@ -952,7 +954,6 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
  #ifdef CONFIG_DRM_AMD_DC_HDCP
 struct dc_callback_init init_params;
  #endif
-   struct dc_phy_addr_space_config pa_config;
 int r;

 adev->dm.ddev = adev_to_drm(adev);
@@ -1060,6 +1061,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)

  #if defined(CONFIG_DRM_AMD_DC_DCN)
 if (adev->asic_type == CHIP_RENOIR) {
+   struct dc_phy_addr_space_config pa_config;
+
 mmhub_read_system_context(adev, &pa_config);

 // Call the DC init_memory func
--
2.25.4




Re: [PATCH v2] drm/amd/display: Tune min clk values for MPO for RV

2020-10-30 Thread Kazlauskas, Nicholas

On 2020-10-30 2:55 a.m., Pratik Vishwakarma wrote:

[Why]
Incorrect values were resulting in flash lines
when MPO was enabled and system was left idle.

[How]
Increase min clk values only when MPO is enabled
and display is active to not affect S3 power.

Signed-off-by: Pratik Vishwakarma 
Reviewed-by: Nicholas Kazlauskas 


Feel free to merge the patch. V2 still looks OK to me.

Regards,
Nicholas Kazlauskas


---
  .../display/dc/clk_mgr/dcn10/rv1_clk_mgr.c| 30 +--
  1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
index e133edc587d3..75b8240ed059 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
@@ -187,6 +187,17 @@ static void ramp_up_dispclk_with_dpp(
clk_mgr->base.clks.max_supported_dppclk_khz = 
new_clocks->max_supported_dppclk_khz;
  }
  
+static bool is_mpo_enabled(struct dc_state *context)

+{
+   int i;
+
+   for (i = 0; i < context->stream_count; i++) {
+   if (context->stream_status[i].plane_count > 1)
+   return true;
+   }
+   return false;
+}
+
  static void rv1_update_clocks(struct clk_mgr *clk_mgr_base,
struct dc_state *context,
bool safe_to_lower)
@@ -284,9 +295,22 @@ static void rv1_update_clocks(struct clk_mgr *clk_mgr_base,
if (pp_smu->set_hard_min_fclk_by_freq &&
pp_smu->set_hard_min_dcfclk_by_freq &&
pp_smu->set_min_deep_sleep_dcfclk) {
-		pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu, new_clocks->fclk_khz / 1000);
-		pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu, new_clocks->dcfclk_khz / 1000);
-		pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu, (new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		// Only increase clocks when display is active and MPO is enabled
+		if (display_count && is_mpo_enabled(context)) {
+			pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu,
+					((new_clocks->fclk_khz / 1000) * 101) / 100);
+			pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu,
+					((new_clocks->dcfclk_khz / 1000) * 101) / 100);
+			pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu,
+					(new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		} else {
+			pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu,
+					new_clocks->fclk_khz / 1000);
+			pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu,
+					new_clocks->dcfclk_khz / 1000);
+			pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu,
+					(new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		}
}
}
  }





Re: [PATCH] drm/amd/display: Tune min clk values for MPO for RV

2020-10-29 Thread Kazlauskas, Nicholas

On 2020-10-29 12:31 a.m., Pratik Vishwakarma wrote:

[Why]
Incorrect values were resulting in flash lines
when MPO was enabled and system was left idle.

[How]
Increase min clk values only when MPO is enabled
and display is active to not affect S3 power.

Signed-off-by: Pratik Vishwakarma 


Fine for now, but I'd like to understand what DCFCLK state we're in 
after this change.


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  .../display/dc/clk_mgr/dcn10/rv1_clk_mgr.c| 35 +--
  1 file changed, 32 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
index e133edc587d3..c388a003956b 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
@@ -187,6 +187,22 @@ static void ramp_up_dispclk_with_dpp(
clk_mgr->base.clks.max_supported_dppclk_khz = 
new_clocks->max_supported_dppclk_khz;
  }
  
+static bool is_mpo_enabled(struct dc_state *context)

+{
+   int i;
+
+   for (i = 0; i < context->stream_count; i++) {
+   if (context->stream_status[i].plane_count > 1) {
+   /*
+* No need to check for all streams as MPO
+* can be enabled on single stream only for RV.
+*/


Nitpick: We can return early here regardless since DCFCLK isn't per pipe.


+   return true;
+   }
+   }
+   return false;
+}
+
  static void rv1_update_clocks(struct clk_mgr *clk_mgr_base,
struct dc_state *context,
bool safe_to_lower)
@@ -284,9 +300,22 @@ static void rv1_update_clocks(struct clk_mgr *clk_mgr_base,
if (pp_smu->set_hard_min_fclk_by_freq &&
pp_smu->set_hard_min_dcfclk_by_freq &&
pp_smu->set_min_deep_sleep_dcfclk) {
-		pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu, new_clocks->fclk_khz / 1000);
-		pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu, new_clocks->dcfclk_khz / 1000);
-		pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu, (new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		// Only increase clocks when display is active and MPO is enabled
+		if (display_count && is_mpo_enabled(context)) {
+			pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu,
+					((new_clocks->fclk_khz / 1000) * 101) / 100);
+			pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu,
+					((new_clocks->dcfclk_khz / 1000) * 101) / 100);
+			pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu,
+					(new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		} else {
+			pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu,
+					new_clocks->fclk_khz / 1000);
+			pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu,
+					new_clocks->dcfclk_khz / 1000);
+			pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu,
+					(new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		}
}
}
  }





Re: [PATCH 0/3] drm/amd/display: Fix kernel panic by breakpoint

2020-10-26 Thread Kazlauskas, Nicholas

Reviewed-by: Nicholas Kazlauskas 

Looks fine to me. Feel free to apply.

Regards,
Nicholas Kazlauskas

On 2020-10-26 3:34 p.m., Alex Deucher wrote:

Yes, looks good to me as well.  Series is:
Acked-by: Alex Deucher 
I'll give the display guys a few more days to look this over, but if
there are no objections, I'll apply them.

Thanks!

Alex

On Fri, Oct 23, 2020 at 7:16 PM Luben Tuikov  wrote:


On 2020-10-23 03:46, Takashi Iwai wrote:

Hi,

the amdgpu driver's ASSERT_CRITICAL() macro calls the
kgdb_breakpoint() even if no debug option is set, and this leads to a
kernel panic on distro kernels.  The first two patches are the
oneliner fixes for those, while the last one is the cleanup of those
debug macros.


This looks like good work and solid. Hopefully it gets picked up.

Regards,
Luben




Takashi

===

Takashi Iwai (3):
   drm/amd/display: Fix kernel panic by dal_gpio_open() error
   drm/amd/display: Don't invoke kgdb_breakpoint() unconditionally
   drm/amd/display: Clean up debug macros

  drivers/gpu/drm/amd/display/Kconfig |  1 +
  drivers/gpu/drm/amd/display/dc/gpio/gpio_base.c |  4 +--
  drivers/gpu/drm/amd/display/dc/os_types.h   | 33 +
  3 files changed, 15 insertions(+), 23 deletions(-)





Re: [PATCH] drm/amdgpu/display: use kvzalloc again in dc_create_state

2020-10-26 Thread Kazlauskas, Nicholas

On 2020-10-26 10:30 a.m., Alex Deucher wrote:

It looks like this was accidentally lost in a follow-up patch.
dc context is large and we don't need contiguous pages.

Fixes: e4863f118a7d ("drm/amd/display: Multi display cause system lag on mode change")
Signed-off-by: Alex Deucher 
Cc: Aric Cyr 
Cc: Alex Xu 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dc/core/dc.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c 
b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 7ff029143722..64da60450fb0 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -1564,8 +1564,8 @@ static void init_state(struct dc *dc, struct dc_state 
*context)
  
  struct dc_state *dc_create_state(struct dc *dc)

  {
-   struct dc_state *context = kzalloc(sizeof(struct dc_state),
-  GFP_KERNEL);
+   struct dc_state *context = kvzalloc(sizeof(struct dc_state),
+   GFP_KERNEL);
  
  	if (!context)

return NULL;





Re: [PATCH v3 07/11] drm/amd/display: Refactor surface tiling setup.

2020-10-26 Thread Kazlauskas, Nicholas

On 2020-10-21 7:31 p.m., Bas Nieuwenhuizen wrote:

Prepare for inserting modifier-based configuration, while sharing
a bunch of DCC validation & initializing the device-based configuration.

Signed-off-by: Bas Nieuwenhuizen 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 222 ++
  1 file changed, 119 insertions(+), 103 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 5a0efaefbd7f..479c886816d9 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3839,46 +3839,86 @@ static int fill_dc_scaling_info(const struct 
drm_plane_state *state,
return 0;
  }
  
-static inline uint64_t get_dcc_address(uint64_t address, uint64_t tiling_flags)

+static void
+fill_gfx8_tiling_info_from_flags(union dc_tiling_info *tiling_info,
+uint64_t tiling_flags)
  {
-   uint32_t offset = AMDGPU_TILING_GET(tiling_flags, DCC_OFFSET_256B);
+   /* Fill GFX8 params */
+   if (AMDGPU_TILING_GET(tiling_flags, ARRAY_MODE) == 
DC_ARRAY_2D_TILED_THIN1) {
+   unsigned int bankw, bankh, mtaspect, tile_split, num_banks;
+
+   bankw = AMDGPU_TILING_GET(tiling_flags, BANK_WIDTH);
+   bankh = AMDGPU_TILING_GET(tiling_flags, BANK_HEIGHT);
+   mtaspect = AMDGPU_TILING_GET(tiling_flags, MACRO_TILE_ASPECT);
+   tile_split = AMDGPU_TILING_GET(tiling_flags, TILE_SPLIT);
+   num_banks = AMDGPU_TILING_GET(tiling_flags, NUM_BANKS);
+
+   /* XXX fix me for VI */
+   tiling_info->gfx8.num_banks = num_banks;
+   tiling_info->gfx8.array_mode =
+   DC_ARRAY_2D_TILED_THIN1;
+   tiling_info->gfx8.tile_split = tile_split;
+   tiling_info->gfx8.bank_width = bankw;
+   tiling_info->gfx8.bank_height = bankh;
+   tiling_info->gfx8.tile_aspect = mtaspect;
+   tiling_info->gfx8.tile_mode =
+   DC_ADDR_SURF_MICRO_TILING_DISPLAY;
+   } else if (AMDGPU_TILING_GET(tiling_flags, ARRAY_MODE)
+   == DC_ARRAY_1D_TILED_THIN1) {
+   tiling_info->gfx8.array_mode = DC_ARRAY_1D_TILED_THIN1;
+   }
  
-	return offset ? (address + offset * 256) : 0;

+   tiling_info->gfx8.pipe_config =
+   AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG);
+}
+
+static void
+fill_gfx9_tiling_info_from_device(const struct amdgpu_device *adev,
+ union dc_tiling_info *tiling_info)
+{
+   tiling_info->gfx9.num_pipes =
+   adev->gfx.config.gb_addr_config_fields.num_pipes;
+   tiling_info->gfx9.num_banks =
+   adev->gfx.config.gb_addr_config_fields.num_banks;
+   tiling_info->gfx9.pipe_interleave =
+   adev->gfx.config.gb_addr_config_fields.pipe_interleave_size;
+   tiling_info->gfx9.num_shader_engines =
+   adev->gfx.config.gb_addr_config_fields.num_se;
+   tiling_info->gfx9.max_compressed_frags =
+   adev->gfx.config.gb_addr_config_fields.max_compress_frags;
+   tiling_info->gfx9.num_rb_per_se =
+   adev->gfx.config.gb_addr_config_fields.num_rb_per_se;
+   tiling_info->gfx9.shaderEnable = 1;
+#ifdef CONFIG_DRM_AMD_DC_DCN3_0
+   if (adev->asic_type == CHIP_SIENNA_CICHLID ||
+   adev->asic_type == CHIP_NAVY_FLOUNDER ||
+   adev->asic_type == CHIP_DIMGREY_CAVEFISH ||
+   adev->asic_type == CHIP_VANGOGH)
+		tiling_info->gfx9.num_pkrs = adev->gfx.config.gb_addr_config_fields.num_pkrs;
+#endif
  }
  
  static int

-fill_plane_dcc_attributes(struct amdgpu_device *adev,
- const struct amdgpu_framebuffer *afb,
- const enum surface_pixel_format format,
- const enum dc_rotation_angle rotation,
- const struct plane_size *plane_size,
- const union dc_tiling_info *tiling_info,
- const uint64_t info,
- struct dc_plane_dcc_param *dcc,
- struct dc_plane_address *address,
- bool force_disable_dcc)
+validate_dcc(struct amdgpu_device *adev,
+const enum surface_pixel_format format,
+const enum dc_rotation_angle rotation,
+const union dc_tiling_info *tiling_info,
+const struct dc_plane_dcc_param *dcc,
+const struct dc_plane_address *address,
+const struct plane_size *plane_size)
  {
struct dc *dc = adev->dm.dc;
struct dc_dcc_surface_param input;
struct dc_surface_dcc_cap output;
-   uint64_t plane_address = afb->address + afb->base.offsets[0];
-   uint32_t offset = AMDGPU_

Re: [PATCH v3 05/11] drm/amd/display: Store tiling_flags in the framebuffer.

2020-10-26 Thread Kazlauskas, Nicholas

On 2020-10-21 7:31 p.m., Bas Nieuwenhuizen wrote:

This moves the tiling_flags to the framebuffer creation.
This way the time of the "tiling" decision is the same as it
would be with modifiers.

Signed-off-by: Bas Nieuwenhuizen 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   | 48 +++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h  |  3 +
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 73 +++
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h |  2 -
  4 files changed, 59 insertions(+), 67 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index 9e92d2a070ac..1a2664c3fc26 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -541,6 +541,39 @@ uint32_t amdgpu_display_supported_domains(struct 
amdgpu_device *adev,
return domain;
  }
  
+static int amdgpu_display_get_fb_info(const struct amdgpu_framebuffer *amdgpu_fb,

+ uint64_t *tiling_flags, bool *tmz_surface)
+{
+   struct amdgpu_bo *rbo;
+   int r;
+
+   if (!amdgpu_fb) {
+   *tiling_flags = 0;
+   *tmz_surface = false;
+   return 0;
+   }
+
+   rbo = gem_to_amdgpu_bo(amdgpu_fb->base.obj[0]);
+   r = amdgpu_bo_reserve(rbo, false);
+
+   if (unlikely(r)) {
+   /* Don't show error message when returning -ERESTARTSYS */
+   if (r != -ERESTARTSYS)
+   DRM_ERROR("Unable to reserve buffer: %d\n", r);
+   return r;
+   }
+
+   if (tiling_flags)
+   amdgpu_bo_get_tiling_flags(rbo, tiling_flags);
+
+   if (tmz_surface)
+   *tmz_surface = amdgpu_bo_encrypted(rbo);
+
+   amdgpu_bo_unreserve(rbo);
+
+   return r;
+}
+
  int amdgpu_display_framebuffer_init(struct drm_device *dev,
struct amdgpu_framebuffer *rfb,
const struct drm_mode_fb_cmd2 *mode_cmd,
@@ -550,11 +583,18 @@ int amdgpu_display_framebuffer_init(struct drm_device 
*dev,
rfb->base.obj[0] = obj;
drm_helper_mode_fill_fb_struct(dev, &rfb->base, mode_cmd);
ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
-   if (ret) {
-   rfb->base.obj[0] = NULL;
-   return ret;
-   }
+   if (ret)
+   goto fail;
+
+	ret = amdgpu_display_get_fb_info(rfb, &rfb->tiling_flags, &rfb->tmz_surface);
+   if (ret)
+   goto fail;
+
return 0;
+
+fail:
+   rfb->base.obj[0] = NULL;
+   return ret;
  }
  
  struct drm_framebuffer *

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
index 345cb0464370..39866ed81c16 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
@@ -304,6 +304,9 @@ struct amdgpu_display_funcs {
  struct amdgpu_framebuffer {
struct drm_framebuffer base;
  
+	uint64_t tiling_flags;

+   bool tmz_surface;
+
/* caching for later use */
uint64_t address;
  };
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 833887b9b0ad..5a0efaefbd7f 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3839,39 +3839,6 @@ static int fill_dc_scaling_info(const struct 
drm_plane_state *state,
return 0;
  }
  
-static int get_fb_info(const struct amdgpu_framebuffer *amdgpu_fb,

-  uint64_t *tiling_flags, bool *tmz_surface)
-{
-   struct amdgpu_bo *rbo;
-   int r;
-
-   if (!amdgpu_fb) {
-   *tiling_flags = 0;
-   *tmz_surface = false;
-   return 0;
-   }
-
-   rbo = gem_to_amdgpu_bo(amdgpu_fb->base.obj[0]);
-   r = amdgpu_bo_reserve(rbo, false);
-
-   if (unlikely(r)) {
-   /* Don't show error message when returning -ERESTARTSYS */
-   if (r != -ERESTARTSYS)
-   DRM_ERROR("Unable to reserve buffer: %d\n", r);
-   return r;
-   }
-
-   if (tiling_flags)
-   amdgpu_bo_get_tiling_flags(rbo, tiling_flags);
-
-   if (tmz_surface)
-   *tmz_surface = amdgpu_bo_encrypted(rbo);
-
-   amdgpu_bo_unreserve(rbo);
-
-   return r;
-}
-
  static inline uint64_t get_dcc_address(uint64_t address, uint64_t 
tiling_flags)
  {
uint32_t offset = AMDGPU_TILING_GET(tiling_flags, DCC_OFFSET_256B);
@@ -4287,7 +4254,7 @@ static int fill_dc_plane_attributes(struct amdgpu_device 
*adev,
struct drm_crtc_state *crtc_state)
  {
struct dm_crtc_state *dm_crtc_state = to_dm_crtc_state(crtc_state);
-   struct dm_plane_state *dm_plane_state = to_dm_plane_state(pl

Re: [PATCH v3 03/11] drm/amd/display: Honor the offset for plane 0.

2020-10-26 Thread Kazlauskas, Nicholas

On 2020-10-21 7:31 p.m., Bas Nieuwenhuizen wrote:

With modifiers I'd like to support non-dedicated buffers for
images.

Signed-off-by: Bas Nieuwenhuizen 
Cc: sta...@vger.kernel.org # 5.1.0


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 14 +-
  1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 73987fdb6a09..833887b9b0ad 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3894,6 +3894,7 @@ fill_plane_dcc_attributes(struct amdgpu_device *adev,
struct dc *dc = adev->dm.dc;
struct dc_dcc_surface_param input;
struct dc_surface_dcc_cap output;
+   uint64_t plane_address = afb->address + afb->base.offsets[0];
uint32_t offset = AMDGPU_TILING_GET(info, DCC_OFFSET_256B);
uint32_t i64b = AMDGPU_TILING_GET(info, DCC_INDEPENDENT_64B) != 0;
uint64_t dcc_address;
@@ -3937,7 +3938,7 @@ fill_plane_dcc_attributes(struct amdgpu_device *adev,
AMDGPU_TILING_GET(info, DCC_PITCH_MAX) + 1;
dcc->independent_64b_blks = i64b;
  
-	dcc_address = get_dcc_address(afb->address, info);

+   dcc_address = get_dcc_address(plane_address, info);
address->grph.meta_addr.low_part = lower_32_bits(dcc_address);
address->grph.meta_addr.high_part = upper_32_bits(dcc_address);
  
@@ -3968,6 +3969,8 @@ fill_plane_buffer_attributes(struct amdgpu_device *adev,

address->tmz_surface = tmz_surface;
  
  	if (format < SURFACE_PIXEL_FORMAT_VIDEO_BEGIN) {

+   uint64_t addr = afb->address + fb->offsets[0];
+
plane_size->surface_size.x = 0;
plane_size->surface_size.y = 0;
plane_size->surface_size.width = fb->width;
@@ -3976,9 +3979,10 @@ fill_plane_buffer_attributes(struct amdgpu_device *adev,
fb->pitches[0] / fb->format->cpp[0];
  
  		address->type = PLN_ADDR_TYPE_GRAPHICS;

-   address->grph.addr.low_part = lower_32_bits(afb->address);
-   address->grph.addr.high_part = upper_32_bits(afb->address);
+   address->grph.addr.low_part = lower_32_bits(addr);
+   address->grph.addr.high_part = upper_32_bits(addr);
} else if (format < SURFACE_PIXEL_FORMAT_INVALID) {
+   uint64_t luma_addr = afb->address + fb->offsets[0];
uint64_t chroma_addr = afb->address + fb->offsets[1];
  
  		plane_size->surface_size.x = 0;

@@ -3999,9 +4003,9 @@ fill_plane_buffer_attributes(struct amdgpu_device *adev,
  
  		address->type = PLN_ADDR_TYPE_VIDEO_PROGRESSIVE;

address->video_progressive.luma_addr.low_part =
-   lower_32_bits(afb->address);
+   lower_32_bits(luma_addr);
address->video_progressive.luma_addr.high_part =
-   upper_32_bits(afb->address);
+   upper_32_bits(luma_addr);
address->video_progressive.chroma_addr.low_part =
lower_32_bits(chroma_addr);
address->video_progressive.chroma_addr.high_part =





Re: [PATCH v3 02/11] drm/amd: Init modifier field of helper fb.

2020-10-26 Thread Kazlauskas, Nicholas

On 2020-10-21 7:31 p.m., Bas Nieuwenhuizen wrote:

Otherwise the field ends up being used uninitialized when
enabling modifiers, failing validation with high likelihood.

Signed-off-by: Bas Nieuwenhuizen 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
index e2c2eb45a793..77dd2a189746 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
@@ -201,7 +201,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
struct amdgpu_device *adev = rfbdev->adev;
struct fb_info *info;
struct drm_framebuffer *fb = NULL;
-   struct drm_mode_fb_cmd2 mode_cmd;
+   struct drm_mode_fb_cmd2 mode_cmd = {0};


I think we should prefer a memset in this case. I always forget which 
compilers complain about this syntax but I know we've had to swap out a 
bunch of zero initializers to satisfy them.


Regards,
Nicholas Kazlauskas


struct drm_gem_object *gobj = NULL;
struct amdgpu_bo *abo = NULL;
int ret;





Re: [PATCH v3 01/11] drm/amd/display: Do not silently accept DCC for multiplane formats.

2020-10-26 Thread Kazlauskas, Nicholas

On 2020-10-21 7:31 p.m., Bas Nieuwenhuizen wrote:

Silently accepting it could result in corruption.

Signed-off-by: Bas Nieuwenhuizen 


Reviewed-by: Nicholas Kazlauskas 


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 2713caac4f2a..73987fdb6a09 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3908,7 +3908,7 @@ fill_plane_dcc_attributes(struct amdgpu_device *adev,
return 0;
  
  	if (format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN)

-   return 0;
+   return -EINVAL;
  
  	if (!dc->cap_funcs.get_dcc_compression_cap)

return -EINVAL;





Re: [PATCH 2/2] drm/amd/display: Avoid MST manager resource leak.

2020-10-16 Thread Kazlauskas, Nicholas

On 2020-10-15 11:02 p.m., Alex Deucher wrote:

On Wed, Oct 14, 2020 at 1:25 PM Andrey Grodzovsky
 wrote:


On connector destruction call drm_dp_mst_topology_mgr_destroy
to release resources allocated in drm_dp_mst_topology_mgr_init.
Do it only if the MST manager was initialized before, otherwise a crash
is seen on driver unload/device unplug.



Not really an mst expert, but this seems to match what i915 and
nouveau do.  Series is:
Acked-by: Alex Deucher 


Signed-off-by: Andrey Grodzovsky 


Looks reasonable to me. Untested, however.

Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 +++
  1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index a72447d..64799c4 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5170,6 +5170,13 @@ static void amdgpu_dm_connector_destroy(struct 
drm_connector *connector)
 struct amdgpu_device *adev = drm_to_adev(connector->dev);
 struct amdgpu_display_manager *dm = &adev->dm;

+   /*
+* Call only if mst_mgr was initialized before since it's not done
+* for all connector types.
+*/
+   if (aconnector->mst_mgr.dev)
+   drm_dp_mst_topology_mgr_destroy(&aconnector->mst_mgr);
+
  #if defined(CONFIG_BACKLIGHT_CLASS_DEVICE) ||\
 defined(CONFIG_BACKLIGHT_CLASS_DEVICE_MODULE)

--
2.7.4



Re: [PATCH 1/2] drm/amd/display: setup system context in dm_init

2020-10-14 Thread Kazlauskas, Nicholas

On 2020-10-14 9:20 a.m., Kazlauskas, Nicholas wrote:

On 2020-10-14 3:04 a.m., Yifan Zhang wrote:

Change-Id: I831a5ade8b87c23d21a63d08cc4d338468769e2b
Signed-off-by: Yifan Zhang 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 61 +++
  1 file changed, 61 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c

index 3cf4e08931bb..aaff8800c7a0 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -887,12 +887,67 @@ static void 
amdgpu_check_debugfs_connector_property_change(struct amdgpu_device

  }
  }
+static void mmhub_read_system_context(struct amdgpu_device *adev, 
struct dc_phy_addr_space_config *pa_config)

+{
+    uint64_t pt_base;
+    uint32_t logical_addr_low;
+    uint32_t logical_addr_high;
+    uint32_t agp_base, agp_bot, agp_top;
+    PHYSICAL_ADDRESS_LOC page_table_start, page_table_end, 
page_table_base;

+
+    logical_addr_low  = min(adev->gmc.fb_start, adev->gmc.agp_start) 
>> 18;

+    pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);
+
+    if (adev->apu_flags & AMD_APU_IS_RAVEN2)
+    /*
+     * Raven2 has a HW issue that prevents it from using vram that is
+     * out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the
+     * workaround that increases the system aperture high address (add 1)
+     * to get rid of the VM fault and hardware hang.
+     */
+    logical_addr_high = max((adev->gmc.fb_end >> 18) + 0x1, adev->gmc.agp_end >> 18);
+    else
+    logical_addr_high = max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18;

+
+    agp_base = 0;
+    agp_bot = adev->gmc.agp_start >> 24;
+    agp_top = adev->gmc.agp_end >> 24;
+
+
+    page_table_start.high_part = (u32)(adev->gmc.gart_start >> 44) & 
0xF;

+    page_table_start.low_part = (u32)(adev->gmc.gart_start >> 12);
+    page_table_end.high_part = (u32)(adev->gmc.gart_end >> 44) & 0xF;
+    page_table_end.low_part = (u32)(adev->gmc.gart_end >> 12);
+    page_table_base.high_part = upper_32_bits(pt_base) & 0xF;
+    page_table_base.low_part = lower_32_bits(pt_base);
+
+    pa_config->system_aperture.start_addr = 
(uint64_t)logical_addr_low << 18;
+    pa_config->system_aperture.end_addr = (uint64_t)logical_addr_high 
<< 18;

+
+    pa_config->system_aperture.agp_base = (uint64_t)agp_base << 24 ;
+    pa_config->system_aperture.agp_bot = (uint64_t)agp_bot << 24;
+    pa_config->system_aperture.agp_top = (uint64_t)agp_top << 24;
+
+    pa_config->system_aperture.fb_base = adev->gmc.fb_start;
+    pa_config->system_aperture.fb_offset = adev->gmc.aper_base;
+    pa_config->system_aperture.fb_top = adev->gmc.fb_end;
+
+    pa_config->gart_config.page_table_start_addr = 
page_table_start.quad_part << 12;
+    pa_config->gart_config.page_table_end_addr = 
page_table_end.quad_part << 12;
+    pa_config->gart_config.page_table_base_addr = 
page_table_base.quad_part;

+
+    pa_config->is_hvm_enabled = 0;
+
+}
+
+
  static int amdgpu_dm_init(struct amdgpu_device *adev)
  {
  struct dc_init_data init_data;
  #ifdef CONFIG_DRM_AMD_DC_HDCP
  struct dc_callback_init init_params;
  #endif
+    struct dc_phy_addr_space_config pa_config;
  int r;
  adev->dm.ddev = adev_to_drm(adev);
@@ -1040,6 +1095,12 @@ static int amdgpu_dm_init(struct amdgpu_device 
*adev)

  goto error;
  }
+    mmhub_read_system_context(adev, &pa_config);
+
+    // Call the DC init_memory func
+    dc_setup_system_context(adev->dm.dc, &pa_config);
+
+


The dc_setup_system_context should come directly after dc_hardware_init().

With that fixed this series is

Reviewed-by: Nicholas Kazlauskas 

There's the vmid module as well that could be created after if needed 
but for s/g support alone that's not necessary.


Regards,
Nicholas Kazlauskas


Actually, the commit messages could use some work too - would be good to 
have at least a brief why/how description.


Don't forget to drop the Change-Id as well.

Regards,
Nicholas Kazlauskas




  DRM_DEBUG_DRIVER("KMS initialized.\n");
  return 0;





Re: [PATCH 1/2] drm/amd/display: setup system context in dm_init

2020-10-14 Thread Kazlauskas, Nicholas

On 2020-10-14 3:04 a.m., Yifan Zhang wrote:

Change-Id: I831a5ade8b87c23d21a63d08cc4d338468769e2b
Signed-off-by: Yifan Zhang 
---
  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 61 +++
  1 file changed, 61 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 3cf4e08931bb..aaff8800c7a0 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -887,12 +887,67 @@ static void 
amdgpu_check_debugfs_connector_property_change(struct amdgpu_device
}
  }
  
+static void mmhub_read_system_context(struct amdgpu_device *adev, struct dc_phy_addr_space_config *pa_config)

+{
+   uint64_t pt_base;
+   uint32_t logical_addr_low;
+   uint32_t logical_addr_high;
+   uint32_t agp_base, agp_bot, agp_top;
+   PHYSICAL_ADDRESS_LOC page_table_start, page_table_end, page_table_base;
+
+   logical_addr_low  = min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18;
+   pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);
+
+   if (adev->apu_flags & AMD_APU_IS_RAVEN2)
+   /*
+    * Raven2 has a HW issue that prevents it from using vram that is
+    * out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the
+    * workaround that increases the system aperture high address (add 1)
+    * to get rid of the VM fault and hardware hang.
+    */
+   logical_addr_high = max((adev->gmc.fb_end >> 18) + 0x1, adev->gmc.agp_end >> 18);
+   else
+   logical_addr_high = max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18;
+
+   agp_base = 0;
+   agp_bot = adev->gmc.agp_start >> 24;
+   agp_top = adev->gmc.agp_end >> 24;
+
+
+   page_table_start.high_part = (u32)(adev->gmc.gart_start >> 44) & 0xF;
+   page_table_start.low_part = (u32)(adev->gmc.gart_start >> 12);
+   page_table_end.high_part = (u32)(adev->gmc.gart_end >> 44) & 0xF;
+   page_table_end.low_part = (u32)(adev->gmc.gart_end >> 12);
+   page_table_base.high_part = upper_32_bits(pt_base) & 0xF;
+   page_table_base.low_part = lower_32_bits(pt_base);
+
+   pa_config->system_aperture.start_addr = (uint64_t)logical_addr_low << 
18;
+   pa_config->system_aperture.end_addr = (uint64_t)logical_addr_high << 18;
+
+   pa_config->system_aperture.agp_base = (uint64_t)agp_base << 24 ;
+   pa_config->system_aperture.agp_bot = (uint64_t)agp_bot << 24;
+   pa_config->system_aperture.agp_top = (uint64_t)agp_top << 24;
+
+   pa_config->system_aperture.fb_base = adev->gmc.fb_start;
+   pa_config->system_aperture.fb_offset = adev->gmc.aper_base;
+   pa_config->system_aperture.fb_top = adev->gmc.fb_end;
+
+   pa_config->gart_config.page_table_start_addr = page_table_start.quad_part 
<< 12;
+   pa_config->gart_config.page_table_end_addr = page_table_end.quad_part 
<< 12;
+   pa_config->gart_config.page_table_base_addr = page_table_base.quad_part;
+
+   pa_config->is_hvm_enabled = 0;
+
+}
+
+
  static int amdgpu_dm_init(struct amdgpu_device *adev)
  {
struct dc_init_data init_data;
  #ifdef CONFIG_DRM_AMD_DC_HDCP
struct dc_callback_init init_params;
  #endif
+   struct dc_phy_addr_space_config pa_config;
int r;
  
  	adev->dm.ddev = adev_to_drm(adev);

@@ -1040,6 +1095,12 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
goto error;
}
  
+	mmhub_read_system_context(adev, &pa_config);

+
+   // Call the DC init_memory func
+   dc_setup_system_context(adev->dm.dc, &pa_config);
+
+


The dc_setup_system_context should come directly after dc_hardware_init().

With that fixed this series is

Reviewed-by: Nicholas Kazlauskas 

There's the vmid module as well that could be created after if needed 
but for s/g support alone that's not necessary.


Regards,
Nicholas Kazlauskas


DRM_DEBUG_DRIVER("KMS initialized.\n");
  
  	return 0;






Re: [PATCH] drm/amd/display: Add missing function pointers for dcn3

2020-10-05 Thread Kazlauskas, Nicholas

On 2020-10-05 2:10 p.m., Bhawanpreet Lakha wrote:

These function pointers are missing from dcn30_init

.calc_vupdate_position
.set_pipe

So add them

Signed-off-by: Bhawanpreet Lakha 


Reviewed-by: Nicholas Kazlauskas 

Would be good to mention what these are used for specifically though.

The calc_vupdate_position in particular is used to help avoid cursor 
stuttering.


Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dc/dcn30/dcn30_init.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_init.c 
b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_init.c
index 7c90c506..dc312d4172af 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_init.c
@@ -90,9 +90,11 @@ static const struct hw_sequencer_funcs dcn30_funcs = {
.init_vm_ctx = dcn20_init_vm_ctx,
.set_flip_control_gsl = dcn20_set_flip_control_gsl,
.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
+   .calc_vupdate_position = dcn10_calc_vupdate_position,
.apply_idle_power_optimizations = dcn30_apply_idle_power_optimizations,
.set_backlight_level = dcn21_set_backlight_level,
.set_abm_immediate_disable = dcn21_set_abm_immediate_disable,
+   .set_pipe = dcn21_set_pipe,
  };
  
  static const struct hwseq_private_funcs dcn30_private_funcs = {






Re: [PATCH] drm/amd/display: Fix external display detection with overlay

2020-10-01 Thread Kazlauskas, Nicholas

On 2020-10-01 5:06 a.m., Pratik Vishwakarma wrote:

[Why]
When the overlay plane is in use and an external display
is connected, the atomic check will fail.

[How]
Disable overlay plane on multi-monitor scenario
by tying it to single crtc.

Signed-off-by: Pratik Vishwakarma 


This will break overlay usage on any CRTC other than index 1. That 
index is arbitrary and can vary based on board configuration. As-is this 
patch breaks a number of our existing IGT tests that were previously 
passing.


Userspace should really be made aware, if possible, that overlays 
can't be left enabled across major hardware configuration changes, e.g. 
enabling extra planes and CRTCs.


Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index e8177656e083..e45c1176048a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3149,7 +3149,7 @@ static int initialize_plane(struct amdgpu_display_manager 
*dm,
 */
possible_crtcs = 1 << plane_id;
if (plane_id >= dm->dc->caps.max_streams)
-   possible_crtcs = 0xff;
+   possible_crtcs = 0x01;
  
  	ret = amdgpu_dm_plane_init(dm, plane, possible_crtcs, plane_cap);
  





Re: [PATCH] drm/amdgpu/dc: Pixel encoding DRM property and module parameter

2020-09-28 Thread Kazlauskas, Nicholas

On 2020-09-28 3:31 p.m., Christian König wrote:

Am 28.09.20 um 19:35 schrieb James Ettle:

On Mon, 2020-09-28 at 10:26 -0400, Harry Wentland wrote:

On 2020-09-25 5:18 p.m., Alex Deucher wrote:

On Tue, Sep 22, 2020 at 4:51 PM James Ettle 
wrote:

On 22/09/2020 21:33, Alex Deucher wrote:

+/**
+ * DOC: pixel_encoding (string)
+ * Specify the initial pixel encoding used by a connector.
+ */
+static char amdgpu_pixel_encoding[MAX_INPUT];
+MODULE_PARM_DESC(pixel_encoding, "Override pixel encoding");
+module_param_string(pixel_encoding, amdgpu_pixel_encoding,
sizeof(amdgpu_pixel_encoding), 0444);

You can drop this part.  We don't need a module parameter if we
have a
kms property.

Alex

OK, but is there then an alternative means of setting the pixel
encoding to be used immediately on boot or when amdgpu loads?
Also are there user tools other than xrandr to change a KMS
property, for Wayland and console users?

You can force some things on the kernel command line, but I don't
recall whether that includes kms properties or not.  As for ways to
change properties, the KMS API provides a way.  those are exposed
via
randr when using X.  When using wayland compositors, it depends on
the
compositor.


I'm not aware of a way to specify KMS properties through the kernel
command line. I don't think it's possible.

For atomic userspace the userspace wants to be the authority on the
KMS
config. I'm not sure there's a way to set these properties with
Wayland
unless a Wayland compositor plumbs them through.

Can you summarize on a higher level what problem you're trying to
solve?
I wonder if it'd be better solved with properties on a DRM level that
all drivers can follow if desired.

Harry


Alex


The problem this is trying to solve is that the pixel format defaults
to YCbCr444 on HDMI if the monitor claims to support it, in preference
to RGB. This behaviour is hard-coded (see the
comment fill_stream_properties_from_drm_display_mode) and there is no
way for the user to change the pixel format to RGB, other than hacking
the EDID to disable the YCbCr flags.

Using YCbCr (rather than RGB) has been reported to cause suboptimal
results for some users: colour quality issues or perceptible conversion
latency at the monitor end -- see:

https://gitlab.freedesktop.org/drm/amd/-/issues/476 



for the full details.

This patch tries to solve this issue by adding a DRM property so Xorg
users can change the pixel encoding on-the-fly, and a module parameter
to set the default encoding at amdgpu's init for all users.

[I suppose an alternative here is to look into the rationale of
defaulting to YCbCr444 on HDMI when the monitor also supports RGB. For
reference although on my kit I see no detrimental effects from YCbCr,
I'm using onboard graphics with a motherboard that has just D-sub and
HDMI -- so DisplayPort's not an option.]


Ah, that problem again. Yes, that's been an issue since the early fglrx 
days on Linux.


Shouldn't the pixel encoding be part of the mode to run ?

Regards,
Christian.


DRM modes don't specify the encoding. The property as part of this patch 
lets userspace override it but the userspace GUI support isn't there on 
Wayland IIRC.


I'm fine with adding the properties but I don't think the module 
parameter is the right solution here. I think it's better if we try to 
get this into atomic userspace as well or revive efforts that have been 
already started before.


The problem with the module parameters is that it'd be applying a 
default to every DRM connector. No way to specify different defaults per 
DRM connector, nor do we know the full connector set at driver 
initialization. The list is dynamic and can change when you plug/unplug 
MST displays.


Regards,
Nicholas Kazlauskas





-James




Re: [PATCH] drm/amd/display: Add missing "Copy GSL groups when committing a new context"

2020-09-16 Thread Kazlauskas, Nicholas

On 2020-09-16 1:08 p.m., Bhawanpreet Lakha wrote:

[Why]
"Copy GSL groups when committing a new context" patch was accidentally
removed during a refactor

Patch: 21ffcc94d5b ("drm/amd/display: Copy GSL groups when committing a new 
context")

[How]
Re add it

Fixes: b6e881c9474 ("drm/amd/display: update navi to use new surface programming 
behaviour")
Signed-off-by: Bhawanpreet Lakha 


Reviewed-by: Nicholas Kazlauskas 

Regards,
Nicholas Kazlauskas


---
  drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 11 +++
  1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c 
b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index 5720b6e5d321..01530e686f43 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -1642,6 +1642,17 @@ void dcn20_program_front_end_for_ctx(
struct dce_hwseq *hws = dc->hwseq;
DC_LOGGER_INIT(dc->ctx->logger);
  
+	/* Carry over GSL groups in case the context is changing. */

+   for (i = 0; i < dc->res_pool->pipe_count; i++) {
+   struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+   struct pipe_ctx *old_pipe_ctx =
+   &dc->current_state->res_ctx.pipe_ctx[i];
+
+   if (pipe_ctx->stream == old_pipe_ctx->stream)
+   pipe_ctx->stream_res.gsl_group =
+   old_pipe_ctx->stream_res.gsl_group;
+   }
+
if (dc->hwss.program_triplebuffer != NULL && dc->debug.enable_tri_buf) {
for (i = 0; i < dc->res_pool->pipe_count; i++) {
struct pipe_ctx *pipe_ctx = 
&context->res_ctx.pipe_ctx[i];




