RE: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd fw in SRIOV for navi12

2020-11-24 Thread Yang, Stanley
[AMD Public Use]

Hi Guchun,

Thanks for your review.

Regards,
Stanley

> -Original Message-
> From: Chen, Guchun 
> Sent: Wednesday, November 25, 2020 2:45 PM
> To: Yang, Stanley ; amd-gfx@lists.freedesktop.org;
> Chen, JingWen 
> Subject: RE: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd
> fw in SRIOV for navi12
> 
> [AMD Public Use]
> 
> Okay. With that fixed, the patch is:
> 
> Reviewed-by: Guchun Chen 
> 
> Regards,
> Guchun
> 
> -Original Message-
> From: Yang, Stanley 
> Sent: Tuesday, November 24, 2020 10:37 PM
> To: Chen, Guchun ; amd-
> g...@lists.freedesktop.org; Chen, JingWen 
> Subject: RE: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd
> fw in SRIOV for navi12
> 
> [AMD Public Use]
> 
> Hi Guchun,
> 
> This is an oversight. I forgot to remove it in the first patch version.
> 
> Regards,
> Stanley
> > -Original Message-
> > From: Chen, Guchun 
> > Sent: Tuesday, November 24, 2020 9:47 PM
> > To: Yang, Stanley ;
> > amd-gfx@lists.freedesktop.org; Chen, JingWen
> 
> > Cc: Yang, Stanley 
> > Subject: RE: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and
> > asd fw in SRIOV for navi12
> >
> > [AMD Public Use]
> >
> > --- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> > @@ -208,14 +208,13 @@ static int vega10_smu_init(struct pp_hwmgr
> > *hwmgr)
> > unsigned long tools_size;
> > int ret;
> > struct cgs_firmware_info info = {0};
> > +   struct amdgpu_device *adev = hwmgr->adev;
> >
> > Why add this local variable? Looks no one is using it.
> >
> > Regards,
> > Guchun
> >
> > -Original Message-
> > From: amd-gfx  On Behalf Of
> > Stanley.Yang
> > Sent: Tuesday, November 24, 2020 5:49 PM
> > To: amd-gfx@lists.freedesktop.org; Chen, JingWen
> > 
> > Cc: Yang, Stanley 
> > Subject: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd
> > fw in SRIOV for navi12
> >
> > The KFDTopologyTest.BasicTest will fail if the smc, sdma, sos, ta
> > and asd firmware is skipped in SRIOV for vega10, so adjust the above
> > firmware handling and skip loading it in SRIOV only for navi12.
> >
> > v2: remove unnecessary asic type check.
> >
> > Signed-off-by: Stanley.Yang 
> > ---
> >  drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c  |  3 ---
> >  drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c  |  2 +-
> >  drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c  |  3 ---
> >  .../gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c | 13 ++---
> >  drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c   |  2 +-
> >  5 files changed, 8 insertions(+), 15 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> > b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> > index 16b551f330a4..8309dd95aa48 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> > @@ -593,9 +593,6 @@ static int sdma_v4_0_init_microcode(struct
> > amdgpu_device *adev)
> > struct amdgpu_firmware_info *info = NULL;
> > const struct common_firmware_header *header = NULL;
> >
> > -   if (amdgpu_sriov_vf(adev))
> > -   return 0;
> > -
> > DRM_DEBUG("\n");
> >
> > switch (adev->asic_type) {
> > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> > b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> > index 9c72b95b7463..fad1cc394219 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> > @@ -203,7 +203,7 @@ static int sdma_v5_0_init_microcode(struct
> > amdgpu_device *adev)
> > const struct common_firmware_header *header = NULL;
> > const struct sdma_firmware_header_v1_0 *hdr;
> >
> > -   if (amdgpu_sriov_vf(adev))
> > +   if (amdgpu_sriov_vf(adev) && (adev->asic_type == CHIP_NAVI12))
> > return 0;
> >
> > DRM_DEBUG("\n");
> > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> > b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> > index cb5a6f1437f8..5ea11a0f568f 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> > @@ -153,9 +153,6 @@ static int sdma_v5_2_init_microcode(struct
> > amdgpu_device *adev)
> > struct amdgpu_firmware_info *info = NULL;
> > const struct common_firmware_header *header = NULL;
> >
> > -   if (amdgpu_sriov_vf(adev))
> > -   return 0;
> > -
> > DRM_DEBUG("\n");
> >
> > switch (adev->asic_type) {
> > diff --git
> a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> > b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> > index daf122f24f23..e2192d8762a4 100644
> > --- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> > @@ -208,14 +208,13 @@ static int vega10_smu_init(struct pp_hwmgr
> > *hwmgr)
> > unsigned long tools_size;
> > int ret;
> > struct cgs_firmware_info info = {0};
> > +   struct amdgpu_device *adev = hwmgr->adev;
> >
> > -   if (!amdgpu_sriov_vf((struct 

[PATCH] drm/amdgpu: set LDS_CONFIG=0x20 on VanGogh to fix MGCG hang

2020-11-24 Thread Marek Olšák
Please review. This fixes an LDS hang that occurs with NGG, but may occur
with other shader stages too.

Thanks,
Marek
From 89f7f1ff17b851b5bd513a6c7e7c26546b775d69 Mon Sep 17 00:00:00 2001
From: Marek Olšák 
Date: Wed, 25 Nov 2020 01:40:51 -0500
Subject: [PATCH] drm/amdgpu: set LDS_CONFIG=0x20 on VanGogh to fix MGCG hang
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Same as Sienna Cichlid and Navy Flounder.

Signed-off-by: Marek Olšák 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 841d39eb62d9..9b7cba8baf6b 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -3262,6 +3262,9 @@ static const struct soc15_reg_golden golden_settings_gc_10_3_vangogh[] =
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfff7, 0x0103),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0x, 0x0040),
 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmVGT_GS_MAX_WAVE_ID, 0x0fff, 0x00ff),
+
+	/* This is not in GDB yet. Don't remove it. It fixes a GPU hang on VanGogh. */
+	SOC15_REG_GOLDEN_VALUE(GC, 0, mmLDS_CONFIG,  0x0020, 0x0020),
 };
 
 static const struct soc15_reg_golden golden_settings_gc_10_3_4[] =
-- 
2.25.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd fw in SRIOV for navi12

2020-11-24 Thread Chen, Guchun
[AMD Public Use]

Okay. With that fixed, the patch is:

Reviewed-by: Guchun Chen 

Regards,
Guchun

-Original Message-
From: Yang, Stanley  
Sent: Tuesday, November 24, 2020 10:37 PM
To: Chen, Guchun ; amd-gfx@lists.freedesktop.org; Chen, 
JingWen 
Subject: RE: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd fw in 
SRIOV for navi12

[AMD Public Use]

Hi Guchun,

This is an oversight. I forgot to remove it in the first patch version.

Regards,
Stanley
> -Original Message-
> From: Chen, Guchun 
> Sent: Tuesday, November 24, 2020 9:47 PM
> To: Yang, Stanley ; 
> amd-gfx@lists.freedesktop.org; Chen, JingWen 
> Cc: Yang, Stanley 
> Subject: RE: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and 
> asd fw in SRIOV for navi12
> 
> [AMD Public Use]
> 
> --- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> +++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> @@ -208,14 +208,13 @@ static int vega10_smu_init(struct pp_hwmgr
> *hwmgr)
>   unsigned long tools_size;
>   int ret;
>   struct cgs_firmware_info info = {0};
> + struct amdgpu_device *adev = hwmgr->adev;
> 
> Why add this local variable? Looks no one is using it.
> 
> Regards,
> Guchun
> 
> -Original Message-
> From: amd-gfx  On Behalf Of 
> Stanley.Yang
> Sent: Tuesday, November 24, 2020 5:49 PM
> To: amd-gfx@lists.freedesktop.org; Chen, JingWen 
> 
> Cc: Yang, Stanley 
> Subject: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd 
> fw in SRIOV for navi12
> 
> The KFDTopologyTest.BasicTest will fail if the smc, sdma, sos, ta
> and asd firmware is skipped in SRIOV for vega10, so adjust the above
> firmware handling and skip loading it in SRIOV only for navi12.
> 
> v2: remove unnecessary asic type check.
> 
> Signed-off-by: Stanley.Yang 
> ---
>  drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c  |  3 ---
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c  |  3 ---
>  .../gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c | 13 ++---
> 
>  drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c   |  2 +-
>  5 files changed, 8 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> index 16b551f330a4..8309dd95aa48 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> @@ -593,9 +593,6 @@ static int sdma_v4_0_init_microcode(struct 
> amdgpu_device *adev)
>   struct amdgpu_firmware_info *info = NULL;
>   const struct common_firmware_header *header = NULL;
> 
> - if (amdgpu_sriov_vf(adev))
> - return 0;
> -
>   DRM_DEBUG("\n");
> 
>   switch (adev->asic_type) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> index 9c72b95b7463..fad1cc394219 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> @@ -203,7 +203,7 @@ static int sdma_v5_0_init_microcode(struct 
> amdgpu_device *adev)
>   const struct common_firmware_header *header = NULL;
>   const struct sdma_firmware_header_v1_0 *hdr;
> 
> - if (amdgpu_sriov_vf(adev))
> + if (amdgpu_sriov_vf(adev) && (adev->asic_type == CHIP_NAVI12))
>   return 0;
> 
>   DRM_DEBUG("\n");
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> index cb5a6f1437f8..5ea11a0f568f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> @@ -153,9 +153,6 @@ static int sdma_v5_2_init_microcode(struct 
> amdgpu_device *adev)
>   struct amdgpu_firmware_info *info = NULL;
>   const struct common_firmware_header *header = NULL;
> 
> - if (amdgpu_sriov_vf(adev))
> - return 0;
> -
>   DRM_DEBUG("\n");
> 
>   switch (adev->asic_type) {
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> index daf122f24f23..e2192d8762a4 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> +++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> @@ -208,14 +208,13 @@ static int vega10_smu_init(struct pp_hwmgr
> *hwmgr)
>   unsigned long tools_size;
>   int ret;
>   struct cgs_firmware_info info = {0};
> + struct amdgpu_device *adev = hwmgr->adev;
> 
> - if (!amdgpu_sriov_vf((struct amdgpu_device *)hwmgr->adev)) {
> - ret = cgs_get_firmware_info(hwmgr->device,
> - CGS_UCODE_ID_SMU,
> - &info);
> - if (ret || !info.kptr)
> - return -EINVAL;
> - }
> + ret = cgs_get_firmware_info(hwmgr->device,
> + CGS_UCODE_ID_SMU,
> + &info);
> + if (ret || !info.kptr)
> + return -EINVAL;
> 
>   priv = 

Re: [PATCH 3/6] drm/scheduler: Job timeout handler returns status

2020-11-24 Thread kernel test robot
Hi Luben,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v5.10-rc5 next-20201124]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:
https://github.com/0day-ci/linux/commits/Luben-Tuikov/Allow-to-extend-the-timeout-without-jobs-disappearing/20201125-111945
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
127c501a03d5db8b833e953728d3bcf53c8832a9
config: nds32-randconfig-s032-20201125 (attached as .config)
compiler: nds32le-linux-gcc (GCC) 9.3.0
reproduce:
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.3-151-g540c2c4b-dirty
# 
https://github.com/0day-ci/linux/commit/14b618148200370c3b43498550534c17d50218fc
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Luben-Tuikov/Allow-to-extend-the-timeout-without-jobs-disappearing/20201125-111945
git checkout 14b618148200370c3b43498550534c17d50218fc
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 
CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=nds32 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All errors (new ones prefixed by >>):

>> drivers/gpu/drm/etnaviv/etnaviv_sched.c:140:18: error: initialization of 
>> 'int (*)(struct drm_sched_job *)' from incompatible pointer type 'void 
>> (*)(struct drm_sched_job *)' [-Werror=incompatible-pointer-types]
 140 |  .timedout_job = etnaviv_sched_timedout_job,
 |  ^~
   drivers/gpu/drm/etnaviv/etnaviv_sched.c:140:18: note: (near initialization 
for 'etnaviv_sched_ops.timedout_job')
   cc1: some warnings being treated as errors
--
   drivers/gpu/drm/lima/lima_sched.c: In function 'lima_sched_run_job':
   drivers/gpu/drm/lima/lima_sched.c:226:20: warning: variable 'ret' set but 
not used [-Wunused-but-set-variable]
 226 |  struct dma_fence *ret;
 |^~~
   drivers/gpu/drm/lima/lima_sched.c: At top level:
>> drivers/gpu/drm/lima/lima_sched.c:472:18: error: initialization of 'int 
>> (*)(struct drm_sched_job *)' from incompatible pointer type 'void (*)(struct 
>> drm_sched_job *)' [-Werror=incompatible-pointer-types]
 472 |  .timedout_job = lima_sched_timedout_job,
 |  ^~~
   drivers/gpu/drm/lima/lima_sched.c:472:18: note: (near initialization for 
'lima_sched_ops.timedout_job')
   cc1: some warnings being treated as errors

vim +140 drivers/gpu/drm/etnaviv/etnaviv_sched.c

e93b6deeb45a78 Lucas Stach 2017-12-04  136  
e93b6deeb45a78 Lucas Stach 2017-12-04  137  static const struct 
drm_sched_backend_ops etnaviv_sched_ops = {
e93b6deeb45a78 Lucas Stach 2017-12-04  138  .dependency = 
etnaviv_sched_dependency,
e93b6deeb45a78 Lucas Stach 2017-12-04  139  .run_job = 
etnaviv_sched_run_job,
e93b6deeb45a78 Lucas Stach 2017-12-04 @140  .timedout_job = 
etnaviv_sched_timedout_job,
e93b6deeb45a78 Lucas Stach 2017-12-04  141  .free_job = 
etnaviv_sched_free_job,
e93b6deeb45a78 Lucas Stach 2017-12-04  142  };
e93b6deeb45a78 Lucas Stach 2017-12-04  143  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org




Re: [PATCH] drm/amd/display: Extends Tune min clk for MPO for RV

2020-11-24 Thread Pratik Vishwakarma

On 25/11/20 1:38 am, Harry Wentland wrote:

On 2020-11-24 10:55 a.m., Pratik Vishwakarma wrote:

[Why]
Changes in video resolution during playback cause
dispclk to ramp higher but set incompatible fclk
and dcfclk values for MPO.

[How]
Check for MPO and also set proper min clk values
for this case. This was missed in the previous
patch.

Signed-off-by: Pratik Vishwakarma 
---
  .../display/dc/clk_mgr/dcn10/rv1_clk_mgr.c    | 19 ---
  1 file changed, 16 insertions(+), 3 deletions(-)

diff --git 
a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c

index 75b8240ed059..ed087a9e73bb 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
@@ -275,9 +275,22 @@ static void rv1_update_clocks(struct clk_mgr 
*clk_mgr_base,

  if (pp_smu->set_hard_min_fclk_by_freq &&
  pp_smu->set_hard_min_dcfclk_by_freq &&
  pp_smu->set_min_deep_sleep_dcfclk) {
- pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu, new_clocks->fclk_khz / 1000);
- pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu, new_clocks->dcfclk_khz / 1000);
- pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu, (new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+    // Only increase clocks when display is active and MPO 
is enabled


Why do we want to only do this when MPO is enabled?

Harry


Hi Harry,

When MPO is enabled and the system moves to a lower clock state, the clock 
values are not sufficient and we see flash lines across the entire screen.


This issue is not observed when MPO is disabled or not active.

Regards,

Pratik




+    if (display_count && is_mpo_enabled(context)) {
+ pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu,
+    ((new_clocks->fclk_khz / 1000) * 101) / 100);
+ pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu,
+    ((new_clocks->dcfclk_khz / 1000) * 101) / 100);
+ pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu,
+    (new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+    } else {
+ pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu,
+    new_clocks->fclk_khz / 1000);
+ pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu,
+    new_clocks->dcfclk_khz / 1000);
+ pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu,
+    (new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+    }
  }
  }





RE: [PATCH 1/1] drm/amdgpu: set default value of noretry to 1 for specified asic

2020-11-24 Thread Chen, Guchun
[AMD Public Use]

Hi @Stanley.Yang,

I assume we cannot add such a patch directly, as it may break the KFD test for 
the Vega10 bare metal case.

Regards,
Guchun

-Original Message-
From: amd-gfx  On Behalf Of Stanley.Yang
Sent: Monday, November 23, 2020 9:14 PM
To: amd-gfx@lists.freedesktop.org
Cc: Yang, Stanley 
Subject: [PATCH 1/1] drm/amdgpu: set default value of noretry to 1 for 
specified asic

noretry = 0 causes the KFDGraphicsInterop test to fail on the SRIOV platform
for vega10, so set noretry to 1 for vega10.

Signed-off-by: Stanley.Yang 
Change-Id: I241da5c20970ea889909997ff044d6e61642da81
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index a3d4325718d8..7bb544224540 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -425,6 +425,7 @@ void amdgpu_gmc_noretry_set(struct amdgpu_device *adev)
	struct amdgpu_gmc *gmc = &adev->gmc;
 
switch (adev->asic_type) {
+   case CHIP_VEGA10:
case CHIP_VEGA20:
case CHIP_NAVI10:
case CHIP_NAVI14:
--
2.17.1



[PATCH 6/6] drm/sched: Make use of a "done" thread

2020-11-24 Thread Luben Tuikov
Add a "done" list to which all completed jobs are added
to be freed. The drm_sched_job_done() callback is the
producer of jobs to this list.

Add a "done" thread which consumes from the done list
and frees up jobs. Now, the main scheduler thread only
pushes jobs to the GPU and the "done" thread frees them
up, on the way out of the GPU when they've completed
execution.

Make use of the status returned by the GPU driver
timeout handler to decide whether to leave the job in
the pending list, or to send it off to the done list.
If a job is done, it is added to the done list and the
done thread woken up. If a job needs more time, it is
left on the pending list and the timeout timer
restarted.

Eliminate the polling mechanism of picking out done
jobs from the pending list, i.e. eliminate
drm_sched_get_cleanup_job(). Now the main scheduler
thread only pushes jobs down to the GPU.

Various other optimizations to the GPU scheduler
and job recovery are possible with this format.

Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/scheduler/sched_main.c | 173 +
 include/drm/gpu_scheduler.h|  14 ++
 2 files changed, 101 insertions(+), 86 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index 3eb7618a627d..289ae68cd97f 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -164,7 +164,8 @@ drm_sched_rq_select_entity(struct drm_sched_rq *rq)
  * drm_sched_job_done - complete a job
  * @s_job: pointer to the job which is done
  *
- * Finish the job's fence and wake up the worker thread.
+ * Finish the job's fence, move it to the done list,
+ * and wake up the done thread.
  */
 static void drm_sched_job_done(struct drm_sched_job *s_job)
 {
@@ -179,7 +180,12 @@ static void drm_sched_job_done(struct drm_sched_job *s_job)
	dma_fence_get(&s_fence->finished);
	drm_sched_fence_finished(s_fence);
	dma_fence_put(&s_fence->finished);
-   wake_up_interruptible(&sched->wake_up_worker);
+
+   spin_lock(&sched->job_list_lock);
+   list_move(&s_job->list, &sched->done_list);
+   spin_unlock(&sched->job_list_lock);
+
+   wake_up_interruptible(&sched->done_wait_q);
 }
 
 /**
@@ -221,11 +227,10 @@ bool drm_sched_dependency_optimized(struct dma_fence* 
fence,
 EXPORT_SYMBOL(drm_sched_dependency_optimized);
 
 /**
- * drm_sched_start_timeout - start timeout for reset worker
- *
- * @sched: scheduler instance to start the worker for
+ * drm_sched_start_timeout - start a timeout timer
+ * @sched: scheduler instance whose job we're timing
  *
- * Start the timeout for the given scheduler.
+ * Start a timeout timer for the given scheduler.
  */
 static void drm_sched_start_timeout(struct drm_gpu_scheduler *sched)
 {
@@ -305,8 +310,8 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job)
 
	spin_lock(&sched->job_list_lock);
	list_add_tail(&s_job->list, &sched->pending_list);
-   drm_sched_start_timeout(sched);
	spin_unlock(&sched->job_list_lock);
+   drm_sched_start_timeout(sched);
 }
 
 static void drm_sched_job_timedout(struct work_struct *work)
@@ -316,37 +321,30 @@ static void drm_sched_job_timedout(struct work_struct 
*work)
 
sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
 
-   /* Protects against concurrent deletion in drm_sched_get_cleanup_job */
	spin_lock(&sched->job_list_lock);
	job = list_first_entry_or_null(&sched->pending_list,
				       struct drm_sched_job, list);
+   spin_unlock(&sched->job_list_lock);
 
if (job) {
-   /*
-* Remove the bad job so it cannot be freed by concurrent
-* drm_sched_cleanup_jobs. It will be reinserted back after 
sched->thread
-* is parked at which point it's safe.
-*/
-   list_del_init(&job->list);
-   spin_unlock(&sched->job_list_lock);
+   int res;
 
-   job->sched->ops->timedout_job(job);
+   job->job_status |= DRM_JOB_STATUS_TIMEOUT;
+   res = job->sched->ops->timedout_job(job);
+   if (res == 0) {
+   /* The job is out of the device.
+*/
+   spin_lock(&sched->job_list_lock);
+   list_move(&job->list, &sched->done_list);
+   spin_unlock(&sched->job_list_lock);
 
-   /*
-* Guilty job did complete and hence needs to be manually 
removed
-* See drm_sched_stop doc.
-*/
-   if (sched->free_guilty) {
-   job->sched->ops->free_job(job);
-   sched->free_guilty = false;
+   wake_up_interruptible(&sched->done_wait_q);
+   } else {
+   /* The job needs more time.
+*/
+   drm_sched_start_timeout(sched);
}
-   } else {
-   

[PATCH 5/6] drm/amdgpu: Don't hardcode thread name length

2020-11-24 Thread Luben Tuikov
Introduce a macro DRM_THREAD_NAME_LEN
and use that to define ring name size,
instead of hardcoding it to 16.

Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 2 +-
 include/drm/gpu_scheduler.h  | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 7112137689db..bbd46c6dec65 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -230,7 +230,7 @@ struct amdgpu_ring {
unsignedwptr_offs;
unsignedfence_offs;
uint64_tcurrent_ctx;
-   charname[16];
+   charname[DRM_THREAD_NAME_LEN];
u32 trail_seq;
unsignedtrail_fence_offs;
u64 trail_fence_gpu_addr;
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 61f7121e1c19..3a5686c3b5e9 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -30,6 +30,8 @@
 
 #define MAX_WAIT_SCHED_ENTITY_Q_EMPTY msecs_to_jiffies(1000)
 
+#define DRM_THREAD_NAME_LEN TASK_COMM_LEN
+
 struct drm_gpu_scheduler;
 struct drm_sched_rq;
 
-- 
2.29.2.154.g7f7ebe054a



[PATCH 4/6] drm/scheduler: Essentialize the job done callback

2020-11-24 Thread Luben Tuikov
The job done callback is called from various
places, in two ways: in job done role, and
as a fence callback role.

Essentialize the callback to an atom
function to just complete the job,
and into a second function as a prototype
of fence callback which calls to complete
the job.

This is used in latter patches by the completion
code.

Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/scheduler/sched_main.c | 73 ++
 1 file changed, 40 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index b694df12aaba..3eb7618a627d 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -60,8 +60,6 @@
 #define to_drm_sched_job(sched_job)\
container_of((sched_job), struct drm_sched_job, queue_node)
 
-static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb 
*cb);
-
 /**
  * drm_sched_rq_init - initialize a given run queue struct
  *
@@ -162,6 +160,40 @@ drm_sched_rq_select_entity(struct drm_sched_rq *rq)
return NULL;
 }
 
+/**
+ * drm_sched_job_done - complete a job
+ * @s_job: pointer to the job which is done
+ *
+ * Finish the job's fence and wake up the worker thread.
+ */
+static void drm_sched_job_done(struct drm_sched_job *s_job)
+{
+   struct drm_sched_fence *s_fence = s_job->s_fence;
+   struct drm_gpu_scheduler *sched = s_fence->sched;
+
+   atomic_dec(&sched->hw_rq_count);
+   atomic_dec(&sched->score);
+
+   trace_drm_sched_process_job(s_fence);
+
+   dma_fence_get(&s_fence->finished);
+   drm_sched_fence_finished(s_fence);
+   dma_fence_put(&s_fence->finished);
+   wake_up_interruptible(&sched->wake_up_worker);
+}
+
+/**
+ * drm_sched_job_done_cb - the callback for a done job
+ * @f: fence
+ * @cb: fence callbacks
+ */
+static void drm_sched_job_done_cb(struct dma_fence *f, struct dma_fence_cb *cb)
+{
+   struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, 
cb);
+
+   drm_sched_job_done(s_job);
+}
+
 /**
  * drm_sched_dependency_optimized
  *
@@ -473,14 +505,14 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, 
bool full_recovery)
 
if (fence) {
	r = dma_fence_add_callback(fence, &s_job->cb,
-  drm_sched_process_job);
+  drm_sched_job_done_cb);
if (r == -ENOENT)
-   drm_sched_process_job(fence, &s_job->cb);
+   drm_sched_job_done(s_job);
else if (r)
DRM_ERROR("fence add callback failed (%d)\n",
  r);
} else
-   drm_sched_process_job(NULL, &s_job->cb);
+   drm_sched_job_done(s_job);
}
 
if (full_recovery) {
@@ -635,31 +667,6 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
return entity;
 }
 
-/**
- * drm_sched_process_job - process a job
- *
- * @f: fence
- * @cb: fence callbacks
- *
- * Called after job has finished execution.
- */
-static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
-{
-   struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, 
cb);
-   struct drm_sched_fence *s_fence = s_job->s_fence;
-   struct drm_gpu_scheduler *sched = s_fence->sched;
-
-   atomic_dec(&sched->hw_rq_count);
-   atomic_dec(&sched->score);
-
-   trace_drm_sched_process_job(s_fence);
-
-   dma_fence_get(&s_fence->finished);
-   drm_sched_fence_finished(s_fence);
-   dma_fence_put(&s_fence->finished);
-   wake_up_interruptible(&sched->wake_up_worker);
-}
-
 /**
  * drm_sched_get_cleanup_job - fetch the next finished job to be destroyed
  *
@@ -809,9 +816,9 @@ static int drm_sched_main(void *param)
if (!IS_ERR_OR_NULL(fence)) {
s_fence->parent = dma_fence_get(fence);
	r = dma_fence_add_callback(fence, &s_job->cb,
-  drm_sched_process_job);
+  drm_sched_job_done_cb);
if (r == -ENOENT)
-   drm_sched_process_job(fence, &s_job->cb);
+   drm_sched_job_done(sched_job);
else if (r)
DRM_ERROR("fence add callback failed (%d)\n",
  r);
@@ -820,7 +827,7 @@ static int drm_sched_main(void *param)
if (IS_ERR(fence))
	dma_fence_set_error(&s_fence->finished, PTR_ERR(fence));
 
-   drm_sched_process_job(NULL, &s_job->cb);
+   drm_sched_job_done(sched_job);
}
 
	wake_up(&sched->job_scheduled);
-- 
2.29.2.154.g7f7ebe054a


[PATCH 3/6] drm/scheduler: Job timeout handler returns status

2020-11-24 Thread Luben Tuikov
The job timeout handler now returns status
indicating back to the DRM layer whether the job
was successfully cancelled or whether more time
should be given to the job to complete.

Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  6 --
 include/drm/gpu_scheduler.h | 13 ++---
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index ff48101bab55..81b73790ecc6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -28,7 +28,7 @@
 #include "amdgpu.h"
 #include "amdgpu_trace.h"
 
-static void amdgpu_job_timedout(struct drm_sched_job *s_job)
+static int amdgpu_job_timedout(struct drm_sched_job *s_job)
 {
struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
struct amdgpu_job *job = to_amdgpu_job(s_job);
@@ -41,7 +41,7 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) 
{
DRM_ERROR("ring %s timeout, but soft recovered\n",
  s_job->sched->name);
-   return;
+   return 0;
}
 
	amdgpu_vm_get_task_info(ring->adev, job->pasid, &ti);
@@ -53,10 +53,12 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
 
if (amdgpu_device_should_recover_gpu(ring->adev)) {
amdgpu_device_gpu_recover(ring->adev, job);
+   return 0;
} else {
		drm_sched_suspend_timeout(&ring->sched);
if (amdgpu_sriov_vf(adev))
adev->virt.tdr_debug = true;
+   return 1;
}
 }
 
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 2e0c368e19f6..61f7121e1c19 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -230,10 +230,17 @@ struct drm_sched_backend_ops {
struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
 
/**
- * @timedout_job: Called when a job has taken too long to execute,
- * to trigger GPU recovery.
+* @timedout_job: Called when a job has taken too long to execute,
+* to trigger GPU recovery.
+*
+* Return 0, if the job has been aborted successfully and will
+* never be heard of from the device. Return non-zero if the
+* job wasn't able to be aborted, i.e. if more time should be
+* given to this job. The result is not "bool" as this
+* function is not a predicate, although its result may seem
+* as one.
 */
-   void (*timedout_job)(struct drm_sched_job *sched_job);
+   int (*timedout_job)(struct drm_sched_job *sched_job);
 
/**
  * @free_job: Called once the job's finished fence has been signaled
-- 
2.29.2.154.g7f7ebe054a



[PATCH 2/6] gpu/drm: ring_mirror_list --> pending_list

2020-11-24 Thread Luben Tuikov
Rename "ring_mirror_list" to "pending_list",
to describe what something is, not what it does,
how it's used, or how the hardware implements it.

This also abstracts the actual hardware
implementation, i.e. how the low-level driver
communicates with the device it drives, ring, CAM,
etc., shouldn't be exposed to DRM.

The pending_list keeps jobs submitted, which are
out of our control. Usually this means they are
pending execution status in hardware, but the
latter definition is a more general (inclusive)
definition.

Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  2 +-
 drivers/gpu/drm/scheduler/sched_main.c  | 34 ++---
 include/drm/gpu_scheduler.h | 10 +++---
 5 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 8358cae0b5a4..db77a5bdfa45 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1427,7 +1427,7 @@ static void amdgpu_ib_preempt_job_recovery(struct 
drm_gpu_scheduler *sched)
struct dma_fence *fence;
 
spin_lock(&sched->job_list_lock);
-   list_for_each_entry(s_job, &sched->ring_mirror_list, list) {
+   list_for_each_entry(s_job, &sched->pending_list, list) {
fence = sched->ops->run_job(s_job);
dma_fence_put(fence);
}
@@ -1459,7 +1459,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct 
amdgpu_ring *ring)
 
 no_preempt:
spin_lock(&sched->job_list_lock);
-   list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
+   list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) {
if (dma_fence_is_signaled(&s_job->s_fence->finished)) {
/* remove job from ring_mirror_list */
list_del_init(&s_job->list);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 4df6de81cd41..fbae600aa5f9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4127,8 +4127,8 @@ bool amdgpu_device_has_job_running(struct amdgpu_device 
*adev)
continue;
 
spin_lock(&ring->sched.job_list_lock);
-   job = list_first_entry_or_null(&ring->sched.ring_mirror_list,
-   struct drm_sched_job, list);
+   job = list_first_entry_or_null(&ring->sched.pending_list,
+  struct drm_sched_job, list);
spin_unlock(&ring->sched.job_list_lock);
if (job)
return true;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index aca52a46b93d..ff48101bab55 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -271,7 +271,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct 
drm_gpu_scheduler *sched)
}
 
/* Signal all jobs already scheduled to HW */
-   list_for_each_entry(s_job, &sched->ring_mirror_list, list) {
+   list_for_each_entry(s_job, &sched->pending_list, list) {
struct drm_sched_fence *s_fence = s_job->s_fence;
 
dma_fence_set_error(&s_fence->finished, -EHWPOISON);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index c52eba407ebd..b694df12aaba 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -198,7 +198,7 @@ EXPORT_SYMBOL(drm_sched_dependency_optimized);
 static void drm_sched_start_timeout(struct drm_gpu_scheduler *sched)
 {
if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
-   !list_empty(&sched->ring_mirror_list))
+   !list_empty(&sched->pending_list))
schedule_delayed_work(&sched->work_tdr, sched->timeout);
 }
 
@@ -258,7 +258,7 @@ void drm_sched_resume_timeout(struct drm_gpu_scheduler 
*sched,
 {
spin_lock(&sched->job_list_lock);
 
-   if (list_empty(&sched->ring_mirror_list))
+   if (list_empty(&sched->pending_list))
cancel_delayed_work(&sched->work_tdr);
else
mod_delayed_work(system_wq, &sched->work_tdr, remaining);
@@ -272,7 +272,7 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job)
struct drm_gpu_scheduler *sched = s_job->sched;
 
spin_lock(&sched->job_list_lock);
-   list_add_tail(&s_job->list, &sched->ring_mirror_list);
+   list_add_tail(&s_job->list, &sched->pending_list);
drm_sched_start_timeout(sched);
spin_unlock(&sched->job_list_lock);
 }
@@ -286,7 +286,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
 
/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
spin_lock(&sched->job_list_lock);
-   job = list_first_entry_or_null(&sched->ring_mirror_list,
+   job = 

[PATCH 0/6] Allow to extend the timeout without jobs disappearing

2020-11-24 Thread Luben Tuikov
Hi guys,

This series of patches implements a pending list for
jobs which are in the hardware, and a done list for
tasks which are done and need to be freed.

It implements a second thread, dedicated to freeing
tasks from the done list. The main scheduler thread no
longer frees (cleans up) done tasks by polling the head
of the pending list (drm_sched_get_cleanup_task() is
now gone)--it only pushes tasks down to the GPU. As
tasks complete and call their DRM callback, their
fences are signalled and tasks are queued to the done
list and the done thread woken up to free them. This
can take place concurrently with the main scheduler
thread pushing tasks down to the GPU.

When a task times out, the timeout function prototype
now is made to return a value back to DRM. The reason
for this is that the GPU driver has intimate knowledge
of the hardware and can pass back information to DRM on
what to do. Whether to attempt to abort the task (by
say calling a driver abort function, etc., as the
implementation dictates), or whether the task needs
more time. Note that the task is not moved away from
the pending list, unless it is no longer in the GPU.
(The pending list holds tasks which are pending from
DRM's point of view, i.e. the GPU has control over
them--that could be things like DMA is active, CU's are
active, for the task, etc.)

The idea really is that what DRM wants to know is
whether the task is in the GPU or not. So now
drm_sched_backend_ops::timedout_job() returns
0 if the task is no longer with the GPU, or 1
if the task needs more time.

Tested up to patch 5. Running with patch 6 seems to
make X/GDM just sleep, and I'm looking into this now.

This series applies to drm-misc-next.

Luben Tuikov (6):
  drm/scheduler: "node" --> "list"
  gpu/drm: ring_mirror_list --> pending_list
  drm/scheduler: Job timeout handler returns status
  drm/scheduler: Essentialize the job done callback
  drm/amdgpu: Don't hardcode thread name length
  drm/sched: Make use of a "done" thread

 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h|   2 +-
 drivers/gpu/drm/scheduler/sched_main.c  | 275 ++--
 include/drm/gpu_scheduler.h |  43 ++-
 6 files changed, 186 insertions(+), 152 deletions(-)

-- 
2.29.2.154.g7f7ebe054a



[PATCH 1/6] drm/scheduler: "node" --> "list"

2020-11-24 Thread Luben Tuikov
Rename "node" to "list" in struct drm_sched_job,
in order to make it consistent with what we see
being used throughout gpu_scheduler.h, for
instance in struct drm_sched_entity, as well as
the rest of DRM and the kernel.

Signed-off-by: Luben Tuikov 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  6 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  2 +-
 drivers/gpu/drm/scheduler/sched_main.c  | 23 +++--
 include/drm/gpu_scheduler.h |  4 ++--
 5 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 5c1f3725c741..8358cae0b5a4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1427,7 +1427,7 @@ static void amdgpu_ib_preempt_job_recovery(struct 
drm_gpu_scheduler *sched)
struct dma_fence *fence;
 
spin_lock(&sched->job_list_lock);
-   list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
+   list_for_each_entry(s_job, &sched->ring_mirror_list, list) {
fence = sched->ops->run_job(s_job);
dma_fence_put(fence);
}
@@ -1459,10 +1459,10 @@ static void amdgpu_ib_preempt_mark_partial_job(struct 
amdgpu_ring *ring)
 
 no_preempt:
spin_lock(&sched->job_list_lock);
-   list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
+   list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, list) {
if (dma_fence_is_signaled(&s_job->s_fence->finished)) {
/* remove job from ring_mirror_list */
-   list_del_init(&s_job->node);
+   list_del_init(&s_job->list);
sched->ops->free_job(s_job);
continue;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 7560b05e4ac1..4df6de81cd41 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4128,7 +4128,7 @@ bool amdgpu_device_has_job_running(struct amdgpu_device 
*adev)
 
spin_lock(&ring->sched.job_list_lock);
job = list_first_entry_or_null(&ring->sched.ring_mirror_list,
-   struct drm_sched_job, node);
+   struct drm_sched_job, list);
spin_unlock(&ring->sched.job_list_lock);
if (job)
return true;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index dcfe8a3b03ff..aca52a46b93d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -271,7 +271,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct 
drm_gpu_scheduler *sched)
}
 
/* Signal all jobs already scheduled to HW */
-   list_for_each_entry(s_job, &sched->ring_mirror_list, node) {
+   list_for_each_entry(s_job, &sched->ring_mirror_list, list) {
struct drm_sched_fence *s_fence = s_job->s_fence;
 
dma_fence_set_error(&s_fence->finished, -EHWPOISON);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index c6332d75025e..c52eba407ebd 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -272,7 +272,7 @@ static void drm_sched_job_begin(struct drm_sched_job *s_job)
struct drm_gpu_scheduler *sched = s_job->sched;
 
spin_lock(&sched->job_list_lock);
-   list_add_tail(&s_job->node, &sched->ring_mirror_list);
+   list_add_tail(&s_job->list, &sched->ring_mirror_list);
drm_sched_start_timeout(sched);
spin_unlock(&sched->job_list_lock);
 }
@@ -287,7 +287,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
spin_lock(&sched->job_list_lock);
job = list_first_entry_or_null(&sched->ring_mirror_list,
-  struct drm_sched_job, node);
+  struct drm_sched_job, list);
 
if (job) {
/*
@@ -295,7 +295,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
 * drm_sched_cleanup_jobs. It will be reinserted back after 
sched->thread
 * is parked at which point it's safe.
 */
-   list_del_init(&job->node);
+   list_del_init(&job->list);
spin_unlock(&sched->job_list_lock);
 
job->sched->ops->timedout_job(job);
@@ -392,7 +392,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct 
drm_sched_job *bad)
 * Add at the head of the queue to reflect it was the earliest
 * job extracted.
 */
-   list_add(&bad->node, &sched->ring_mirror_list);
+   list_add(&bad->list, &sched->ring_mirror_list);
 
/*
 * 

Re: [PATCH] drm/amdgpu: unpack dma_fence_chain containers during sync

2020-11-24 Thread Pierre-Loup A. Griffais
I can test some more tonight. I'll also try to prepare a standalone 
trace so you can observe the exact pattern being used on your end. 
Vulkan traces tend to be GPU and driver-specific. We'll use Mesa as the 
driver, but what GPU would be most convenient on your side for 
replaying? On our end I would guess Navi10 would be most practical.


On 11/23/20 11:56 PM, Christian König wrote:

Mhm, then I don't know what's going wrong here.

Could be that the fence somehow ends up in a BO dependency.

Pierre do you have some time for testing today? Or could you provide 
me some way to test this?


Christian.

Am 24.11.20 um 03:48 schrieb Pierre-Loup A. Griffais:


I just built my kernel with it and tested Horizon Zero Dawn on stock 
Proton 5.13, and it doesn't seem to change things there.


This pattern looks identical as with before the kernel patch, as far 
as I can tell:


https://imgur.com/a/1fZWgNG

The last purple block is a piece of GPU work on the gfx ring. It's 
been queued by software 0.7ms ago, but doesn't get put on the HW ring 
until right after the previous piece of GPU work completes. The 
orange chunk below is the 'gfx' kernel task executing, to queue it.


Thanks,

 - Pierre-Loup

On 2020-11-23 18:09, Marek Olšák wrote:

Pierre-Loup, does this do what you requested?

Thanks,
Marek

On Mon, Nov 23, 2020 at 3:17 PM Christian König wrote:


That the CPU round trip is gone now.

Christian.

Am 23.11.20 um 20:49 schrieb Marek Olšák:

What is the behavior we should expect?

Marek

On Mon, Nov 23, 2020 at 7:31 AM Christian König
<ckoenig.leichtzumer...@gmail.com> wrote:

Ping, Pierre/Marek does this change works as expected?

Regards,
Christian.

Am 18.11.20 um 14:20 schrieb Christian König:
> This allows for optimizing the CPU round trip away.
>
> Signed-off-by: Christian König <christian.koe...@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   | 2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 27

>   drivers/gpu/drm/amd/amdgpu/amdgpu_sync.h | 1 +
>   3 files changed, 29 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index 79342976fa76..68f9a4adf5d2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -1014,7 +1014,7 @@ static int
amdgpu_syncobj_lookup_and_add_to_sync(struct
amdgpu_cs_parser *p,
>               return r;
>       }
>
> -     r = amdgpu_sync_fence(&p->job->sync, fence);
> +     r = amdgpu_sync_fence_chain(&p->job->sync, fence);
>       dma_fence_put(fence);
>
>       return r;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> index 8ea6c49529e7..d0d64af06f54 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> @@ -28,6 +28,8 @@
>    *    Christian König <christian.koe...@amd.com>
>    */
>
> +#include <linux/dma-fence-chain.h>
> +
>   #include "amdgpu.h"
>   #include "amdgpu_trace.h"
>   #include "amdgpu_amdkfd.h"
> @@ -169,6 +171,31 @@ int amdgpu_sync_fence(struct
amdgpu_sync *sync, struct dma_fence *f)
>       return 0;
>   }
>
> +/**
> + * amdgpu_sync_fence_chain - unpack dma_fence_chain and sync
> + *
> + * @sync: sync object to add fence to
> + * @f: potential dma_fence_chain to sync to.
> + *
> + * Add the fences inside the chain to the sync object.
> + */
> +int amdgpu_sync_fence_chain(struct amdgpu_sync *sync,
struct dma_fence *f)
> +{
> +     int r;
> +
> +     dma_fence_chain_for_each(f, f) {
> +             if (dma_fence_is_signaled(f))
> +                     continue;
> +
> +             r = amdgpu_sync_fence(sync, f);
> +             if (r) {
> +                     dma_fence_put(f);
> +                     return r;
> +             }
> +     }
> +     return 0;
> +}
> +
>   /**
>    * amdgpu_sync_vm_fence - remember to sync to this VM fence
>    *
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.h
b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.h
> index 7c0fe20c470d..b142175b65b6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.h
> @@ -48,6 +48,7 @@ struct amdgpu_sync {
>
>   void amdgpu_sync_create(struct 

Re: [PATCH v3 10/12] drm/amdgpu: Avoid sysfs dirs removal post device unplug

2020-11-24 Thread Andrey Grodzovsky



On 11/24/20 9:49 AM, Daniel Vetter wrote:

On Sat, Nov 21, 2020 at 12:21:20AM -0500, Andrey Grodzovsky wrote:

Avoids NULL ptr due to kobj->sd being unset on device removal.

Signed-off-by: Andrey Grodzovsky 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c   | 4 +++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 4 +++-
  2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index caf828a..812e592 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -27,6 +27,7 @@
  #include 
  #include 
  #include 
+#include <drm/drm_drv.h>
  
  #include "amdgpu.h"

  #include "amdgpu_ras.h"
@@ -1043,7 +1044,8 @@ static int amdgpu_ras_sysfs_remove_feature_node(struct 
amdgpu_device *adev)
.attrs = attrs,
};
  
-	sysfs_remove_group(&adev->dev->kobj, &group);

+   if (!drm_dev_is_unplugged(&adev->ddev))
+   sysfs_remove_group(&adev->dev->kobj, &group);

This looks wrong. sysfs, like any other interface, should be
unconditionally thrown out when we do the drm_dev_unregister. Whether it
was hotunplugged or not shouldn't matter at all. Either this isn't needed at all,
or something is wrong with the ordering here. But definitely fishy.
-Daniel



So technically this is needed because the kobject's sysfs directory entry,
kobj->sd, is set to NULL on device removal (from sysfs_remove_dir), but
because we don't finalize the device until the last reference to the drm
file is dropped (which can happen later), we end up calling
sysfs_remove_file/dir after this pointer is NULL. sysfs_remove_file checks
for NULL and aborts, while sysfs_remove_dir does not, and that is why I
guard against calls to sysfs_remove_dir.
But indeed the whole approach in the driver is incorrect; as Greg pointed out,
we should use default group attributes instead of explicit calls to the sysfs
interface, and this would save those troubles.
But again, the issue here is scope of work: converting all of amdgpu to default
group attributes is a somewhat lengthy process with extra testing, as the
entire driver is papered with sysfs references, and it seems to me more of a
standalone cleanup, just like the switch to devm_ and drmm_ work. To me at
least it makes more sense to finalize and push the hot-unplug patches so that
this new functionality can be part of the driver sooner, and then to
incrementally improve it by working on those other topics. Just as with
devm_/drmm_, I also added sysfs cleanup to my TODO list in the RFC patch.

Andrey




  
  	return 0;

  }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 2b7c90b..54331fc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -24,6 +24,7 @@
  #include 
  #include 
  #include 
+#include <drm/drm_drv.h>
  
  #include "amdgpu.h"

  #include "amdgpu_ucode.h"
@@ -464,7 +465,8 @@ int amdgpu_ucode_sysfs_init(struct amdgpu_device *adev)
  
  void amdgpu_ucode_sysfs_fini(struct amdgpu_device *adev)

  {
-   sysfs_remove_group(&adev->dev->kobj, &fw_attr_group);
+   if (!drm_dev_is_unplugged(&adev->ddev))
+   sysfs_remove_group(&adev->dev->kobj, &fw_attr_group);
  }
  
  static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,

--
2.7.4




Re: [PATCH 00/15] drm: Move struct drm_device.pdev to legacy

2020-11-24 Thread Sam Ravnborg
Hi Thomas.

Nice clean-up series - quite an effort to move one member to deprecated!

I have read through most of the patches. I left a few notes in my
replies but nothing buggy. Just nitpicks.


On Tue, Nov 24, 2020 at 12:38:09PM +0100, Thomas Zimmermann wrote:
> The pdev field in struct drm_device points to a PCI device structure and
> goes back to UMS-only days when all DRM drivers where for PCI devices.
> Meanwhile we also support USB, SPI and platform devices. Each of those
> uses the generic device stored in struct drm_device.dev.
> 
> To reduce duplications and remove the special case of PCI, this patchset
> converts all modesetting drivers from pdev to dev and makes pdev a field
> for legacy UMS drivers.
> 
> For PCI devices, the pointer in struct drm_device.dev can be upcasted to
> struct pci_device; or tested for PCI with dev_is_pci(). In several places
> the code can use the dev field directly.
> 
> After converting all drivers and the DRM core, the pdev fields becomes
> only relevant for legacy drivers. In a later patchset, we may want to
> convert these as well and remove pdev entirely.
> 
> The patchset touches many files, but the individual changes are mostly
> trivial. I suggest to merge each driver's patch through the respective
> tree and later the rest through drm-misc-next.
> 
> Thomas Zimmermann (15):
>   drm/amdgpu: Remove references to struct drm_device.pdev
>   drm/ast: Remove references to struct drm_device.pdev
>   drm/bochs: Remove references to struct drm_device.pdev
>   drm/cirrus: Remove references to struct drm_device.pdev
>   drm/gma500: Remove references to struct drm_device.pdev
>   drm/hibmc: Remove references to struct drm_device.pdev
>   drm/mgag200: Remove references to struct drm_device.pdev
>   drm/qxl: Remove references to struct drm_device.pdev
>   drm/vboxvideo: Remove references to struct drm_device.pdev
>   drm/virtgpu: Remove references to struct drm_device.pdev
>   drm/vmwgfx: Remove references to struct drm_device.pdev
>   drm: Upcast struct drm_device.dev to struct pci_device; replace pdev
All above are:
Acked-by: Sam Ravnborg 

>   drm/nouveau: Remove references to struct drm_device.pdev
I lost my confidence in my reading of this code.

>   drm/i915: Remove references to struct drm_device.pdev
>   drm/radeon: Remove references to struct drm_device.pdev
I did not look at these at all. I hope someone else finds time to do so.

Sam


Re: [PATCH 15/15] drm: Upcast struct drm_device.dev to struct pci_device; replace pdev

2020-11-24 Thread Sam Ravnborg
Hi Thomas,

On Tue, Nov 24, 2020 at 12:38:24PM +0100, Thomas Zimmermann wrote:
> We have DRM drivers based on USB, SPI and platform devices. All of them
> are fine with storing their device reference in struct drm_device.dev.
> PCI devices should be no exception. Therefore struct drm_device.pdev is
> deprecated.
> 
> Instead upcast from struct drm_device.dev with to_pci_dev(). PCI-specific
> code can use dev_is_pci() to test for a PCI device. This patch changes
> the DRM core code and documentation accordingly. Struct drm_device.pdev
> is being moved to legacy status.
> 
> Signed-off-by: Thomas Zimmermann 
> ---
>  drivers/gpu/drm/drm_agpsupport.c |  9 ++---
>  drivers/gpu/drm/drm_bufs.c   |  4 ++--
>  drivers/gpu/drm/drm_edid.c   |  7 ++-
>  drivers/gpu/drm/drm_irq.c| 12 +++-
>  drivers/gpu/drm/drm_pci.c| 26 +++---
>  drivers/gpu/drm/drm_vm.c |  2 +-
>  include/drm/drm_device.h | 12 +---
>  7 files changed, 46 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_agpsupport.c 
> b/drivers/gpu/drm/drm_agpsupport.c
> index 4c7ad46fdd21..a4040fe4f4ba 100644
> --- a/drivers/gpu/drm/drm_agpsupport.c
> +++ b/drivers/gpu/drm/drm_agpsupport.c
> @@ -103,11 +103,13 @@ int drm_agp_info_ioctl(struct drm_device *dev, void 
> *data,
>   */
>  int drm_agp_acquire(struct drm_device *dev)
>  {
> + struct pci_dev *pdev = to_pci_dev(dev->dev);
> +
>   if (!dev->agp)
>   return -ENODEV;
>   if (dev->agp->acquired)
>   return -EBUSY;
> - dev->agp->bridge = agp_backend_acquire(dev->pdev);
> + dev->agp->bridge = agp_backend_acquire(pdev);
>   if (!dev->agp->bridge)
>   return -ENODEV;
>   dev->agp->acquired = 1;
> @@ -402,14 +404,15 @@ int drm_agp_free_ioctl(struct drm_device *dev, void 
> *data,
>   */
>  struct drm_agp_head *drm_agp_init(struct drm_device *dev)
>  {
> + struct pci_dev *pdev = to_pci_dev(dev->dev);
>   struct drm_agp_head *head = NULL;
>  
>   head = kzalloc(sizeof(*head), GFP_KERNEL);
>   if (!head)
>   return NULL;
> - head->bridge = agp_find_bridge(dev->pdev);
> + head->bridge = agp_find_bridge(pdev);
>   if (!head->bridge) {
> - head->bridge = agp_backend_acquire(dev->pdev);
> + head->bridge = agp_backend_acquire(pdev);
>   if (!head->bridge) {
>   kfree(head);
>   return NULL;
> diff --git a/drivers/gpu/drm/drm_bufs.c b/drivers/gpu/drm/drm_bufs.c
> index 7a01d0918861..1da8b360b60a 100644
> --- a/drivers/gpu/drm/drm_bufs.c
> +++ b/drivers/gpu/drm/drm_bufs.c
> @@ -325,7 +325,7 @@ static int drm_addmap_core(struct drm_device *dev, 
> resource_size_t offset,
>* As we're limiting the address to 2^32-1 (or less),
>* casting it down to 32 bits is no problem, but we
>* need to point to a 64bit variable first. */
> - map->handle = dma_alloc_coherent(&dev->pdev->dev,
> + map->handle = dma_alloc_coherent(dev->dev,
>    map->size,
>    &map->offset,
>GFP_KERNEL);
> @@ -555,7 +555,7 @@ int drm_legacy_rmmap_locked(struct drm_device *dev, 
> struct drm_local_map *map)
>   case _DRM_SCATTER_GATHER:
>   break;
>   case _DRM_CONSISTENT:
> - dma_free_coherent(&dev->pdev->dev,
> + dma_free_coherent(dev->dev,
> map->size,
> map->handle,
> map->offset);
> diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
> index 74f5a3197214..555a04ce2179 100644
> --- a/drivers/gpu/drm/drm_edid.c
> +++ b/drivers/gpu/drm/drm_edid.c
> @@ -32,6 +32,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  
> @@ -2075,9 +2076,13 @@ EXPORT_SYMBOL(drm_get_edid);
>  struct edid *drm_get_edid_switcheroo(struct drm_connector *connector,
>struct i2c_adapter *adapter)
>  {
> - struct pci_dev *pdev = connector->dev->pdev;
> + struct drm_device *dev = connector->dev;
> + struct pci_dev *pdev = to_pci_dev(dev->dev);
>   struct edid *edid;

Maybe add a comment that explains why this can trigger, so people are
helped if they are caught by this.
As it is now, it is not even mentioned in the changelog.

> + if (drm_WARN_ON_ONCE(dev, !dev_is_pci(dev->dev)))
> + return NULL;
> +
>   vga_switcheroo_lock_ddc(pdev);
>   edid = drm_get_edid(connector, adapter);
>   vga_switcheroo_unlock_ddc(pdev);
> diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
> index 09d6e9e2e075..22986a9a593b 100644
> --- a/drivers/gpu/drm/drm_irq.c
> +++ b/drivers/gpu/drm/drm_irq.c
> @@ -122,7 +122,7 @@ int drm_irq_install(struct drm_device *dev, int 

Re: [PATCH 09/15] drm/nouveau: Remove references to struct drm_device.pdev

2020-11-24 Thread Sam Ravnborg
Hi Thomas.

On Tue, Nov 24, 2020 at 12:38:18PM +0100, Thomas Zimmermann wrote:
> Using struct drm_device.pdev is deprecated. Convert nouveau to struct
> drm_device.dev. No functional changes.
> 
> Signed-off-by: Thomas Zimmermann 
> Cc: Ben Skeggs 

Suggestion to an alternative implmentation below.

> ---
>  drivers/gpu/drm/nouveau/dispnv04/arb.c  | 12 +++-
>  drivers/gpu/drm/nouveau/dispnv04/disp.h | 14 --
>  drivers/gpu/drm/nouveau/dispnv04/hw.c   | 10 ++
>  drivers/gpu/drm/nouveau/nouveau_abi16.c |  7 ---
>  drivers/gpu/drm/nouveau/nouveau_acpi.c  |  2 +-
>  drivers/gpu/drm/nouveau/nouveau_bios.c  | 11 ---
>  drivers/gpu/drm/nouveau/nouveau_connector.c | 10 ++
>  drivers/gpu/drm/nouveau/nouveau_drm.c   |  5 ++---
>  drivers/gpu/drm/nouveau/nouveau_fbcon.c |  6 --
>  drivers/gpu/drm/nouveau/nouveau_vga.c   | 20 
>  10 files changed, 58 insertions(+), 39 deletions(-)
> 

> diff --git a/drivers/gpu/drm/nouveau/nouveau_bios.c 
> b/drivers/gpu/drm/nouveau/nouveau_bios.c
> index d204ea8a5618..7cc683b8dc7a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bios.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_bios.c
> @@ -110,6 +110,9 @@ static int call_lvds_manufacturer_script(struct 
> drm_device *dev, struct dcb_outp
>   struct nvbios *bios = >vbios;
>   uint8_t sub = bios->data[bios->fp.xlated_entry + script] + 
> (bios->fp.link_c_increment && dcbent->or & DCB_OUTPUT_C ? 1 : 0);
>   uint16_t scriptofs = ROM16(bios->data[bios->init_script_tbls_ptr + sub 
> * 2]);
> +#ifdef __powerpc__
> + struct pci_dev *pdev = to_pci_dev(dev->dev);
> +#endif
Or
int device = 0;
>  
>   if (!bios->fp.xlated_entry || !sub || !scriptofs)
>   return -EINVAL;
> @@ -123,8 +126,8 @@ static int call_lvds_manufacturer_script(struct 
> drm_device *dev, struct dcb_outp
>  #ifdef __powerpc__
>   /* Powerbook specific quirks */
device = to_pci_dev(dev->dev)->device;
if (script == LVDS_RESET && (device == 0x0179 || device == 0x0189 || 
device == 0x0329))

>   if (script == LVDS_RESET &&
> - (dev->pdev->device == 0x0179 || dev->pdev->device == 0x0189 ||
> -  dev->pdev->device == 0x0329))
> + (pdev->device == 0x0179 || pdev->device == 0x0189 ||
> +  pdev->device == 0x0329))
>   nv_write_tmds(dev, dcbent->or, 0, 0x02, 0x72);
>  #endif
>  


> diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c 
> b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
> index 24ec5339efb4..4fc0fa696461 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
> @@ -396,7 +396,9 @@ nouveau_fbcon_create(struct drm_fb_helper *helper,
>   NV_INFO(drm, "allocated %dx%d fb: 0x%llx, bo %p\n",
>   fb->width, fb->height, nvbo->offset, nvbo);
>  
> - vga_switcheroo_client_fb_set(dev->pdev, info);
> + if (dev_is_pci(dev->dev))
> + vga_switcheroo_client_fb_set(to_pci_dev(dev->dev), info);
> +
I cannot see why dev_is_pci() is needed here.
So I am obviously missing something :-(

>   return 0;
>  
>  out_unlock:
> @@ -548,7 +550,7 @@ nouveau_fbcon_init(struct drm_device *dev)
>   int ret;
>  
>   if (!dev->mode_config.num_crtc ||
> - (dev->pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA)
> + (to_pci_dev(dev->dev)->class >> 8) != PCI_CLASS_DISPLAY_VGA)
>   return 0;
>  
>   fbcon = kzalloc(sizeof(struct nouveau_fbdev), GFP_KERNEL);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_vga.c 
> b/drivers/gpu/drm/nouveau/nouveau_vga.c
> index c85dd8afa3c3..7c4b374b3eca 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_vga.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_vga.c
> @@ -87,18 +87,20 @@ nouveau_vga_init(struct nouveau_drm *drm)
>  {
>   struct drm_device *dev = drm->dev;
>   bool runtime = nouveau_pmops_runtime();
> + struct pci_dev *pdev;
>  
>   /* only relevant for PCI devices */
> - if (!dev->pdev)
> + if (!dev_is_pci(dev->dev))
>   return;
> + pdev = to_pci_dev(dev->dev);
>  
> - vga_client_register(dev->pdev, dev, NULL, nouveau_vga_set_decode);
> + vga_client_register(pdev, dev, NULL, nouveau_vga_set_decode);
>  
>   /* don't register Thunderbolt eGPU with vga_switcheroo */
> - if (pci_is_thunderbolt_attached(dev->pdev))
> + if (pci_is_thunderbolt_attached(pdev))
>   return;
>  
> - vga_switcheroo_register_client(dev->pdev, &nouveau_switcheroo_ops, 
> runtime);
> + vga_switcheroo_register_client(pdev, &nouveau_switcheroo_ops, runtime);
>  
>   if (runtime && nouveau_is_v1_dsm() && !nouveau_is_optimus())
>   vga_switcheroo_init_domain_pm_ops(drm->dev->dev, 
&drm->vga_pm_domain);
> @@ -109,17 +111,19 @@ nouveau_vga_fini(struct nouveau_drm *drm)
>  {
>   struct drm_device *dev = drm->dev;
>   bool runtime = nouveau_pmops_runtime();
> + struct pci_dev *pdev;
>  
>   /* only 

Re: [PATCH 05/15] drm/gma500: Remove references to struct drm_device.pdev

2020-11-24 Thread Sam Ravnborg
Hi Thomas.

On Tue, Nov 24, 2020 at 12:38:14PM +0100, Thomas Zimmermann wrote:
> Using struct drm_device.pdev is deprecated. Convert gma500 to struct
> drm_device.dev. No functional changes.
> 
> Signed-off-by: Thomas Zimmermann 
> Cc: Patrik Jakobsson 

This patch includes several whitespace changes too.
It would be nice to avoid these as the patch is already large enough.

Browsing the patch, there were not that many; it just looked like more at
the start of the patch.

Sam

> ---
>  drivers/gpu/drm/gma500/cdv_device.c| 30 +++---
>  drivers/gpu/drm/gma500/cdv_intel_crt.c |  3 +-
>  drivers/gpu/drm/gma500/cdv_intel_lvds.c|  4 +--
>  drivers/gpu/drm/gma500/framebuffer.c   |  9 +++---
>  drivers/gpu/drm/gma500/gma_device.c|  3 +-
>  drivers/gpu/drm/gma500/gma_display.c   |  4 +--
>  drivers/gpu/drm/gma500/gtt.c   | 20 ++--
>  drivers/gpu/drm/gma500/intel_bios.c|  6 ++--
>  drivers/gpu/drm/gma500/intel_gmbus.c   |  4 +--
>  drivers/gpu/drm/gma500/intel_i2c.c |  2 +-
>  drivers/gpu/drm/gma500/mdfld_device.c  |  4 ++-
>  drivers/gpu/drm/gma500/mdfld_dsi_dpi.c |  8 ++---
>  drivers/gpu/drm/gma500/mid_bios.c  |  9 --
>  drivers/gpu/drm/gma500/oaktrail_device.c   |  5 +--
>  drivers/gpu/drm/gma500/oaktrail_lvds.c |  2 +-
>  drivers/gpu/drm/gma500/oaktrail_lvds_i2c.c |  2 +-
>  drivers/gpu/drm/gma500/opregion.c  |  3 +-
>  drivers/gpu/drm/gma500/power.c | 13 
>  drivers/gpu/drm/gma500/psb_drv.c   | 16 +-
>  drivers/gpu/drm/gma500/psb_drv.h   |  8 ++---
>  drivers/gpu/drm/gma500/psb_intel_lvds.c|  6 ++--
>  drivers/gpu/drm/gma500/psb_intel_sdvo.c|  2 +-
>  drivers/gpu/drm/gma500/tc35876x-dsi-lvds.c | 36 +++---
>  23 files changed, 109 insertions(+), 90 deletions(-)
> 
> diff --git a/drivers/gpu/drm/gma500/cdv_device.c 
> b/drivers/gpu/drm/gma500/cdv_device.c
> index e75293e4a52f..19e055dbd4c2 100644
> --- a/drivers/gpu/drm/gma500/cdv_device.c
> +++ b/drivers/gpu/drm/gma500/cdv_device.c
> @@ -95,13 +95,14 @@ static u32 cdv_get_max_backlight(struct drm_device *dev)
>  static int cdv_get_brightness(struct backlight_device *bd)
>  {
>   struct drm_device *dev = bl_get_data(bd);
> + struct pci_dev *pdev = to_pci_dev(dev->dev);
>   u32 val = REG_READ(BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
>  
>   if (cdv_backlight_combination_mode(dev)) {
>   u8 lbpc;
>  
>   val &= ~1;
> -		pci_read_config_byte(dev->pdev, 0xF4, &lbpc);
> +		pci_read_config_byte(pdev, 0xF4, &lbpc);
>   val *= lbpc;
>   }
>   return (val * 100)/cdv_get_max_backlight(dev);
> @@ -111,6 +112,7 @@ static int cdv_get_brightness(struct backlight_device *bd)
>  static int cdv_set_brightness(struct backlight_device *bd)
>  {
>   struct drm_device *dev = bl_get_data(bd);
> + struct pci_dev *pdev = to_pci_dev(dev->dev);
>   int level = bd->props.brightness;
>   u32 blc_pwm_ctl;
>  
> @@ -128,7 +130,7 @@ static int cdv_set_brightness(struct backlight_device *bd)
>   lbpc = level * 0xfe / max + 1;
>   level /= lbpc;
>  
> - pci_write_config_byte(dev->pdev, 0xF4, lbpc);
> + pci_write_config_byte(pdev, 0xF4, lbpc);
>   }
>  
>   blc_pwm_ctl = REG_READ(BLC_PWM_CTL) & ~BACKLIGHT_DUTY_CYCLE_MASK;
> @@ -205,8 +207,9 @@ static inline void CDV_MSG_WRITE32(int domain, uint port, 
> uint offset,
>  static void cdv_init_pm(struct drm_device *dev)
>  {
>   struct drm_psb_private *dev_priv = dev->dev_private;
> + struct pci_dev *pdev = to_pci_dev(dev->dev);
>   u32 pwr_cnt;
> - int domain = pci_domain_nr(dev->pdev->bus);
> + int domain = pci_domain_nr(pdev->bus);
>   int i;
>  
>   dev_priv->apm_base = CDV_MSG_READ32(domain, PSB_PUNIT_PORT,
> @@ -234,6 +237,8 @@ static void cdv_init_pm(struct drm_device *dev)
>  
>  static void cdv_errata(struct drm_device *dev)
>  {
> + struct pci_dev *pdev = to_pci_dev(dev->dev);
> +
>   /* Disable bonus launch.
>*  CPU and GPU competes for memory and display misses updates and
>*  flickers. Worst with dual core, dual displays.
> @@ -242,7 +247,7 @@ static void cdv_errata(struct drm_device *dev)
>*  Bonus Launch to work around the issue, by degrading
>*  performance.
>*/
> -  CDV_MSG_WRITE32(pci_domain_nr(dev->pdev->bus), 3, 0x30, 0x08027108);
> +  CDV_MSG_WRITE32(pci_domain_nr(pdev->bus), 3, 0x30, 0x08027108);
>  }
>  
>  /**
> @@ -255,12 +260,13 @@ static void cdv_errata(struct drm_device *dev)
>  static int cdv_save_display_registers(struct drm_device *dev)
>  {
>   struct drm_psb_private *dev_priv = dev->dev_private;
> + struct pci_dev *pdev = to_pci_dev(dev->dev);
>   struct psb_save_area *regs = &dev_priv->regs;
>   struct drm_connector *connector;
>  
>   dev_dbg(dev->dev, "Saving GPU registers.\n");
> 

Re: [PATCH] drm/amd/display: Extends Tune min clk for MPO for RV

2020-11-24 Thread Harry Wentland

On 2020-11-24 10:55 a.m., Pratik Vishwakarma wrote:

[Why]
Changes in video resolution during playback cause
dispclk to ramp higher but set incompatible fclk
and dcfclk values for MPO.

[How]
Check for MPO and set proper min clk values
for this case also. This was missed in the previous
patch.

Signed-off-by: Pratik Vishwakarma 
---
  .../display/dc/clk_mgr/dcn10/rv1_clk_mgr.c| 19 ---
  1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
index 75b8240ed059..ed087a9e73bb 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
@@ -275,9 +275,22 @@ static void rv1_update_clocks(struct clk_mgr *clk_mgr_base,
if (pp_smu->set_hard_min_fclk_by_freq &&
pp_smu->set_hard_min_dcfclk_by_freq &&
pp_smu->set_min_deep_sleep_dcfclk) {
-		pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu, new_clocks->fclk_khz / 1000);
-		pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu, new_clocks->dcfclk_khz / 1000);
-		pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu, (new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		// Only increase clocks when display is active and MPO is enabled


Why do we want to only do this when MPO is enabled?

Harry


+		if (display_count && is_mpo_enabled(context)) {
+			pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu,
+				((new_clocks->fclk_khz / 1000) * 101) / 100);
+			pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu,
+				((new_clocks->dcfclk_khz / 1000) * 101) / 100);
+			pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu,
+				(new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		} else {
+			pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu,
+				new_clocks->fclk_khz / 1000);
+			pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu,
+				new_clocks->dcfclk_khz / 1000);
+			pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu,
+				(new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		}
}
}
  


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH v3 07/12] drm/sched: Prevent any job recoveries after device is unplugged.

2020-11-24 Thread Luben Tuikov
On 2020-11-24 12:40 p.m., Christian König wrote:
> Am 24.11.20 um 18:11 schrieb Luben Tuikov:
>> On 2020-11-24 2:50 a.m., Christian König wrote:
>>> Am 24.11.20 um 02:12 schrieb Luben Tuikov:
 On 2020-11-23 3:06 a.m., Christian König wrote:
> Am 23.11.20 um 06:37 schrieb Andrey Grodzovsky:
>> On 11/22/20 6:57 AM, Christian König wrote:
>>> Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:
 No point to try recovery if device is gone, it's meaningless.
>>> I think that this should go into the device specific recovery
>>> function and not in the scheduler.
>> The timeout timer is rearmed here, so this prevents any new recovery
>> work from restarting from here
>> after drm_dev_unplug was executed from amdgpu_pci_remove. It will not
>> cover other places like
>> job cleanup or starting new jobs, but those should stop once the
>> scheduler thread is stopped later.
> Yeah, but this is rather unclean. We should probably return an error
> code instead if the timer should be rearmed or not.
 Christian, this is exactly my work I told you about
 last week on Wednesday in our weekly meeting. And
 which I wrote to you in an email last year about this
 time.
>>> Yeah, that's why I'm suggesting it here as well.
>> It seems you're suggesting that Andrey do it, while
>> all too well you know I've been working on this
>> for some time now.
> 
> Changing the return value is just a minimal change and I didn't want to 
> block Andrey in any way.
> 

But it is the suggestion I had last year this time.
It is the whole root of my changes--it's a gamechanger.

>>
>> I wrote you about this last year same time
>> in an email. And I discussed it on the Wednesday
>> meeting.
>>
>> You could've mentioned that here the first time.
>>
 So what do we do now?
>>> Split your patches into smaller parts and submit them chunk by chunk.
>>>
>>> E.g. renames first and then functional changes grouped by area they change.
>> I have, but my final patch, a tiny one but which implements
>> the core reason for the change seems buggy, and I'm looking
>> for a way to debug it.
> 
> Just send it out in chunks, e.g. non functional changes like renames 
> shouldn't cause any problems and having them in the branch early 
> minimizes conflicts with work from others.

Yeah, I agree, that's a good idea.

My final tiny patch is causing me grief and I'd rather
have had it working. :'-(

Regards,
Luben

> 
> Regards,
> Christian.
> 
>>
>> Regards,
>> Luben
>>
>>
>>> Regards,
>>> Christian.
>>>
 I can submit those changes without the last part,
 which builds on this change.

 I'm still testing the last part and was hoping
 to submit it all in one sequence of patches,
 after my testing.

 Regards,
 Luben

> Christian.
>
>> Andrey
>>
>>
>>> Christian.
>>>
 Signed-off-by: Andrey Grodzovsky 
 ---
     drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  2 +-
     drivers/gpu/drm/etnaviv/etnaviv_sched.c   |  3 ++-
     drivers/gpu/drm/lima/lima_sched.c |  3 ++-
     drivers/gpu/drm/panfrost/panfrost_job.c   |  2 +-
     drivers/gpu/drm/scheduler/sched_main.c    | 15 ++-
     drivers/gpu/drm/v3d/v3d_sched.c   | 15 ++-
     include/drm/gpu_scheduler.h   |  6 +-
     7 files changed, 35 insertions(+), 11 deletions(-)

 diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
 b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
 index d56f402..d0b0021 100644
 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
 +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
 @@ -487,7 +487,7 @@ int amdgpu_fence_driver_init_ring(struct
 amdgpu_ring *ring,
       r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
    num_hw_submission, amdgpu_job_hang_limit,
 -   timeout, ring->name);
 +   timeout, ring->name, &adev->ddev);
     if (r) {
     DRM_ERROR("Failed to create scheduler on ring %s.\n",
       ring->name);
 diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
 b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
 index cd46c88..7678287 100644
 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
 +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
 @@ -185,7 +185,8 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
       ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
      etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
 - msecs_to_jiffies(500), dev_name(gpu->dev));
 + msecs_to_jiffies(500), dev_name(gpu->dev),
 + gpu->drm);
     if (ret)
     return ret;
     diff --git 

Re: [PATCH v3 07/12] drm/sched: Prevent any job recoveries after device is unplugged.

2020-11-24 Thread Luben Tuikov
On 2020-11-24 12:17 p.m., Andrey Grodzovsky wrote:
> 
> On 11/24/20 12:11 PM, Luben Tuikov wrote:
>> On 2020-11-24 2:50 a.m., Christian König wrote:
>>> Am 24.11.20 um 02:12 schrieb Luben Tuikov:
 On 2020-11-23 3:06 a.m., Christian König wrote:
> Am 23.11.20 um 06:37 schrieb Andrey Grodzovsky:
>> On 11/22/20 6:57 AM, Christian König wrote:
>>> Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:
 No point to try recovery if device is gone, it's meaningless.
>>> I think that this should go into the device specific recovery
>>> function and not in the scheduler.
>> The timeout timer is rearmed here, so this prevents any new recovery
>> work from restarting from here
>> after drm_dev_unplug was executed from amdgpu_pci_remove. It will not
>> cover other places like
>> job cleanup or starting new jobs, but those should stop once the
>> scheduler thread is stopped later.
> Yeah, but this is rather unclean. We should probably return an error
> code instead if the timer should be rearmed or not.
 Christian, this is exactly my work I told you about
 last week on Wednesday in our weekly meeting. And
 which I wrote to you in an email last year about this
 time.
>>> Yeah, that's why I'm suggesting it here as well.
>> It seems you're suggesting that Andrey do it, while
>> all too well you know I've been working on this
>> for some time now.
>>
>> I wrote you about this last year same time
>> in an email. And I discussed it on the Wednesday
>> meeting.
>>
>> You could've mentioned that here the first time.
> 
> 
> Luben, I actually strongly prefer that you do it and share your patch with me 
> since I don't
> want to do unneeded refactoring which will conflict with your work. Also, 
> please
> use drm-misc for this since it's not amdgpu specific work and will be easier 
> for me.
> 
> Andrey

No problem, Andrey--will do.

Regards,
Luben

> 
> 
>>
 So what do we do now?
>>> Split your patches into smaller parts and submit them chunk by chunk.
>>>
>>> E.g. renames first and then functional changes grouped by area they change.
>> I have, but my final patch, a tiny one but which implements
>> the core reason for the change seems buggy, and I'm looking
>> for a way to debug it.
>>
>> Regards,
>> Luben
>>
>>
>>> Regards,
>>> Christian.
>>>
 I can submit those changes without the last part,
 which builds on this change.

 I'm still testing the last part and was hoping
 to submit it all in one sequence of patches,
 after my testing.

 Regards,
 Luben

> Christian.
>
>> Andrey
>>
>>
>>> Christian.
>>>
 Signed-off-by: Andrey Grodzovsky 
 ---
     drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  2 +-
     drivers/gpu/drm/etnaviv/etnaviv_sched.c   |  3 ++-
     drivers/gpu/drm/lima/lima_sched.c |  3 ++-
     drivers/gpu/drm/panfrost/panfrost_job.c   |  2 +-
     drivers/gpu/drm/scheduler/sched_main.c    | 15 ++-
     drivers/gpu/drm/v3d/v3d_sched.c   | 15 ++-
     include/drm/gpu_scheduler.h   |  6 +-
     7 files changed, 35 insertions(+), 11 deletions(-)

 diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
 b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
 index d56f402..d0b0021 100644
 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
 +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
 @@ -487,7 +487,7 @@ int amdgpu_fence_driver_init_ring(struct
 amdgpu_ring *ring,
       r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
    num_hw_submission, amdgpu_job_hang_limit,
 -   timeout, ring->name);
 +   timeout, ring->name, &adev->ddev);
     if (r) {
     DRM_ERROR("Failed to create scheduler on ring %s.\n",
       ring->name);
 diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
 b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
 index cd46c88..7678287 100644
 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
 +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
 @@ -185,7 +185,8 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
       ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
      etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
 - msecs_to_jiffies(500), dev_name(gpu->dev));
 + msecs_to_jiffies(500), dev_name(gpu->dev),
 + gpu->drm);
     if (ret)
     return ret;
     diff --git a/drivers/gpu/drm/lima/lima_sched.c
 b/drivers/gpu/drm/lima/lima_sched.c
 index dc6df9e..8a7e5d7ca 100644
 --- a/drivers/gpu/drm/lima/lima_sched.c
 +++ b/drivers/gpu/drm/lima/lima_sched.c
 

Re: [PATCH v3 07/12] drm/sched: Prevent any job recoveries after device is unplugged.

2020-11-24 Thread Christian König

Am 24.11.20 um 18:11 schrieb Luben Tuikov:

On 2020-11-24 2:50 a.m., Christian König wrote:

Am 24.11.20 um 02:12 schrieb Luben Tuikov:

On 2020-11-23 3:06 a.m., Christian König wrote:

Am 23.11.20 um 06:37 schrieb Andrey Grodzovsky:

On 11/22/20 6:57 AM, Christian König wrote:

Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:

No point to try recovery if device is gone, it's meaningless.

I think that this should go into the device specific recovery
function and not in the scheduler.

The timeout timer is rearmed here, so this prevents any new recovery
work from restarting from here
after drm_dev_unplug was executed from amdgpu_pci_remove. It will not
cover other places like
job cleanup or starting new jobs, but those should stop once the
scheduler thread is stopped later.

Yeah, but this is rather unclean. We should probably return an error
code instead if the timer should be rearmed or not.

Christian, this is exactly my work I told you about
last week on Wednesday in our weekly meeting. And
which I wrote to you in an email last year about this
time.

Yeah, that's why I'm suggesting it here as well.

It seems you're suggesting that Andrey do it, while
all too well you know I've been working on this
for some time now.


Changing the return value is just a minimal change and I didn't want to 
block Andrey in any way.




I wrote you about this last year same time
in an email. And I discussed it on the Wednesday
meeting.

You could've mentioned that here the first time.


So what do we do now?

Split your patches into smaller parts and submit them chunk by chunk.

E.g. renames first and then functional changes grouped by area they change.

I have, but my final patch, a tiny one but which implements
the core reason for the change seems buggy, and I'm looking
for a way to debug it.


Just send it out in chunks, e.g. non functional changes like renames 
shouldn't cause any problems and having them in the branch early 
minimizes conflicts with work from others.


Regards,
Christian.



Regards,
Luben



Regards,
Christian.


I can submit those changes without the last part,
which builds on this change.

I'm still testing the last part and was hoping
to submit it all in one sequence of patches,
after my testing.

Regards,
Luben


Christian.


Andrey



Christian.


Signed-off-by: Andrey Grodzovsky 
---
    drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  2 +-
    drivers/gpu/drm/etnaviv/etnaviv_sched.c   |  3 ++-
    drivers/gpu/drm/lima/lima_sched.c |  3 ++-
    drivers/gpu/drm/panfrost/panfrost_job.c   |  2 +-
    drivers/gpu/drm/scheduler/sched_main.c    | 15 ++-
    drivers/gpu/drm/v3d/v3d_sched.c   | 15 ++-
    include/drm/gpu_scheduler.h   |  6 +-
    7 files changed, 35 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index d56f402..d0b0021 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -487,7 +487,7 @@ int amdgpu_fence_driver_init_ring(struct
amdgpu_ring *ring,
      r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
   num_hw_submission, amdgpu_job_hang_limit,
-   timeout, ring->name);
+   timeout, ring->name, &adev->ddev);
    if (r) {
    DRM_ERROR("Failed to create scheduler on ring %s.\n",
      ring->name);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index cd46c88..7678287 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -185,7 +185,8 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
      ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
     etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
- msecs_to_jiffies(500), dev_name(gpu->dev));
+ msecs_to_jiffies(500), dev_name(gpu->dev),
+ gpu->drm);
    if (ret)
    return ret;
    diff --git a/drivers/gpu/drm/lima/lima_sched.c
b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e..8a7e5d7ca 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -505,7 +505,8 @@ int lima_sched_pipe_init(struct lima_sched_pipe
*pipe, const char *name)
      return drm_sched_init(&pipe->base, &lima_sched_ops, 1,
      lima_job_hang_limit, msecs_to_jiffies(timeout),
-  name);
+  name,
+  pipe->ldev->ddev);
    }
      void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
b/drivers/gpu/drm/panfrost/panfrost_job.c
index 30e7b71..37b03b01 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -520,7 +520,7 @@ int panfrost_job_init(struct panfrost_device
*pfdev)
    ret = drm_sched_init(&js->queue[j].sched,
     &panfrost_sched_ops,
  

Re: [PATCH] drm/amdgpu/display: drop leftover function definition

2020-11-24 Thread Nirmoy

Reviewed-by: Nirmoy Das 

On 11/24/20 5:52 PM, Alex Deucher wrote:

No longer exists.

Fixes: fa7580010fde1b ("drm/amd/display: init soc bounding box for dcn3.01.")
Signed-off-by: Alex Deucher 
---
  drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c | 2 --
  1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
index 124ae5253d4b..7e95bd1e9e53 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
@@ -1223,8 +1223,6 @@ static const struct resource_create_funcs 
res_create_maximus_funcs = {
.create_hwseq = dcn301_hwseq_create,
  };
  
-static void dcn301_pp_smu_destroy(struct pp_smu_funcs **pp_smu);

-
  static void dcn301_destruct(struct dcn301_resource_pool *pool)
  {
unsigned int i;



Re: [PATCH v3 07/12] drm/sched: Prevent any job recoveries after device is unplugged.

2020-11-24 Thread Andrey Grodzovsky


On 11/24/20 12:11 PM, Luben Tuikov wrote:

On 2020-11-24 2:50 a.m., Christian König wrote:

Am 24.11.20 um 02:12 schrieb Luben Tuikov:

On 2020-11-23 3:06 a.m., Christian König wrote:

Am 23.11.20 um 06:37 schrieb Andrey Grodzovsky:

On 11/22/20 6:57 AM, Christian König wrote:

Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:

No point to try recovery if device is gone, it's meaningless.

I think that this should go into the device specific recovery
function and not in the scheduler.

The timeout timer is rearmed here, so this prevents any new recovery
work from restarting from here
after drm_dev_unplug was executed from amdgpu_pci_remove. It will not
cover other places like
job cleanup or starting new jobs, but those should stop once the
scheduler thread is stopped later.

Yeah, but this is rather unclean. We should probably return an error
code instead if the timer should be rearmed or not.

Christian, this is exactly my work I told you about
last week on Wednesday in our weekly meeting. And
which I wrote to you in an email last year about this
time.

Yeah, that's why I'm suggesting it here as well.

It seems you're suggesting that Andrey do it, while
all too well you know I've been working on this
for some time now.

I wrote you about this last year same time
in an email. And I discussed it on the Wednesday
meeting.

You could've mentioned that here the first time.



Luben, I actually strongly prefer that you do it and share your patch with me 
since I don't
want to do unneeded refactoring which will conflict with your work. Also, 
please
use drm-misc for this since it's not amdgpu specific work and will be easier for 
me.

Andrey





So what do we do now?

Split your patches into smaller parts and submit them chunk by chunk.

E.g. renames first and then functional changes grouped by area they change.

I have, but my final patch, a tiny one but which implements
the core reason for the change seems buggy, and I'm looking
for a way to debug it.

Regards,
Luben



Regards,
Christian.


I can submit those changes without the last part,
which builds on this change.

I'm still testing the last part and was hoping
to submit it all in one sequence of patches,
after my testing.

Regards,
Luben


Christian.


Andrey



Christian.


Signed-off-by: Andrey Grodzovsky 
---
    drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  2 +-
    drivers/gpu/drm/etnaviv/etnaviv_sched.c   |  3 ++-
    drivers/gpu/drm/lima/lima_sched.c |  3 ++-
    drivers/gpu/drm/panfrost/panfrost_job.c   |  2 +-
    drivers/gpu/drm/scheduler/sched_main.c    | 15 ++-
    drivers/gpu/drm/v3d/v3d_sched.c   | 15 ++-
    include/drm/gpu_scheduler.h   |  6 +-
    7 files changed, 35 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index d56f402..d0b0021 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -487,7 +487,7 @@ int amdgpu_fence_driver_init_ring(struct
amdgpu_ring *ring,
      r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
   num_hw_submission, amdgpu_job_hang_limit,
-   timeout, ring->name);
+   timeout, ring->name, &adev->ddev);
    if (r) {
    DRM_ERROR("Failed to create scheduler on ring %s.\n",
      ring->name);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index cd46c88..7678287 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -185,7 +185,8 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
      ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
     etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
- msecs_to_jiffies(500), dev_name(gpu->dev));
+ msecs_to_jiffies(500), dev_name(gpu->dev),
+ gpu->drm);
    if (ret)
    return ret;
    diff --git a/drivers/gpu/drm/lima/lima_sched.c
b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e..8a7e5d7ca 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -505,7 +505,8 @@ int lima_sched_pipe_init(struct lima_sched_pipe
*pipe, const char *name)
      return drm_sched_init(&pipe->base, &lima_sched_ops, 1,
      lima_job_hang_limit, msecs_to_jiffies(timeout),
-  name);
+  name,
+  pipe->ldev->ddev);
    }
      void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
b/drivers/gpu/drm/panfrost/panfrost_job.c
index 30e7b71..37b03b01 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -520,7 +520,7 @@ int panfrost_job_init(struct panfrost_device
*pfdev)
    ret = drm_sched_init(&js->queue[j].sched,
     &panfrost_sched_ops,
     1, 0, 

Re: [PATCH v3 07/12] drm/sched: Prevent any job recoveries after device is unplugged.

2020-11-24 Thread Luben Tuikov
On 2020-11-24 2:50 a.m., Christian König wrote:
> Am 24.11.20 um 02:12 schrieb Luben Tuikov:
>> On 2020-11-23 3:06 a.m., Christian König wrote:
>>> Am 23.11.20 um 06:37 schrieb Andrey Grodzovsky:
 On 11/22/20 6:57 AM, Christian König wrote:
> Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:
>> No point to try recovery if device is gone, it's meaningless.
> I think that this should go into the device specific recovery
> function and not in the scheduler.

 The timeout timer is rearmed here, so this prevents any new recovery
 work from restarting from here
 after drm_dev_unplug was executed from amdgpu_pci_remove. It will not
 cover other places like
 job cleanup or starting new jobs, but those should stop once the
 scheduler thread is stopped later.
>>> Yeah, but this is rather unclean. We should probably return an error
>>> code instead if the timer should be rearmed or not.
>> Christian, this is exactly my work I told you about
>> last week on Wednesday in our weekly meeting. And
>> which I wrote to you in an email last year about this
>> time.
> 
> Yeah, that's why I'm suggesting it here as well.

It seems you're suggesting that Andrey do it, while
all too well you know I've been working on this
for some time now.

I wrote you about this last year same time
in an email. And I discussed it on the Wednesday
meeting.

You could've mentioned that here the first time.

> 
>> So what do we do now?
> 
> Split your patches into smaller parts and submit them chunk by chunk.
> 
> E.g. renames first and then functional changes grouped by area they change.

I have, but my final patch, a tiny one but which implements
the core reason for the change seems buggy, and I'm looking
for a way to debug it.

Regards,
Luben


> 
> Regards,
> Christian.
> 
>>
>> I can submit those changes without the last part,
>> which builds on this change.
>>
>> I'm still testing the last part and was hoping
>> to submit it all in one sequence of patches,
>> after my testing.
>>
>> Regards,
>> Luben
>>
>>> Christian.
>>>
 Andrey


> Christian.
>
>> Signed-off-by: Andrey Grodzovsky 
>> ---
>>    drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  2 +-
>>    drivers/gpu/drm/etnaviv/etnaviv_sched.c   |  3 ++-
>>    drivers/gpu/drm/lima/lima_sched.c |  3 ++-
>>    drivers/gpu/drm/panfrost/panfrost_job.c   |  2 +-
>>    drivers/gpu/drm/scheduler/sched_main.c    | 15 ++-
>>    drivers/gpu/drm/v3d/v3d_sched.c   | 15 ++-
>>    include/drm/gpu_scheduler.h   |  6 +-
>>    7 files changed, 35 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>> index d56f402..d0b0021 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>> @@ -487,7 +487,7 @@ int amdgpu_fence_driver_init_ring(struct
>> amdgpu_ring *ring,
>>      r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
>>   num_hw_submission, amdgpu_job_hang_limit,
>> -   timeout, ring->name);
>> +   timeout, ring->name, &adev->ddev);
>>    if (r) {
>>    DRM_ERROR("Failed to create scheduler on ring %s.\n",
>>      ring->name);
>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>> b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>> index cd46c88..7678287 100644
>> --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>> @@ -185,7 +185,8 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
>>      ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
>>     etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
>> - msecs_to_jiffies(500), dev_name(gpu->dev));
>> + msecs_to_jiffies(500), dev_name(gpu->dev),
>> + gpu->drm);
>>    if (ret)
>>    return ret;
>>    diff --git a/drivers/gpu/drm/lima/lima_sched.c
>> b/drivers/gpu/drm/lima/lima_sched.c
>> index dc6df9e..8a7e5d7ca 100644
>> --- a/drivers/gpu/drm/lima/lima_sched.c
>> +++ b/drivers/gpu/drm/lima/lima_sched.c
>> @@ -505,7 +505,8 @@ int lima_sched_pipe_init(struct lima_sched_pipe
>> *pipe, const char *name)
>>      return drm_sched_init(&pipe->base, &lima_sched_ops, 1,
>>      lima_job_hang_limit, msecs_to_jiffies(timeout),
>> -  name);
>> +  name,
>> +  pipe->ldev->ddev);
>>    }
>>      void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
>> b/drivers/gpu/drm/panfrost/panfrost_job.c
>> index 30e7b71..37b03b01 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>> +++ 

[PATCH] drm/amdgpu/display: drop leftover function definition

2020-11-24 Thread Alex Deucher
No longer exists.

Fixes: fa7580010fde1b ("drm/amd/display: init soc bounding box for dcn3.01.")
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c 
b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
index 124ae5253d4b..7e95bd1e9e53 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
@@ -1223,8 +1223,6 @@ static const struct resource_create_funcs 
res_create_maximus_funcs = {
.create_hwseq = dcn301_hwseq_create,
 };
 
-static void dcn301_pp_smu_destroy(struct pp_smu_funcs **pp_smu);
-
 static void dcn301_destruct(struct dcn301_resource_pool *pool)
 {
unsigned int i;
-- 
2.25.4



Re: [PATCH v3 05/12] drm/ttm: Expose ttm_tt_unpopulate for driver use

2020-11-24 Thread Christian König

Am 24.11.20 um 17:22 schrieb Andrey Grodzovsky:


On 11/24/20 2:41 AM, Christian König wrote:

Am 23.11.20 um 22:08 schrieb Andrey Grodzovsky:


On 11/23/20 3:41 PM, Christian König wrote:

Am 23.11.20 um 21:38 schrieb Andrey Grodzovsky:


On 11/23/20 3:20 PM, Christian König wrote:

Am 23.11.20 um 21:05 schrieb Andrey Grodzovsky:


On 11/25/20 5:42 AM, Christian König wrote:

Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:

It's needed to drop iommu backed pages on device unplug
before device's IOMMU group is released.


It would be cleaner if we could do the whole handling in TTM. I 
also need to double check what you are doing with this function.


Christian.



Check patch "drm/amdgpu: Register IOMMU topology notifier per 
device." to see
how I use it. I don't see why this should go into TTM mid-layer 
- the stuff I do inside
is vendor specific and also I don't think TTM is explicitly 
aware of IOMMU ?
Do you mean you prefer the IOMMU notifier to be registered from 
within TTM

and then use a hook to call into vendor specific handler ?


No, that is really vendor specific.

What I meant is to have a function like 
ttm_resource_manager_evict_all() which you only need to call and 
all tt objects are unpopulated.



So instead of this BO list i create and later iterate in amdgpu 
from the IOMMU patch you just want to do it within

TTM with a single function ? Makes much more sense.


Yes, exactly.

The list_empty() checks we have in TTM for the LRU are actually not 
the best idea, we should now check the pin_count instead. This way 
we could also have a list of the pinned BOs in TTM.



So from my IOMMU topology handler I will iterate the TTM LRU for the 
unpinned BOs and this new function for the pinned ones  ?
It's probably a good idea to combine both iterations into this new 
function to cover all the BOs allocated on the device.


Yes, that's what I had in my mind as well.






BTW: Have you thought about what happens when we unpopulate a BO 
while we still try to use a kernel mapping for it? That could have 
unforeseen consequences.



Are you asking what happens to kmap or vmap style mapped CPU 
accesses once we drop all the DMA backing pages for a particular BO 
? Because for user mappings
(mmap) we took care of this with dummy page reroute but indeed 
nothing was done for in kernel CPU mappings.


Yes exactly that.

In other words what happens if we free the ring buffer while the 
kernel still writes to it?


Christian.



While we can't control user application accesses to the mapped buffers 
explicitly and hence we use page fault rerouting
I am thinking that in this  case we may be able to sprinkle 
drm_dev_enter/exit in any such sensitive place where we might

CPU access a DMA buffer from the kernel ?


Yes, I fear we are going to need that.

Things like CPU page table updates, ring buffer accesses and FW memcpy 
? Is there other places ?


Puh, good question. I have no idea.

Another point is that at this point the driver shouldn't access any 
such buffers as we are at the process finishing the device.
AFAIK there is no page fault mechanism for kernel mappings so I don't 
think there is anything else to do ?


Well there is a page fault handler for kernel mappings, but that one 
just prints the stack trace into the system log and calls BUG(); :)


Long story short we need to avoid any access to released pages after 
unplug. No matter if it's from the kernel or userspace.


Regards,
Christian.



Andrey




Re: [PATCH] drm/amd/powerplay: fix spelling mistake "smu_state_memroy_block" -> "smu_state_memory_block"

2020-11-24 Thread Alex Deucher
Applied.  Thanks!

Alex

On Mon, Nov 23, 2020 at 7:42 PM Quan, Evan  wrote:
>
> [AMD Official Use Only - Internal Distribution Only]
>
> Reviewed-by: Evan Quan 
>
> -Original Message-
> From: Colin King 
> Sent: Monday, November 23, 2020 6:54 PM
> To: Deucher, Alexander ; Koenig, Christian 
> ; David Airlie ; Daniel Vetter 
> ; Quan, Evan ; Wang, Kevin(Yang) 
> ; Gui, Jack ; 
> amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org
> Cc: kernel-janit...@vger.kernel.org; linux-ker...@vger.kernel.org
> Subject: [PATCH] drm/amd/powerplay: fix spelling mistake 
> "smu_state_memroy_block" -> "smu_state_memory_block"
>
> From: Colin Ian King 
>
> The struct name smu_state_memroy_block contains a spelling mistake, rename it 
> to smu_state_memory_block
>
> Fixes: 8554e67d6e22 ("drm/amd/powerplay: implement power_dpm_state sys 
> interface for SMU11")
> Signed-off-by: Colin Ian King 
> ---
>  drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h 
> b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> index 7550757cc059..a559ea2204c1 100644
> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> @@ -99,7 +99,7 @@ struct smu_state_display_block {
>  bool  enable_vari_bright;
>  };
>
> -struct smu_state_memroy_block {
> +struct smu_state_memory_block {
>  bool  dll_off;
>  uint8_t m3arb;
>  uint8_t unused[3];
> @@ -146,7 +146,7 @@ struct smu_power_state {
>  struct smu_state_validation_block validation;
>  struct smu_state_pcie_block   pcie;
>  struct smu_state_display_blockdisplay;
> -struct smu_state_memroy_block memory;
> +struct smu_state_memory_block memory;
>  struct smu_state_software_algorithm_block software;
>  struct smu_uvd_clocks uvd_clocks;
>  struct smu_hw_power_state hardware;
> --
> 2.28.0
>
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 40/40] drm/amd/amdgpu/gmc_v9_0: Supply some missing function doc descriptions

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:21 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c:382:23: warning: 
> ‘ecc_umc_mcumc_status_addrs’ defined but not used [-Wunused-const-variable=]
>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c:720: warning: Function parameter or 
> member 'vmhub' not described in 'gmc_v9_0_flush_gpu_tlb'
>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c:836: warning: Function parameter or 
> member 'flush_type' not described in 'gmc_v9_0_flush_gpu_tlb_pasid'
>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c:836: warning: Function parameter or 
> member 'all_hub' not described in 'gmc_v9_0_flush_gpu_tlb_pasid'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied with minor changes.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> index fbee43b4ba64d..a83743ab3e8bb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> @@ -675,6 +675,7 @@ static bool 
> gmc_v9_0_get_atc_vmid_pasid_mapping_info(struct amdgpu_device *adev,
>   *
>   * @adev: amdgpu_device pointer
>   * @vmid: vm instance to flush
> + * @vmhub: vmhub type
>   * @flush_type: the flush type
>   *
>   * Flush the TLB for the requested page table using certain type.
> @@ -791,6 +792,8 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device 
> *adev, uint32_t vmid,
>   *
>   * @adev: amdgpu_device pointer
>   * @pasid: pasid to be flush
> + * @flush_type: the flush type
> + * @all_hub: Used with PACKET3_INVALIDATE_TLBS_ALL_HUB()
>   *
>   * Flush the TLB for the requested pasid.
>   */
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 39/40] drm/amd/amdgpu/gmc_v9_0: Remove unused table 'ecc_umc_mcumc_status_addrs'

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c:382:23: warning: 
> ‘ecc_umc_mcumc_status_addrs’ defined but not used [-Wunused-const-variable=]
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 35 ---
>  1 file changed, 35 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> index 0c3421d587e87..fbee43b4ba64d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> @@ -379,41 +379,6 @@ static const uint32_t ecc_umc_mcumc_ctrl_mask_addrs[] = {
> (0x001d43e0 + 0x1800),
>  };
>
> -static const uint32_t ecc_umc_mcumc_status_addrs[] = {
> -   (0x000143c2 + 0x0000),
> -   (0x000143c2 + 0x0800),
> -   (0x000143c2 + 0x1000),
> -   (0x000143c2 + 0x1800),
> -   (0x000543c2 + 0x0000),
> -   (0x000543c2 + 0x0800),
> -   (0x000543c2 + 0x1000),
> -   (0x000543c2 + 0x1800),
> -   (0x000943c2 + 0x0000),
> -   (0x000943c2 + 0x0800),
> -   (0x000943c2 + 0x1000),
> -   (0x000943c2 + 0x1800),
> -   (0x000d43c2 + 0x0000),
> -   (0x000d43c2 + 0x0800),
> -   (0x000d43c2 + 0x1000),
> -   (0x000d43c2 + 0x1800),
> -   (0x001143c2 + 0x0000),
> -   (0x001143c2 + 0x0800),
> -   (0x001143c2 + 0x1000),
> -   (0x001143c2 + 0x1800),
> -   (0x001543c2 + 0x0000),
> -   (0x001543c2 + 0x0800),
> -   (0x001543c2 + 0x1000),
> -   (0x001543c2 + 0x1800),
> -   (0x001943c2 + 0x0000),
> -   (0x001943c2 + 0x0800),
> -   (0x001943c2 + 0x1000),
> -   (0x001943c2 + 0x1800),
> -   (0x001d43c2 + 0x0000),
> -   (0x001d43c2 + 0x0800),
> -   (0x001d43c2 + 0x1000),
> -   (0x001d43c2 + 0x1800),
> -};
> -
>  static int gmc_v9_0_ecc_interrupt_state(struct amdgpu_device *adev,
> struct amdgpu_irq_src *src,
> unsigned type,
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 37/40] drm/amd/amdgpu/gmc_v8_0: Fix more issues attributed to copy/paste

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:21 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:618: warning: Function parameter or 
> member 'flush_type' not described in 'gmc_v8_0_flush_gpu_tlb_pasid'
>  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:618: warning: Function parameter or 
> member 'all_hub' not described in 'gmc_v8_0_flush_gpu_tlb_pasid'
>  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:657: warning: Function parameter or 
> member 'vmhub' not described in 'gmc_v8_0_flush_gpu_tlb'
>  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:657: warning: Function parameter or 
> member 'flush_type' not described in 'gmc_v8_0_flush_gpu_tlb'
>  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c:998: warning: Function parameter or 
> member 'pasid' not described in 'gmc_v8_0_vm_decode_fault'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied with minor changes.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 5 +
>  1 file changed, 5 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> index 0f32a8002c3d7..41c1d8e812b88 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> @@ -609,6 +609,8 @@ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
>   *
>   * @adev: amdgpu_device pointer
>   * @pasid: pasid to be flush
> + * @flush_type: unused
> + * @all_hub: unused
>   *
>   * Flush the TLB for the requested pasid.
>   */
> @@ -649,6 +651,8 @@ static int gmc_v8_0_flush_gpu_tlb_pasid(struct 
> amdgpu_device *adev,
>   *
>   * @adev: amdgpu_device pointer
>   * @vmid: vm instance to flush
> + * @vmhub: unused
> + * @flush_type: unused
>   *
>   * Flush the TLB for the requested page table (VI).
>   */
> @@ -990,6 +994,7 @@ static void gmc_v8_0_gart_disable(struct amdgpu_device 
> *adev)
>   * @status: VM_CONTEXT1_PROTECTION_FAULT_STATUS register value
>   * @addr: VM_CONTEXT1_PROTECTION_FAULT_ADDR register value
>   * @mc_client: VM_CONTEXT1_PROTECTION_FAULT_MCCLIENT register value
> + * @pasid: debug logging only - no functional use
>   *
>   * Print human readable fault information (VI).
>   */
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 36/40] drm/amd/amdgpu/gmc_v7_0: Add some missing kernel-doc descriptions

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c:433: warning: Function parameter or 
> member 'flush_type' not described in 'gmc_v7_0_flush_gpu_tlb_pasid'
>  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c:433: warning: Function parameter or 
> member 'all_hub' not described in 'gmc_v7_0_flush_gpu_tlb_pasid'
>  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c:471: warning: Function parameter or 
> member 'vmhub' not described in 'gmc_v7_0_flush_gpu_tlb'
>  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c:471: warning: Function parameter or 
> member 'flush_type' not described in 'gmc_v7_0_flush_gpu_tlb'
>  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c:771: warning: Function parameter or 
> member 'pasid' not described in 'gmc_v7_0_vm_decode_fault'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied with minor changes.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c | 7 ++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
> index 80c146df338aa..fe71c89ecd26f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
> @@ -424,6 +424,8 @@ static int gmc_v7_0_mc_init(struct amdgpu_device *adev)
>   *
>   * @adev: amdgpu_device pointer
>   * @pasid: pasid to be flush
> + * @flush_type: unused
> + * @all_hub: unused
>   *
>   * Flush the TLB for the requested pasid.
>   */
> @@ -463,7 +465,9 @@ static int gmc_v7_0_flush_gpu_tlb_pasid(struct 
> amdgpu_device *adev,
>   *
>   * @adev: amdgpu_device pointer
>   * @vmid: vm instance to flush
> - *
> + * @vmhub: unused
> + * @flush_type: unused
> + * *
>   * Flush the TLB for the requested page table (CIK).
>   */
>  static void gmc_v7_0_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
> @@ -763,6 +767,7 @@ static void gmc_v7_0_gart_disable(struct amdgpu_device 
> *adev)
>   * @status: VM_CONTEXT1_PROTECTION_FAULT_STATUS register value
>   * @addr: VM_CONTEXT1_PROTECTION_FAULT_ADDR register value
>   * @mc_client: VM_CONTEXT1_PROTECTION_FAULT_MCCLIENT register value
> + * @pasid: debug logging only - no functional use
>   *
>   * Print human readable fault information (CIK).
>   */
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH v3 05/12] drm/ttm: Expose ttm_tt_unpopulate for driver use

2020-11-24 Thread Andrey Grodzovsky


On 11/24/20 2:41 AM, Christian König wrote:

Am 23.11.20 um 22:08 schrieb Andrey Grodzovsky:


On 11/23/20 3:41 PM, Christian König wrote:

Am 23.11.20 um 21:38 schrieb Andrey Grodzovsky:


On 11/23/20 3:20 PM, Christian König wrote:

Am 23.11.20 um 21:05 schrieb Andrey Grodzovsky:


On 11/25/20 5:42 AM, Christian König wrote:

Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:

It's needed to drop iommu backed pages on device unplug
before device's IOMMU group is released.


It would be cleaner if we could do the whole handling in TTM. I also 
need to double check what you are doing with this function.


Christian.



Check patch "drm/amdgpu: Register IOMMU topology notifier per device." to 
see
how i use it. I don't see why this should go into TTM mid-layer - the 
stuff I do inside

is vendor specific and also I don't think TTM is explicitly aware of IOMMU ?
Do you mean you prefer the IOMMU notifier to be registered from within TTM
and then use a hook to call into vendor specific handler ?


No, that is really vendor specific.

What I meant is to have a function like ttm_resource_manager_evict_all() 
which you only need to call and all tt objects are unpopulated.



So instead of this BO list i create and later iterate in amdgpu from the 
IOMMU patch you just want to do it within

TTM with a single function ? Makes much more sense.


Yes, exactly.

The list_empty() checks we have in TTM for the LRU are actually not the best 
idea, we should now check the pin_count instead. This way we could also have 
a list of the pinned BOs in TTM.



So from my IOMMU topology handler I will iterate the TTM LRU for the unpinned 
BOs and this new function for the pinned ones  ?
It's probably a good idea to combine both iterations into this new function 
to cover all the BOs allocated on the device.


Yes, that's what I had in my mind as well.






BTW: Have you thought about what happens when we unpopulate a BO while we 
still try to use a kernel mapping for it? That could have unforeseen 
consequences.



Are you asking what happens to kmap or vmap style mapped CPU accesses once we 
drop all the DMA backing pages for a particular BO ? Because for user mappings
(mmap) we took care of this with dummy page reroute but indeed nothing was 
done for in kernel CPU mappings.


Yes exactly that.

In other words what happens if we free the ring buffer while the kernel still 
writes to it?


Christian.



While we can't control user application accesses to the mapped buffers 
explicitly and hence we use page fault rerouting
I am thinking that in this  case we may be able to sprinkle drm_dev_enter/exit 
in any such sensitive place where we might
CPU access a DMA buffer from the kernel ? Things like CPU page table updates, 
ring buffer accesses and FW memcpy ? Is there other places ?
Another point is that at this point the driver shouldn't access any such buffers 
as we are at the process finishing the device.
AFAIK there is no page fault mechanism for kernel mappings so I don't think 
there is anything else to do ?


Andrey






Andrey




Christian.



Andrey




Give me a day or two to look into this.

Christian.



Andrey






Signed-off-by: Andrey Grodzovsky 
---
  drivers/gpu/drm/ttm/ttm_tt.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 1ccf1ef..29248a5 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -495,3 +495,4 @@ void ttm_tt_unpopulate(struct ttm_tt *ttm)
  else
  ttm_pool_unpopulate(ttm);
  }
+EXPORT_SYMBOL(ttm_tt_unpopulate);













Re: [PATCH 34/40] drm/amd/amdgpu/uvd_v4_2: Add one and remove another function param description

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:448: warning: Function parameter or 
> member 'flags' not described in 'uvd_v4_2_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:448: warning: Excess function 
> parameter 'fence' description in 'uvd_v4_2_ring_emit_fence'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c 
> b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
> index 2c8c35c3bca52..bf3d1c63739b8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
> @@ -439,7 +439,7 @@ static void uvd_v4_2_stop(struct amdgpu_device *adev)
>   * @ring: amdgpu_ring pointer
>   * @addr: address
>   * @seq: sequence number
> - * @fence: fence to emit
> + * @flags: fence related flags
>   *
>   * Write a fence and a trap command to the ring.
>   */
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 33/40] drm/amd/amdgpu/cik_sdma: Add one and remove another function param description

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:282: warning: Function parameter or 
> member 'flags' not described in 'cik_sdma_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:282: warning: Excess function 
> parameter 'fence' description in 'cik_sdma_ring_emit_fence'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Signed-off-by: Lee Jones 

Applied with minor changes.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c 
> b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> index f1e9966e7244e..28a64de8ae0e6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> @@ -271,7 +271,7 @@ static void cik_sdma_ring_emit_hdp_flush(struct 
> amdgpu_ring *ring)
>   * @ring: amdgpu ring pointer
>   * @addr: address
>   * @seq: sequence number
> - * @fence: amdgpu fence object
> + * @flags: fence related flags
>   *
>   * Add a DMA fence packet to the ring to write
>   * the fence seq number and DMA trap packet to generate
> @@ -279,7 +279,7 @@ static void cik_sdma_ring_emit_hdp_flush(struct 
> amdgpu_ring *ring)
>   */
>  static void cik_sdma_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 
> seq,
>  unsigned flags)
> -{
> +  {
> bool write64bit = flags & AMDGPU_FENCE_FLAG_64BIT;
> /* write the fence */
> amdgpu_ring_write(ring, SDMA_PACKET(SDMA_OPCODE_FENCE, 0, 0));
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 30/40] drm/amd/include/dimgrey_cavefish_ip_offset: Mark top-level IP_BASE as __maybe_unused

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
> In file included from 
> drivers/gpu/drm/amd/amdgpu/dimgrey_cavefish_reg_init.c:28:
> drivers/gpu/drm/amd/amdgpu/../include/dimgrey_cavefish_ip_offset.h:151:29: 
> warning: ‘UMC_BASE’ defined but not used [-Wunused-const-variable=]
> 151 | static const struct IP_BASE UMC_BASE = { { { { 0x00014000, 0x02425800, 
> 0, 0, 0, 0 } },
> | ^~~~
> drivers/gpu/drm/amd/amdgpu/../include/dimgrey_cavefish_ip_offset.h:81:29: 
> warning: ‘FUSE_BASE’ defined but not used [-Wunused-const-variable=]
> 81 | static const struct IP_BASE FUSE_BASE = { { { { 0x00017400, 0x02401400, 
> 0, 0, 0, 0 } },
> | ^
> drivers/gpu/drm/amd/amdgpu/../include/dimgrey_cavefish_ip_offset.h:74:29: 
> warning: ‘DPCS_BASE’ defined but not used [-Wunused-const-variable=]
> 74 | static const struct IP_BASE DPCS_BASE = { { { { 0x0012, 0x00C0, 
> 0x34C0, 0x9000, 0x02403C00, 0 } },
> | ^
>
> NB: Snipped lots of these
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Tao Zhou 
> Cc: Hawking Zhang 
> Cc: Jiansong Chen 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/include/dimgrey_cavefish_ip_offset.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/include/dimgrey_cavefish_ip_offset.h 
> b/drivers/gpu/drm/amd/include/dimgrey_cavefish_ip_offset.h
> index b41263de8a9b6..f84996a73de94 100644
> --- a/drivers/gpu/drm/amd/include/dimgrey_cavefish_ip_offset.h
> +++ b/drivers/gpu/drm/amd/include/dimgrey_cavefish_ip_offset.h
> @@ -33,7 +33,7 @@ struct IP_BASE_INSTANCE
>  struct IP_BASE
>  {
>  struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
> -};
> +} __maybe_unused;
>
>
>  static const struct IP_BASE ATHUB_BASE = { { { { 0x0C00, 0x02408C00, 0, 
> 0, 0, 0 } },
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 29/40] drm/amd/include/vangogh_ip_offset: Mark top-level IP_BASE as __maybe_unused

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  In file included from drivers/gpu/drm/amd/amdgpu/vangogh_reg_init.c:28:
>  drivers/gpu/drm/amd/amdgpu/../include/vangogh_ip_offset.h:210:29: warning: 
> ‘USB_BASE’ defined but not used [-Wunused-const-variable=]
>  210 | static const struct IP_BASE USB_BASE = { { { { 0x0242A800, 0x05B0, 
> 0, 0, 0, 0 } },
>  | ^~~~
>  drivers/gpu/drm/amd/amdgpu/../include/vangogh_ip_offset.h:202:29: warning: 
> ‘UMC_BASE’ defined but not used [-Wunused-const-variable=]
>  202 | static const struct IP_BASE UMC_BASE = { { { { 0x00014000, 0x02425800, 
> 0, 0, 0, 0 } },
>  | ^~~~
>  drivers/gpu/drm/amd/amdgpu/../include/vangogh_ip_offset.h:178:29: warning: 
> ‘PCIE0_BASE’ defined but not used [-Wunused-const-variable=]
>  178 | static const struct IP_BASE PCIE0_BASE = { { { { 0x0000, 
> 0x0014, 0x0D20, 0x00010400, 0x0241B000, 0x0404 } },
>  | ^~
>  drivers/gpu/drm/amd/amdgpu/../include/vangogh_ip_offset.h:154:29: warning: 
> ‘MP2_BASE’ defined but not used [-Wunused-const-variable=]
>  154 | static const struct IP_BASE MP2_BASE = { { { { 0x00016400, 0x02400800, 
> 0x00F4, 0x00F8, 0x00FC, 0 } },
>  | ^~~~
>
> NB: Snipped lots of these
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Huang Rui 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/include/vangogh_ip_offset.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/include/vangogh_ip_offset.h 
> b/drivers/gpu/drm/amd/include/vangogh_ip_offset.h
> index 2875574b060e6..691073ed780ec 100644
> --- a/drivers/gpu/drm/amd/include/vangogh_ip_offset.h
> +++ b/drivers/gpu/drm/amd/include/vangogh_ip_offset.h
> @@ -36,7 +36,7 @@ struct IP_BASE_INSTANCE
>  struct IP_BASE
>  {
>  struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
> -};
> +} __maybe_unused;
>
>
>  static const struct IP_BASE ACP_BASE = { { { { 0x02403800, 0x0048, 0, 0, 
> 0, 0 } },
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 28/40] drm/amd/include/sienna_cichlid_ip_offset: Mark top-level IP_BASE as __maybe_unused

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  In file included from 
> drivers/gpu/drm/amd/amdgpu/sienna_cichlid_reg_init.c:28:
>  drivers/gpu/drm/amd/amdgpu/../include/sienna_cichlid_ip_offset.h:186:29: 
> warning: ‘USB0_BASE’ defined but not used [-Wunused-const-variable=]
>  186 | static const struct IP_BASE USB0_BASE = { { { { 0x0242A800, 
> 0x05B0, 0, 0, 0 } },
>  | ^
>  drivers/gpu/drm/amd/amdgpu/../include/sienna_cichlid_ip_offset.h:179:29: 
> warning: ‘UMC_BASE’ defined but not used [-Wunused-const-variable=]
>  179 | static const struct IP_BASE UMC_BASE = { { { { 0x00014000, 0x02425800, 
> 0, 0, 0 } },
>  | ^~~~
>  drivers/gpu/drm/amd/amdgpu/../include/sienna_cichlid_ip_offset.h:158:29: 
> warning: ‘SDMA1_BASE’ defined but not used [-Wunused-const-variable=]
>  158 | static const struct IP_BASE SDMA1_BASE = { { { { 0x1260, 
> 0xA000, 0x0001C000, 0x02402C00, 0 } },
>  | ^~
>
> NB: Snipped lots of these
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Hawking Zhang 
> Cc: Likun Gao 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/include/sienna_cichlid_ip_offset.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/include/sienna_cichlid_ip_offset.h 
> b/drivers/gpu/drm/amd/include/sienna_cichlid_ip_offset.h
> index 06800c6fa0495..b07bc2dd895dc 100644
> --- a/drivers/gpu/drm/amd/include/sienna_cichlid_ip_offset.h
> +++ b/drivers/gpu/drm/amd/include/sienna_cichlid_ip_offset.h
> @@ -33,7 +33,7 @@ struct IP_BASE_INSTANCE
>  struct IP_BASE
>  {
>  struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
> -};
> +} __maybe_unused;
>
>
>  static const struct IP_BASE ATHUB_BASE = { { { { 0x0C00, 0x02408C00, 0, 
> 0, 0 } },
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 27/40] drm/amd/include/navi12_ip_offset: Mark top-level IP_BASE as __maybe_unused

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  In file included from drivers/gpu/drm/amd/amdgpu/navi12_reg_init.c:27:
>  drivers/gpu/drm/amd/amdgpu/../include/navi12_ip_offset.h:179:29: warning: 
> ‘USB0_BASE’ defined but not used [-Wunused-const-variable=]
>  179 | static const struct IP_BASE USB0_BASE ={ { { { 0x0242A800, 0x05B0, 
> 0, 0, 0 } },
>  | ^
>  drivers/gpu/drm/amd/amdgpu/../include/navi12_ip_offset.h:172:29: warning: 
> ‘UMC_BASE’ defined but not used [-Wunused-const-variable=]
>  172 | static const struct IP_BASE UMC_BASE ={ { { { 0x00014000, 0x02425800, 
> 0, 0, 0 } },
>  | ^~~~
>  drivers/gpu/drm/amd/amdgpu/../include/navi12_ip_offset.h:151:29: warning: 
> ‘SDMA_BASE’ defined but not used [-Wunused-const-variable=]
>  151 | static const struct IP_BASE SDMA_BASE ={ { { { 0x1260, 0xA000, 
> 0x02402C00, 0, 0 } },
>  | ^
>
> NB: Snipped a few of these
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/include/navi12_ip_offset.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/include/navi12_ip_offset.h 
> b/drivers/gpu/drm/amd/include/navi12_ip_offset.h
> index 6c2cc6296c061..d8fc00478b6a0 100644
> --- a/drivers/gpu/drm/amd/include/navi12_ip_offset.h
> +++ b/drivers/gpu/drm/amd/include/navi12_ip_offset.h
> @@ -33,7 +33,7 @@ struct IP_BASE_INSTANCE
>  struct IP_BASE
>  {
>  struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
> -};
> +} __maybe_unused;
>
>
>  static const struct IP_BASE ATHUB_BASE ={ { { { 0x0C00, 0x02408C00, 0, 
> 0, 0 } },
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 26/40] drm/amd/include/navi14_ip_offset: Mark top-level IP_BASE as __maybe_unused

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  In file included from drivers/gpu/drm/amd/amdgpu/navi14_reg_init.c:27:
>  drivers/gpu/drm/amd/amdgpu/../include/navi14_ip_offset.h:179:29: warning: 
> ‘USB0_BASE’ defined but not used [-Wunused-const-variable=]
>  179 | static const struct IP_BASE USB0_BASE ={ { { { 0x0242A800, 0x05B0, 
> 0, 0, 0 } },
>  | ^
>  drivers/gpu/drm/amd/amdgpu/../include/navi14_ip_offset.h:172:29: warning: 
> ‘UMC_BASE’ defined but not used [-Wunused-const-variable=]
>  172 | static const struct IP_BASE UMC_BASE ={ { { { 0x00014000, 0x02425800, 
> 0, 0, 0 } },
>  | ^~~~
>  drivers/gpu/drm/amd/amdgpu/../include/navi14_ip_offset.h:151:29: warning: 
> ‘SDMA_BASE’ defined but not used [-Wunused-const-variable=]
>  151 | static const struct IP_BASE SDMA_BASE ={ { { { 0x1260, 0xA000, 
> 0x02402C00, 0, 0 } },
>  | ^
>
> NB: Snipped a few of these
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/include/navi14_ip_offset.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/include/navi14_ip_offset.h 
> b/drivers/gpu/drm/amd/include/navi14_ip_offset.h
> index ecdd9eabe0dc8..c39ef651adc6f 100644
> --- a/drivers/gpu/drm/amd/include/navi14_ip_offset.h
> +++ b/drivers/gpu/drm/amd/include/navi14_ip_offset.h
> @@ -33,7 +33,7 @@ struct IP_BASE_INSTANCE
>  struct IP_BASE
>  {
>  struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
> -};
> +} __maybe_unused;
>
>
>  static const struct IP_BASE ATHUB_BASE ={ { { { 0x0C00, 0x02408C00, 0, 
> 0, 0 } },
> --
> 2.25.1
>


Re: [PATCH 25/40] drm/amd/include/arct_ip_offset: Mark top-level IP_BASE definition as __maybe_unused

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  In file included from drivers/gpu/drm/amd/amdgpu/arct_reg_init.c:27:
>  drivers/gpu/drm/amd/amdgpu/../include/arct_ip_offset.h:227:29: warning: 
> ‘DBGU_IO_BASE’ defined but not used [-Wunused-const-variable=]
>  227 | static const struct IP_BASE DBGU_IO_BASE ={ { { { 0x01E0, 
> 0x000125A0, 0x0040B400, 0, 0, 0 } },
>  | ^~~~
>  drivers/gpu/drm/amd/amdgpu/../include/arct_ip_offset.h:127:29: warning: 
> ‘PCIE0_BASE’ defined but not used [-Wunused-const-variable=]
>  127 | static const struct IP_BASE PCIE0_BASE ={ { { { 0x000128C0, 
> 0x00411800, 0x0444, 0, 0, 0 } },
>  | ^~
>  In file included from drivers/gpu/drm/amd/amdgpu/arct_reg_init.c:27:
>  drivers/gpu/drm/amd/amdgpu/../include/arct_ip_offset.h:63:29: warning: 
> ‘FUSE_BASE’ defined but not used [-Wunused-const-variable=]
>  63 | static const struct IP_BASE FUSE_BASE ={ { { { 0x000120A0, 0x00017400, 
> 0x00401400, 0, 0, 0 } },
>  | ^
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/include/arct_ip_offset.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/include/arct_ip_offset.h 
> b/drivers/gpu/drm/amd/include/arct_ip_offset.h
> index a7791a9e1f905..af1c46991429b 100644
> --- a/drivers/gpu/drm/amd/include/arct_ip_offset.h
> +++ b/drivers/gpu/drm/amd/include/arct_ip_offset.h
> @@ -28,12 +28,12 @@
>  struct IP_BASE_INSTANCE
>  {
>  unsigned int segment[MAX_SEGMENT];
> -};
> +} __maybe_unused;
>
>  struct IP_BASE
>  {
>  struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
> -};
> +} __maybe_unused;
>
>
>  static const struct IP_BASE ATHUB_BASE={ { { { 0x0C20, 
> 0x00012460, 0x00408C00, 0, 0, 0 } },
> --
> 2.25.1
>


Re: [PATCH 24/40] drm/amd/include/navi10_ip_offset: Mark top-level IP_BASE as __maybe_unused

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  In file included from drivers/gpu/drm/amd/amdgpu/navi10_reg_init.c:27:
>  drivers/gpu/drm/amd/amdgpu/../include/navi10_ip_offset.h:127:29: warning: 
> ‘UMC_BASE’ defined but not used [-Wunused-const-variable=]
>  127 | static const struct IP_BASE UMC_BASE ={ { { { 0x00014000, 0, 0, 0, 0, 
> 0 } },
>  | ^~~~
>  drivers/gpu/drm/amd/amdgpu/../include/navi10_ip_offset.h:109:29: warning: 
> ‘RSMU_BASE’ defined but not used [-Wunused-const-variable=]
>  109 | static const struct IP_BASE RSMU_BASE = { { { { 0x00012000, 0, 0, 0, 
> 0, 0 } },
>  | ^
>  drivers/gpu/drm/amd/amdgpu/../include/navi10_ip_offset.h:61:29: warning: 
> ‘FUSE_BASE’ defined but not used [-Wunused-const-variable=]
>  61 | static const struct IP_BASE FUSE_BASE ={ { { { 0x00017400, 0, 0, 0, 0, 
> 0 } },
>  | ^
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/include/navi10_ip_offset.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/include/navi10_ip_offset.h 
> b/drivers/gpu/drm/amd/include/navi10_ip_offset.h
> index d4a9ddc7782ff..d6824bb6139db 100644
> --- a/drivers/gpu/drm/amd/include/navi10_ip_offset.h
> +++ b/drivers/gpu/drm/amd/include/navi10_ip_offset.h
> @@ -31,7 +31,7 @@ struct IP_BASE_INSTANCE {
>
>  struct IP_BASE {
> struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
> -};
> +} __maybe_unused;
>
>
>  static const struct IP_BASE ATHUB_BASE={ { { { 0x0C00, 0, 0, 
> 0, 0, 0 } },
> --
> 2.25.1
>


Re: [PATCH 23/40] drm/amd/include/vega20_ip_offset: Mark top-level IP_BASE definition as __maybe_unused

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  In file included from drivers/gpu/drm/amd/amdgpu/vega20_reg_init.c:27:
>  drivers/gpu/drm/amd/amdgpu/../include/vega20_ip_offset.h:154:29: warning: 
> ‘XDMA_BASE’ defined but not used [-Wunused-const-variable=]
>  154 | static const struct IP_BASE XDMA_BASE ={ { { { 0x3400, 0, 0, 0, 0, 
> 0 } },
>  | ^
>  drivers/gpu/drm/amd/amdgpu/../include/vega20_ip_offset.h:63:29: warning: 
> ‘FUSE_BASE’ defined but not used [-Wunused-const-variable=]
>  63 | static const struct IP_BASE FUSE_BASE ={ { { { 0x00017400, 0, 0, 0, 0, 
> 0 } },
>  | ^
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/include/vega20_ip_offset.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/include/vega20_ip_offset.h 
> b/drivers/gpu/drm/amd/include/vega20_ip_offset.h
> index 2a2a9cc8bedb6..1deb68f3d3341 100644
> --- a/drivers/gpu/drm/amd/include/vega20_ip_offset.h
> +++ b/drivers/gpu/drm/amd/include/vega20_ip_offset.h
> @@ -33,7 +33,7 @@ struct IP_BASE_INSTANCE
>  struct IP_BASE
>  {
>  struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
> -};
> +} __maybe_unused;
>
>
>  static const struct IP_BASE ATHUB_BASE={ { { { 0x0C20, 0, 0, 
> 0, 0, 0 } },
> --
> 2.25.1
>


Re: [PATCH 22/40] drm/amd/amdgpu/dce_v6_0: Fix formatting and missing parameter description issues

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/dce_v6_0.c:192: warning: Function parameter or 
> member 'async' not described in 'dce_v6_0_page_flip'
>  drivers/gpu/drm/amd/amdgpu/dce_v6_0.c:1050: warning: Cannot understand  *
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Luben Tuikov 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/dce_v6_0.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c 
> b/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
> index 9439763493464..83a88385b7620 100644
> --- a/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
> @@ -180,6 +180,7 @@ static void dce_v6_0_pageflip_interrupt_fini(struct 
> amdgpu_device *adev)
>   * @adev: amdgpu_device pointer
>   * @crtc_id: crtc to cleanup pageflip on
>   * @crtc_base: new address of the crtc (GPU MC address)
> + * @async: asynchronous flip
>   *
>   * Does the actual pageflip (evergreen+).
>   * During vblank we take the crtc lock and wait for the update_pending
> @@ -1047,7 +1048,6 @@ static u32 dce_v6_0_line_buffer_adjust(struct 
> amdgpu_device *adev,
>
>
>  /**
> - *
>   * dce_v6_0_bandwidth_update - program display watermarks
>   *
>   * @adev: amdgpu_device pointer
> --
> 2.25.1
>


Re: [PATCH 21/40] drm/amd/amdgpu/uvd_v3_1: Fix-up some documentation issues

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c:91: warning: Function parameter or 
> member 'job' not described in 'uvd_v3_1_ring_emit_ib'
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c:91: warning: Function parameter or 
> member 'flags' not described in 'uvd_v3_1_ring_emit_ib'
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c:108: warning: Function parameter or 
> member 'addr' not described in 'uvd_v3_1_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c:108: warning: Function parameter or 
> member 'seq' not described in 'uvd_v3_1_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c:108: warning: Function parameter or 
> member 'flags' not described in 'uvd_v3_1_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c:108: warning: Excess function 
> parameter 'fence' description in 'uvd_v3_1_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c:625: warning: Function parameter or 
> member 'handle' not described in 'uvd_v3_1_hw_init'
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c:625: warning: Excess function 
> parameter 'adev' description in 'uvd_v3_1_hw_init'
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c:692: warning: Function parameter or 
> member 'handle' not described in 'uvd_v3_1_hw_fini'
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c:692: warning: Excess function 
> parameter 'adev' description in 'uvd_v3_1_hw_fini'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Sonny Jiang 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c | 10 +++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c 
> b/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
> index 7cf4b11a65c5c..143ba7a41f41f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
> @@ -80,7 +80,9 @@ static void uvd_v3_1_ring_set_wptr(struct amdgpu_ring *ring)
>   * uvd_v3_1_ring_emit_ib - execute indirect buffer
>   *
>   * @ring: amdgpu_ring pointer
> + * @job: unused
>   * @ib: indirect buffer to execute
> + * @flags: unused
>   *
>   * Write ring commands to execute the indirect buffer
>   */
> @@ -99,7 +101,9 @@ static void uvd_v3_1_ring_emit_ib(struct amdgpu_ring *ring,
>   * uvd_v3_1_ring_emit_fence - emit an fence & trap command
>   *
>   * @ring: amdgpu_ring pointer
> - * @fence: fence to emit
> + * @addr: address
> + * @seq: sequence number
> + * @flags: fence related flags
>   *
>   * Write a fence and a trap command to the ring.
>   */
> @@ -617,7 +621,7 @@ static void uvd_v3_1_enable_mgcg(struct amdgpu_device 
> *adev,
>  /**
>   * uvd_v3_1_hw_init - start and test UVD block
>   *
> - * @adev: amdgpu_device pointer
> + * @handle: handle used to pass amdgpu_device pointer
>   *
>   * Initialize the hardware, boot up the VCPU and do some testing
>   */
> @@ -684,7 +688,7 @@ static int uvd_v3_1_hw_init(void *handle)
>  /**
>   * uvd_v3_1_hw_fini - stop the hardware block
>   *
> - * @adev: amdgpu_device pointer
> + * @handle: handle used to pass amdgpu_device pointer
>   *
>   * Stop the UVD block, mark ring as not ready any more
>   */
> --
> 2.25.1
>


Re: [PATCH 17/40] drm/amd/amdgpu/gfx_v6_0: Supply description for 'gfx_v6_0_ring_test_ib()'s 'timeout' param

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:19 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c:1903: warning: Function parameter or 
> member 'timeout' not described in 'gfx_v6_0_ring_test_ib'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
> index 671c46ebeced9..ca74638dec9b7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
> @@ -1894,6 +1894,7 @@ static void gfx_v6_0_ring_emit_ib(struct amdgpu_ring 
> *ring,
>   * gfx_v6_0_ring_test_ib - basic ring IB test
>   *
>   * @ring: amdgpu_ring structure holding ring information
> + * @timeout: timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
>   *
>   * Allocate an IB and execute it on the gfx ring (SI).
>   * Provides a basic gfx ring test to verify that IBs are working.
> --
> 2.25.1
>


Re: [PATCH 16/40] drm/amd/amdgpu/si_dma: Fix a bunch of function documentation issues

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:19 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:92: warning: Function parameter or 
> member 'addr' not described in 'si_dma_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:92: warning: Function parameter or 
> member 'seq' not described in 'si_dma_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:92: warning: Function parameter or 
> member 'flags' not described in 'si_dma_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:92: warning: Excess function parameter 
> 'fence' description in 'si_dma_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:252: warning: Function parameter or 
> member 'timeout' not described in 'si_dma_ring_test_ib'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:408: warning: Function parameter or 
> member 'ring' not described in 'si_dma_ring_pad_ib'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:446: warning: Function parameter or 
> member 'vmid' not described in 'si_dma_ring_emit_vm_flush'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:446: warning: Function parameter or 
> member 'pd_addr' not described in 'si_dma_ring_emit_vm_flush'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:446: warning: Excess function parameter 
> 'vm' description in 'si_dma_ring_emit_vm_flush'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:781: warning: Function parameter or 
> member 'ib' not described in 'si_dma_emit_copy_buffer'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:781: warning: Function parameter or 
> member 'tmz' not described in 'si_dma_emit_copy_buffer'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:781: warning: Excess function parameter 
> 'ring' description in 'si_dma_emit_copy_buffer'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:804: warning: Function parameter or 
> member 'ib' not described in 'si_dma_emit_fill_buffer'
>  drivers/gpu/drm/amd/amdgpu/si_dma.c:804: warning: Excess function parameter 
> 'ring' description in 'si_dma_emit_fill_buffer'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Signed-off-by: Lee Jones 

Applied with minor changes.  Thanks!

Alex


> ---
>  drivers/gpu/drm/amd/amdgpu/si_dma.c | 14 ++
>  1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c 
> b/drivers/gpu/drm/amd/amdgpu/si_dma.c
> index 7d2bbcbe547b2..540dced190f33 100644
> --- a/drivers/gpu/drm/amd/amdgpu/si_dma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c
> @@ -81,7 +81,9 @@ static void si_dma_ring_emit_ib(struct amdgpu_ring *ring,
>   * si_dma_ring_emit_fence - emit a fence on the DMA ring
>   *
>   * @ring: amdgpu ring pointer
> - * @fence: amdgpu fence object
> + * @addr: address
> + * @seq: sequence number
> + * @flags: fence related flags
>   *
>   * Add a DMA fence packet to the ring to write
>   * the fence seq number and DMA trap packet to generate
> @@ -244,6 +246,7 @@ static int si_dma_ring_test_ring(struct amdgpu_ring *ring)
>   * si_dma_ring_test_ib - test an IB on the DMA engine
>   *
>   * @ring: amdgpu_ring structure holding ring information
> + * @timeout: timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
>   *
>   * Test a simple IB in the DMA ring (VI).
>   * Returns 0 on success, error on failure.
> @@ -401,6 +404,7 @@ static void si_dma_vm_set_pte_pde(struct amdgpu_ib *ib,
>  /**
>   * si_dma_pad_ib - pad the IB to the required number of dw
>   *
> + * @ring: amdgpu_ring pointer
>   * @ib: indirect buffer to fill with padding
>   *
>   */
> @@ -436,7 +440,8 @@ static void si_dma_ring_emit_pipeline_sync(struct 
> amdgpu_ring *ring)
>   * si_dma_ring_emit_vm_flush - cik vm flush using sDMA
>   *
>   * @ring: amdgpu_ring pointer
> - * @vm: amdgpu_vm pointer
> + * @vmid: vmid number to use
> + * @pd_addr: address
>   *
>   * Update the page table base and flush the VM TLB
>   * using sDMA (VI).
> @@ -764,10 +769,11 @@ static void si_dma_set_irq_funcs(struct amdgpu_device 
> *adev)
>  /**
>   * si_dma_emit_copy_buffer - copy buffer using the sDMA engine
>   *
> - * @ring: amdgpu_ring structure holding ring information
> + * @ib: indirect buffer to copy to
>   * @src_offset: src GPU address
>   * @dst_offset: dst GPU address
>   * @byte_count: number of bytes to xfer
> + * @tmz: unused
>   *
>   * Copy GPU buffers using the DMA engine (VI).
>   * Used by the amdgpu ttm implementation to move pages if
> @@ -790,7 +796,7 @@ static void si_dma_emit_copy_buffer(struct amdgpu_ib *ib,
>  /**
>   * si_dma_emit_fill_buffer - fill buffer using the sDMA engine
>   *
> - * @ring: amdgpu_ring structure holding ring information
> + * @ib: indirect buffer to copy to
>   * @src_data: value to write to buffer
>   * @dst_offset: dst GPU address
>   * @byte_count: number of bytes to xfer
> --
> 2.25.1
>

Re: [PATCH 09/40] drm/amd/amdgpu/gfx_v7_0: Clean-up a bunch of kernel-doc related issues

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:1590: warning: Function parameter or 
> member 'instance' not described in 'gfx_v7_0_select_se_sh'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:1788: warning: Excess function 
> parameter 'se_num' description in 'gfx_v7_0_setup_rb'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:1788: warning: Excess function 
> parameter 'sh_per_se' description in 'gfx_v7_0_setup_rb'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:1852: warning: Excess function 
> parameter 'adev' description in 'DEFAULT_SH_MEM_BASES'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2086: warning: Excess function 
> parameter 'adev' description in 'gfx_v7_0_ring_test_ring'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2130: warning: Function parameter or 
> member 'ring' not described in 'gfx_v7_0_ring_emit_hdp_flush'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2130: warning: Excess function 
> parameter 'adev' description in 'gfx_v7_0_ring_emit_hdp_flush'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2130: warning: Excess function 
> parameter 'ridx' description in 'gfx_v7_0_ring_emit_hdp_flush'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2182: warning: Function parameter or 
> member 'ring' not described in 'gfx_v7_0_ring_emit_fence_gfx'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2182: warning: Function parameter or 
> member 'addr' not described in 'gfx_v7_0_ring_emit_fence_gfx'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2182: warning: Function parameter or 
> member 'seq' not described in 'gfx_v7_0_ring_emit_fence_gfx'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2182: warning: Function parameter or 
> member 'flags' not described in 'gfx_v7_0_ring_emit_fence_gfx'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2182: warning: Excess function 
> parameter 'adev' description in 'gfx_v7_0_ring_emit_fence_gfx'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2182: warning: Excess function 
> parameter 'fence' description in 'gfx_v7_0_ring_emit_fence_gfx'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2224: warning: Function parameter or 
> member 'ring' not described in 'gfx_v7_0_ring_emit_fence_compute'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2224: warning: Function parameter or 
> member 'addr' not described in 'gfx_v7_0_ring_emit_fence_compute'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2224: warning: Function parameter or 
> member 'seq' not described in 'gfx_v7_0_ring_emit_fence_compute'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2224: warning: Function parameter or 
> member 'flags' not described in 'gfx_v7_0_ring_emit_fence_compute'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2224: warning: Excess function 
> parameter 'adev' description in 'gfx_v7_0_ring_emit_fence_compute'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2224: warning: Excess function 
> parameter 'fence' description in 'gfx_v7_0_ring_emit_fence_compute'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2260: warning: Function parameter or 
> member 'job' not described in 'gfx_v7_0_ring_emit_ib_gfx'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2260: warning: Function parameter or 
> member 'flags' not described in 'gfx_v7_0_ring_emit_ib_gfx'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:2351: warning: Function parameter or 
> member 'timeout' not described in 'gfx_v7_0_ring_test_ib'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:3244: warning: Function parameter or 
> member 'ring' not described in 'gfx_v7_0_ring_emit_vm_flush'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:3244: warning: Function parameter or 
> member 'vmid' not described in 'gfx_v7_0_ring_emit_vm_flush'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:3244: warning: Function parameter or 
> member 'pd_addr' not described in 'gfx_v7_0_ring_emit_vm_flush'
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c:3244: warning: Excess function 
> parameter 'adev' description in 'gfx_v7_0_ring_emit_vm_flush'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex


> ---
>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 33 +++
>  1 file changed, 19 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
> index 04e1e92f5f3cf..f2490f915a8be 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
> @@ -1580,10 +1580,10 @@ static void gfx_v7_0_tiling_mode_table_init(struct 
> amdgpu_device *adev)
>   * @adev: amdgpu_device pointer
>   * @se_num: shader engine to address
>   * @sh_num: sh block to address
> + * @instance: Certain registers are instanced per SE or SH.
> + *0x means broadcast to all SEs or SHs (CIK).
>   *
> - * Select which SE, SH combinations to address. Certain
> - * registers are 

Re: [PATCH 08/40] drm/amd/amdgpu/cik_sdma: Supply some missing function param descriptions

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:19 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:226: warning: Function parameter or 
> member 'job' not described in 'cik_sdma_ring_emit_ib'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:226: warning: Function parameter or 
> member 'flags' not described in 'cik_sdma_ring_emit_ib'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:278: warning: Function parameter or 
> member 'addr' not described in 'cik_sdma_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:278: warning: Function parameter or 
> member 'seq' not described in 'cik_sdma_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:278: warning: Function parameter or 
> member 'flags' not described in 'cik_sdma_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:278: warning: Excess function 
> parameter 'fence' description in 'cik_sdma_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:663: warning: Function parameter or 
> member 'timeout' not described in 'cik_sdma_ring_test_ib'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:808: warning: Function parameter or 
> member 'ring' not described in 'cik_sdma_ring_pad_ib'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:859: warning: Function parameter or 
> member 'vmid' not described in 'cik_sdma_ring_emit_vm_flush'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:859: warning: Function parameter or 
> member 'pd_addr' not described in 'cik_sdma_ring_emit_vm_flush'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:859: warning: Excess function 
> parameter 'vm' description in 'cik_sdma_ring_emit_vm_flush'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:1315: warning: Function parameter or 
> member 'ib' not described in 'cik_sdma_emit_copy_buffer'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:1315: warning: Function parameter or 
> member 'tmz' not described in 'cik_sdma_emit_copy_buffer'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:1315: warning: Excess function 
> parameter 'ring' description in 'cik_sdma_emit_copy_buffer'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:1339: warning: Function parameter or 
> member 'ib' not described in 'cik_sdma_emit_fill_buffer'
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c:1339: warning: Excess function 
> parameter 'ring' description in 'cik_sdma_emit_fill_buffer'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Signed-off-by: Lee Jones 

Applied with minor changes.  Thanks!

Alex


> ---
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 14 +++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c 
> b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> index 1a6494ea50912..f1e9966e7244e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> @@ -215,7 +215,9 @@ static void cik_sdma_ring_insert_nop(struct amdgpu_ring 
> *ring, uint32_t count)
>   * cik_sdma_ring_emit_ib - Schedule an IB on the DMA engine
>   *
>   * @ring: amdgpu ring pointer
> + * @job: job to retrieve vmid from
>   * @ib: IB object to schedule
> + * @flags: unused
>   *
>   * Schedule an IB in the DMA ring (CIK).
>   */
> @@ -267,6 +269,8 @@ static void cik_sdma_ring_emit_hdp_flush(struct 
> amdgpu_ring *ring)
>   * cik_sdma_ring_emit_fence - emit a fence on the DMA ring
>   *
>   * @ring: amdgpu ring pointer
> + * @addr: address
> + * @seq: sequence number
>   * @fence: amdgpu fence object
>   *
>   * Add a DMA fence packet to the ring to write
> @@ -655,6 +659,7 @@ static int cik_sdma_ring_test_ring(struct amdgpu_ring 
> *ring)
>   * cik_sdma_ring_test_ib - test an IB on the DMA engine
>   *
>   * @ring: amdgpu_ring structure holding ring information
> + * @timeout: timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
>   *
>   * Test a simple IB in the DMA ring (CIK).
>   * Returns 0 on success, error on failure.
> @@ -801,6 +806,7 @@ static void cik_sdma_vm_set_pte_pde(struct amdgpu_ib *ib, 
> uint64_t pe,
>  /**
>   * cik_sdma_vm_pad_ib - pad the IB to the required number of dw
>   *
> + * @ring: amdgpu_ring structure holding ring information
>   * @ib: indirect buffer to fill with padding
>   *
>   */
> @@ -849,7 +855,8 @@ static void cik_sdma_ring_emit_pipeline_sync(struct 
> amdgpu_ring *ring)
>   * cik_sdma_ring_emit_vm_flush - cik vm flush using sDMA
>   *
>   * @ring: amdgpu_ring pointer
> - * @vm: amdgpu_vm pointer
> + * @vmid: vmid number to use
> + * @pd_addr: address
>   *
>   * Update the page table base and flush the VM TLB
>   * using sDMA (CIK).
> @@ -1298,10 +1305,11 @@ static void cik_sdma_set_irq_funcs(struct 
> amdgpu_device *adev)
>  /**
>   * cik_sdma_emit_copy_buffer - copy buffer using the sDMA engine
>   *
> - * @ring: amdgpu_ring structure holding ring information
> + * @ib: indirect buffer to copy to
>   * @src_offset: src GPU 

Re: [PATCH 4/4] drm/amd/display: don't expose rotation prop for cursor plane

2020-11-24 Thread Simon Ser
On Tuesday, November 24, 2020 4:46 PM, Alex Deucher  
wrote:

> On Fri, Nov 20, 2020 at 3:19 PM Simon Ser cont...@emersion.fr wrote:
>
> > Setting any rotation on the cursor plane is ignored by amdgpu.
> > Because of DCE/DCN design, it's not possible to rotate the cursor.
> > Instead of displaying the wrong result, stop advertising the rotation
> > property for the cursor plane.
> > Now that we check all cursor plane properties in amdgpu_dm_atomic_check,
> > remove the TODO.
> > Signed-off-by: Simon Ser cont...@emersion.fr
> > Cc: Alex Deucher alexander.deuc...@amd.com
> > Cc: Harry Wentland hwent...@amd.com
> > Cc: Nicholas Kazlauskas nicholas.kazlaus...@amd.com
>
> Applied the series. Thanks!

Thanks a lot for the review!

Simon


Re: [PATCH 07/40] drm/amd/amdgpu/dce_v8_0: Supply description for 'async'

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:19 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/dce_v8_0.c:185: warning: Function parameter or 
> member 'async' not described in 'dce_v8_0_page_flip'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Luben Tuikov 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex


> ---
>  drivers/gpu/drm/amd/amdgpu/dce_v8_0.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c 
> b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
> index 7973183fa335e..224b30214427f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
> @@ -176,6 +176,7 @@ static void dce_v8_0_pageflip_interrupt_fini(struct 
> amdgpu_device *adev)
>   * @adev: amdgpu_device pointer
>   * @crtc_id: crtc to cleanup pageflip on
>   * @crtc_base: new address of the crtc (GPU MC address)
> + * @async: asynchronous flip
>   *
>   * Triggers the actual pageflip by updating the primary
>   * surface base address.
> --
> 2.25.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 06/40] drm/amd/amdgpu/uvd_v4_2: Fix some kernel-doc misdemeanours

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:19 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:157: warning: Function parameter or 
> member 'handle' not described in 'uvd_v4_2_hw_init'
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:157: warning: Excess function 
> parameter 'adev' description in 'uvd_v4_2_hw_init'
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:212: warning: Function parameter or 
> member 'handle' not described in 'uvd_v4_2_hw_fini'
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:212: warning: Excess function 
> parameter 'adev' description in 'uvd_v4_2_hw_fini'
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:446: warning: Function parameter or 
> member 'addr' not described in 'uvd_v4_2_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:446: warning: Function parameter or 
> member 'seq' not described in 'uvd_v4_2_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:446: warning: Function parameter or 
> member 'flags' not described in 'uvd_v4_2_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:446: warning: Excess function 
> parameter 'fence' description in 'uvd_v4_2_ring_emit_fence'
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:513: warning: Function parameter or 
> member 'job' not described in 'uvd_v4_2_ring_emit_ib'
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c:513: warning: Function parameter or 
> member 'flags' not described in 'uvd_v4_2_ring_emit_ib'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied with minor modifications.

Thanks!

Alex


> ---
>  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c 
> b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
> index b0c0c438fc93c..2c8c35c3bca52 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
> @@ -149,7 +149,7 @@ static void uvd_v4_2_enable_mgcg(struct amdgpu_device 
> *adev,
>  /**
>   * uvd_v4_2_hw_init - start and test UVD block
>   *
> - * @adev: amdgpu_device pointer
> + * @handle: handle used to pass amdgpu_device pointer
>   *
>   * Initialize the hardware, boot up the VCPU and do some testing
>   */
> @@ -204,7 +204,7 @@ static int uvd_v4_2_hw_init(void *handle)
>  /**
>   * uvd_v4_2_hw_fini - stop the hardware block
>   *
> - * @adev: amdgpu_device pointer
> + * @handle: handle used to pass amdgpu_device pointer
>   *
>   * Stop the UVD block, mark ring as not ready any more
>   */
> @@ -437,6 +437,8 @@ static void uvd_v4_2_stop(struct amdgpu_device *adev)
>   * uvd_v4_2_ring_emit_fence - emit an fence & trap command
>   *
>   * @ring: amdgpu_ring pointer
> + * @addr: address
> + * @seq: sequence number
>   * @fence: fence to emit
>   *
>   * Write a fence and a trap command to the ring.
> @@ -502,7 +504,9 @@ static int uvd_v4_2_ring_test_ring(struct amdgpu_ring 
> *ring)
>   * uvd_v4_2_ring_emit_ib - execute indirect buffer
>   *
>   * @ring: amdgpu_ring pointer
> + * @job: unused
>   * @ib: indirect buffer to execute
> + * @flags: unused
>   *
>   * Write ring commands to execute the indirect buffer
>   */
> --
> 2.25.1
>


[PATCH] drm/amd/display: Extends Tune min clk for MPO for RV

2020-11-24 Thread Pratik Vishwakarma
[Why]
Changes in video resolution during playback cause
dispclk to ramp higher but set incompatible fclk
and dcfclk values for MPO.

[How]
Check for MPO and set proper min clk values
for this case as well. This was missed in the
previous patch.

Signed-off-by: Pratik Vishwakarma 
---
 .../display/dc/clk_mgr/dcn10/rv1_clk_mgr.c| 19 ---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
index 75b8240ed059..ed087a9e73bb 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c
@@ -275,9 +275,22 @@ static void rv1_update_clocks(struct clk_mgr *clk_mgr_base,
if (pp_smu->set_hard_min_fclk_by_freq &&
pp_smu->set_hard_min_dcfclk_by_freq &&
pp_smu->set_min_deep_sleep_dcfclk) {
-		pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu, new_clocks->fclk_khz / 1000);
-		pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu, new_clocks->dcfclk_khz / 1000);
-		pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu, (new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		// Only increase clocks when display is active and MPO is enabled
+		if (display_count && is_mpo_enabled(context)) {
+			pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu,
+					((new_clocks->fclk_khz / 1000) * 101) / 100);
+			pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu,
+					((new_clocks->dcfclk_khz / 1000) * 101) / 100);
+			pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu,
+					(new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		} else {
+			pp_smu->set_hard_min_fclk_by_freq(&pp_smu->pp_smu,
+					new_clocks->fclk_khz / 1000);
+			pp_smu->set_hard_min_dcfclk_by_freq(&pp_smu->pp_smu,
+					new_clocks->dcfclk_khz / 1000);
+			pp_smu->set_min_deep_sleep_dcfclk(&pp_smu->pp_smu,
+					(new_clocks->dcfclk_deep_sleep_khz + 999) / 1000);
+		}
}
}
 
-- 
2.25.1



Re: [PATCH 04/40] drm/amd/amdgpu/amdgpu_virt: Correct possible copy/paste or doc-rot misnaming issue

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:20 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:115: warning: Function parameter or 
> member 'adev' not described in 'amdgpu_virt_request_full_gpu'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:115: warning: Excess function 
> parameter 'amdgpu' description in 'amdgpu_virt_request_full_gpu'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:138: warning: Function parameter or 
> member 'adev' not described in 'amdgpu_virt_release_full_gpu'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:138: warning: Excess function 
> parameter 'amdgpu' description in 'amdgpu_virt_release_full_gpu'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:159: warning: Function parameter or 
> member 'adev' not described in 'amdgpu_virt_reset_gpu'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:159: warning: Excess function 
> parameter 'amdgpu' description in 'amdgpu_virt_reset_gpu'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:194: warning: Function parameter or 
> member 'adev' not described in 'amdgpu_virt_wait_reset'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:194: warning: Excess function 
> parameter 'amdgpu' description in 'amdgpu_virt_wait_reset'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:210: warning: Function parameter or 
> member 'adev' not described in 'amdgpu_virt_alloc_mm_table'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:210: warning: Excess function 
> parameter 'amdgpu' description in 'amdgpu_virt_alloc_mm_table'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:239: warning: Function parameter or 
> member 'adev' not described in 'amdgpu_virt_free_mm_table'
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c:239: warning: Excess function 
> parameter 'amdgpu' description in 'amdgpu_virt_free_mm_table'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> index 905b85391e64a..462c5dd8ca72c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> @@ -106,7 +106,7 @@ void amdgpu_virt_kiq_reg_write_reg_wait(struct 
> amdgpu_device *adev,
>
>  /**
>   * amdgpu_virt_request_full_gpu() - request full gpu access
> - * @amdgpu:amdgpu device.
> + * @adev:  amdgpu device.
>   * @init:  is driver init time.
>   * When start to init/fini driver, first need to request full gpu access.
>   * Return: Zero if request success, otherwise will return error.
> @@ -129,7 +129,7 @@ int amdgpu_virt_request_full_gpu(struct amdgpu_device 
> *adev, bool init)
>
>  /**
>   * amdgpu_virt_release_full_gpu() - release full gpu access
> - * @amdgpu:amdgpu device.
> + * @adev:  amdgpu device.
>   * @init:  is driver init time.
>   * When finishing driver init/fini, need to release full gpu access.
>   * Return: Zero if release success, otherwise will returen error.
> @@ -151,7 +151,7 @@ int amdgpu_virt_release_full_gpu(struct amdgpu_device 
> *adev, bool init)
>
>  /**
>   * amdgpu_virt_reset_gpu() - reset gpu
> - * @amdgpu:amdgpu device.
> + * @adev:  amdgpu device.
>   * Send reset command to GPU hypervisor to reset GPU that VM is using
>   * Return: Zero if reset success, otherwise will return error.
>   */
> @@ -186,7 +186,7 @@ void amdgpu_virt_request_init_data(struct amdgpu_device 
> *adev)
>
>  /**
>   * amdgpu_virt_wait_reset() - wait for reset gpu completed
> - * @amdgpu:amdgpu device.
> + * @adev:  amdgpu device.
>   * Wait for GPU reset completed.
>   * Return: Zero if reset success, otherwise will return error.
>   */
> @@ -202,7 +202,7 @@ int amdgpu_virt_wait_reset(struct amdgpu_device *adev)
>
>  /**
>   * amdgpu_virt_alloc_mm_table() - alloc memory for mm table
> - * @amdgpu:amdgpu device.
> + * @adev:  amdgpu device.
>   * MM table is used by UVD and VCE for its initialization
>   * Return: Zero if allocate success.
>   */
> @@ -232,7 +232,7 @@ int amdgpu_virt_alloc_mm_table(struct amdgpu_device *adev)
>
>  /**
>   * amdgpu_virt_free_mm_table() - free mm table memory
> - * @amdgpu:amdgpu device.
> + * @adev:  amdgpu device.
>   * Free MM table memory
>   */
>  void amdgpu_virt_free_mm_table(struct amdgpu_device *adev)
> --
> 2.25.1
>


Re: [PATCH 01/40] drm/radeon/radeon_device: Consume our own header where the prototypes are located

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:19 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/radeon/radeon_device.c:637:6: warning: no previous prototype 
> for ‘radeon_device_is_virtual’ [-Wmissing-prototypes]
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/radeon/radeon_device.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
> b/drivers/gpu/drm/radeon/radeon_device.c
> index 7f384ffe848a7..ad572f965190b 100644
> --- a/drivers/gpu/drm/radeon/radeon_device.c
> +++ b/drivers/gpu/drm/radeon/radeon_device.c
> @@ -42,6 +42,7 @@
>  #include 
>  #include 
>
> +#include "radeon_device.h"
>  #include "radeon_reg.h"
>  #include "radeon.h"
>  #include "atom.h"
> --
> 2.25.1
>


Re: [PATCH v3 08/12] drm/amdgpu: Split amdgpu_device_fini into early and late

2020-11-24 Thread Andrey Grodzovsky



On 11/24/20 9:53 AM, Daniel Vetter wrote:

On Sat, Nov 21, 2020 at 12:21:18AM -0500, Andrey Grodzovsky wrote:

Some of the stuff in amdgpu_device_fini such as HW interrupts
disable and pending fences finalization must be done right away on
pci_remove, while most of the stuff which relates to finalizing and
releasing driver data structures can be kept until the
drm_driver.release hook is called, i.e. when the last device
reference is dropped.


Uh, fini_late and fini_early are rather meaningless namings, since it's not
clear why there's a split. If you used drm_connector_funcs as inspiration,
that's kinda not good because 'register' itself is a reserved keyword.
That's why we had to add late_ prefix, could as well have used
C_sucks_ as prefix :-) And then the early_unregister for consistency.

I think fini_hw and fini_sw (or maybe fini_drm) would be a lot clearer
about what they're doing.

I still strongly recommend that you cut over as much as possible of the
fini_hw work to devm_ and for the fini_sw/drm stuff there's drmm_
-Daniel



Definitely, and I put it in a TODO list in the RFC patch. Also, as I mentioned
before, I just prefer to leave it for follow-up work because it's non-trivial and
requires shuffling a lot of stuff around in the driver. I was thinking of
committing the work in incremental steps, so it's easier to merge and to catch
breakages.

Andrey





Signed-off-by: Andrey Grodzovsky 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu.h|  6 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 16 
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|  7 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 15 ++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c| 24 +++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h|  1 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c| 12 +++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c|  3 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h   |  3 ++-
  9 files changed, 65 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 83ac06a..6243f6d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1063,7 +1063,9 @@ static inline struct amdgpu_device 
*amdgpu_ttm_adev(struct ttm_bo_device *bdev)
  
  int amdgpu_device_init(struct amdgpu_device *adev,

   uint32_t flags);
-void amdgpu_device_fini(struct amdgpu_device *adev);
+void amdgpu_device_fini_early(struct amdgpu_device *adev);
+void amdgpu_device_fini_late(struct amdgpu_device *adev);
+
  int amdgpu_gpu_wait_for_idle(struct amdgpu_device *adev);
  
  void amdgpu_device_vram_access(struct amdgpu_device *adev, loff_t pos,

@@ -1275,6 +1277,8 @@ void amdgpu_driver_lastclose_kms(struct drm_device *dev);
  int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file 
*file_priv);
  void amdgpu_driver_postclose_kms(struct drm_device *dev,
 struct drm_file *file_priv);
+void amdgpu_driver_release_kms(struct drm_device *dev);
+
  int amdgpu_device_ip_suspend(struct amdgpu_device *adev);
  int amdgpu_device_suspend(struct drm_device *dev, bool fbcon);
  int amdgpu_device_resume(struct drm_device *dev, bool fbcon);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 2f60b70..797d94d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3557,14 +3557,12 @@ int amdgpu_device_init(struct amdgpu_device *adev,
   * Tear down the driver info (all asics).
   * Called at driver shutdown.
   */
-void amdgpu_device_fini(struct amdgpu_device *adev)
+void amdgpu_device_fini_early(struct amdgpu_device *adev)
  {
dev_info(adev->dev, "amdgpu: finishing device.\n");
	flush_delayed_work(&adev->delayed_init_work);
adev->shutdown = true;
  
-	kfree(adev->pci_state);

-
/* make sure IB test finished before entering exclusive mode
 * to avoid preemption on IB test
 * */
@@ -3581,11 +3579,18 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
else
drm_atomic_helper_shutdown(adev_to_drm(adev));
}
-   amdgpu_fence_driver_fini(adev);
+   amdgpu_fence_driver_fini_early(adev);
if (adev->pm_sysfs_en)
amdgpu_pm_sysfs_fini(adev);
amdgpu_fbdev_fini(adev);
+
+   amdgpu_irq_fini_early(adev);
+}
+
+void amdgpu_device_fini_late(struct amdgpu_device *adev)
+{
amdgpu_device_ip_fini(adev);
+   amdgpu_fence_driver_fini_late(adev);
release_firmware(adev->firmware.gpu_info_fw);
adev->firmware.gpu_info_fw = NULL;
adev->accel_working = false;
@@ -3621,6 +3626,9 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
amdgpu_pmu_fini(adev);
if (adev->mman.discovery_bin)
amdgpu_discovery_fini(adev);
+
+   

Re: [PATCH 05/40] drm/amd/amdgpu/cik_ih: Supply description for 'ih' in 'cik_ih_{get, set}_wptr()'

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:19 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/cik_ih.c:189: warning: Function parameter or 
> member 'ih' not described in 'cik_ih_get_wptr'
>  drivers/gpu/drm/amd/amdgpu/cik_ih.c:274: warning: Function parameter or 
> member 'ih' not described in 'cik_ih_set_rptr'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Qinglang Miao 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/cik_ih.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/cik_ih.c 
> b/drivers/gpu/drm/amd/amdgpu/cik_ih.c
> index db953e95f3d27..d3745711d55f9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/cik_ih.c
> +++ b/drivers/gpu/drm/amd/amdgpu/cik_ih.c
> @@ -177,6 +177,7 @@ static void cik_ih_irq_disable(struct amdgpu_device *adev)
>   * cik_ih_get_wptr - get the IH ring buffer wptr
>   *
>   * @adev: amdgpu_device pointer
> + * @ih: IH ring buffer to fetch wptr
>   *
>   * Get the IH ring buffer wptr from either the register
>   * or the writeback memory buffer (CIK).  Also check for
> @@ -266,6 +267,7 @@ static void cik_ih_decode_iv(struct amdgpu_device *adev,
>   * cik_ih_set_rptr - set the IH ring buffer rptr
>   *
>   * @adev: amdgpu_device pointer
> + * @ih: IH ring buffer to set wptr
>   *
>   * Set the IH ring buffer rptr.
>   */
> --
> 2.25.1
>


Re: [PATCH 03/40] drm/amd/amdgpu/amdgpu_ib: Provide docs for 'amdgpu_ib_schedule()'s 'job' param

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:19 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c:127: warning: Function parameter or 
> member 'job' not described in 'amdgpu_ib_schedule'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex


> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> index c69af9b86cc60..024d0a563a652 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> @@ -106,6 +106,7 @@ void amdgpu_ib_free(struct amdgpu_device *adev, struct 
> amdgpu_ib *ib,
>   * @ring: ring index the IB is associated with
>   * @num_ibs: number of IBs to schedule
>   * @ibs: IB objects to schedule
> + * @job: job to schedule
>   * @f: fence created during this submission
>   *
>   * Schedule an IB on the associated ring (all asics).
> --
> 2.25.1
>


Re: [PATCH 02/40] drm/amd/amdgpu/amdgpu_ttm: Add description for 'page_flags'

2020-11-24 Thread Alex Deucher
On Mon, Nov 23, 2020 at 6:19 AM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c:1214: warning: Function parameter or 
> member 'page_flags' not described in 'amdgpu_ttm_tt_create'
>
> Cc: Alex Deucher 
> Cc: "Christian König" 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: Jerome Glisse 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Signed-off-by: Lee Jones 

Applied.  Thanks!

Alex


> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 5fcdd67e5a913..debbcef961dd5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -1199,6 +1199,7 @@ static void amdgpu_ttm_backend_destroy(struct 
> ttm_bo_device *bdev,
>   * amdgpu_ttm_tt_create - Create a ttm_tt object for a given BO
>   *
>   * @bo: The buffer object to create a GTT ttm_tt object around
> + * @page_flags: Page flags to be added to the ttm_tt object
>   *
>   * Called by ttm_tt_create().
>   */
> --
> 2.25.1
>


Re: [PATCH 4/4] drm/amd/display: don't expose rotation prop for cursor plane

2020-11-24 Thread Alex Deucher
On Fri, Nov 20, 2020 at 3:19 PM Simon Ser  wrote:
>
> Setting any rotation on the cursor plane is ignored by amdgpu.
> Because of DCE/DCN design, it's not possible to rotate the cursor.
> Instead of displaying the wrong result, stop advertising the rotation
> property for the cursor plane.
>
> Now that we check all cursor plane properties in amdgpu_dm_atomic_check,
> remove the TODO.
>
> Signed-off-by: Simon Ser 
> Cc: Alex Deucher 
> Cc: Harry Wentland 
> Cc: Nicholas Kazlauskas 

Applied the series.  Thanks!

Alex


> ---
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 2542571a8993..3283e22241d7 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -6592,7 +6592,8 @@ static int amdgpu_dm_plane_init(struct 
> amdgpu_display_manager *dm,
> DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_90 |
> DRM_MODE_ROTATE_180 | DRM_MODE_ROTATE_270;
>
> -   if (dm->adev->asic_type >= CHIP_BONAIRE)
> +   if (dm->adev->asic_type >= CHIP_BONAIRE &&
> +   plane->type != DRM_PLANE_TYPE_CURSOR)
> drm_plane_create_rotation_property(plane, DRM_MODE_ROTATE_0,
>supported_rotations);
>
> @@ -8887,7 +8888,6 @@ static int dm_update_plane_state(struct dc *dc,
> dm_new_plane_state = to_dm_plane_state(new_plane_state);
> dm_old_plane_state = to_dm_plane_state(old_plane_state);
>
> -   /*TODO Implement better atomic check for cursor plane */
> if (plane->type == DRM_PLANE_TYPE_CURSOR) {
> if (!enable || !new_plane_crtc ||
> drm_atomic_plane_disabling(plane->state, 
> new_plane_state))
> --
> 2.29.2
>
>


RE: [PATCH] drm/amdgpu/dce_virtual: Enable vBlank control for vf

2020-11-24 Thread Zhang, Hawking
[AMD Public Use]

Reviewed-by: Hawking Zhang 

Regards,
Hawking
-Original Message-
From: amd-gfx  On Behalf Of Liu, Shaoyun
Sent: Tuesday, November 24, 2020 23:32
To: amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/amdgpu/dce_virtual: Enable vBlank control for vf

[AMD Official Use Only - Internal Distribution Only]

Ping

-Original Message-
From: Liu, Shaoyun  
Sent: Monday, November 23, 2020 12:25 PM
To: amd-gfx@lists.freedesktop.org
Cc: Liu, Shaoyun 
Subject: [PATCH] drm/amdgpu/dce_virtual: Enable vBlank control for vf

This function actually controls vblank on/off. It shouldn't be bypassed
for VF; otherwise all vblank-based features on VF will not work.

Signed-off-by: shaoyunl 
Change-Id: I77c6f57bb0af390b61f0049c12bf425b10d70d91
---
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c 
b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
index b4d4b76538d2..ffcc64ec6473 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
@@ -139,9 +139,6 @@ static void dce_virtual_crtc_dpms(struct drm_crtc *crtc, 
int mode)
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
unsigned type;
 
-   if (amdgpu_sriov_vf(adev))
-   return;
-
switch (mode) {
case DRM_MODE_DPMS_ON:
amdgpu_crtc->enabled = true;
-- 
2.17.1


RE: [PATCH] drm/amdgpu/dce_virtual: Enable vBlank control for vf

2020-11-24 Thread Liu, Shaoyun
[AMD Official Use Only - Internal Distribution Only]

Ping

-Original Message-
From: Liu, Shaoyun  
Sent: Monday, November 23, 2020 12:25 PM
To: amd-gfx@lists.freedesktop.org
Cc: Liu, Shaoyun 
Subject: [PATCH] drm/amdgpu/dce_virtual: Enable vBlank control for vf

This function actually controls vblank on/off. It shouldn't be bypassed
for VF; otherwise all vblank-based features on VF will not work.

Signed-off-by: shaoyunl 
Change-Id: I77c6f57bb0af390b61f0049c12bf425b10d70d91
---
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c 
b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
index b4d4b76538d2..ffcc64ec6473 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
@@ -139,9 +139,6 @@ static void dce_virtual_crtc_dpms(struct drm_crtc *crtc, 
int mode)
struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
unsigned type;
 
-   if (amdgpu_sriov_vf(adev))
-   return;
-
switch (mode) {
case DRM_MODE_DPMS_ON:
amdgpu_crtc->enabled = true;
-- 
2.17.1


Re: [PATCH v3 08/12] drm/amdgpu: Split amdgpu_device_fini into early and late

2020-11-24 Thread Daniel Vetter
On Sat, Nov 21, 2020 at 12:21:18AM -0500, Andrey Grodzovsky wrote:
> Some of the stuff in amdgpu_device_fini such as HW interrupts
> disable and pending fences finalization must be done right away on
> pci_remove, while most of the stuff which relates to finalizing and
> releasing driver data structures can be kept until the
> drm_driver.release hook is called, i.e. when the last device
> reference is dropped.
> 

Uh, fini_late and fini_early are rather meaningless namings, since it's not
clear why there's a split. If you used drm_connector_funcs as inspiration,
that's kinda not good because 'register' itself is a reserved keyword.
That's why we had to add late_ prefix, could as well have used
C_sucks_ as prefix :-) And then the early_unregister for consistency.

I think fini_hw and fini_sw (or maybe fini_drm) would be a lot clearer
about what they're doing.

I still strongly recommend that you cut over as much as possible of the
fini_hw work to devm_ and for the fini_sw/drm stuff there's drmm_
-Daniel

> Signed-off-by: Andrey Grodzovsky 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h|  6 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 16 
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|  7 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 15 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c| 24 +++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h|  1 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c| 12 +++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c|  3 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h   |  3 ++-
>  9 files changed, 65 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 83ac06a..6243f6d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -1063,7 +1063,9 @@ static inline struct amdgpu_device 
> *amdgpu_ttm_adev(struct ttm_bo_device *bdev)
>  
>  int amdgpu_device_init(struct amdgpu_device *adev,
>  uint32_t flags);
> -void amdgpu_device_fini(struct amdgpu_device *adev);
> +void amdgpu_device_fini_early(struct amdgpu_device *adev);
> +void amdgpu_device_fini_late(struct amdgpu_device *adev);
> +
>  int amdgpu_gpu_wait_for_idle(struct amdgpu_device *adev);
>  
>  void amdgpu_device_vram_access(struct amdgpu_device *adev, loff_t pos,
> @@ -1275,6 +1277,8 @@ void amdgpu_driver_lastclose_kms(struct drm_device 
> *dev);
>  int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file 
> *file_priv);
>  void amdgpu_driver_postclose_kms(struct drm_device *dev,
>struct drm_file *file_priv);
> +void amdgpu_driver_release_kms(struct drm_device *dev);
> +
>  int amdgpu_device_ip_suspend(struct amdgpu_device *adev);
>  int amdgpu_device_suspend(struct drm_device *dev, bool fbcon);
>  int amdgpu_device_resume(struct drm_device *dev, bool fbcon);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 2f60b70..797d94d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -3557,14 +3557,12 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>   * Tear down the driver info (all asics).
>   * Called at driver shutdown.
>   */
> -void amdgpu_device_fini(struct amdgpu_device *adev)
> +void amdgpu_device_fini_early(struct amdgpu_device *adev)
>  {
>   dev_info(adev->dev, "amdgpu: finishing device.\n");
>   flush_delayed_work(&adev->delayed_init_work);
>   adev->shutdown = true;
>  
> - kfree(adev->pci_state);
> -
>   /* make sure IB test finished before entering exclusive mode
>* to avoid preemption on IB test
>* */
> @@ -3581,11 +3579,18 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
>   else
>   drm_atomic_helper_shutdown(adev_to_drm(adev));
>   }
> - amdgpu_fence_driver_fini(adev);
> + amdgpu_fence_driver_fini_early(adev);
>   if (adev->pm_sysfs_en)
>   amdgpu_pm_sysfs_fini(adev);
>   amdgpu_fbdev_fini(adev);
> +
> + amdgpu_irq_fini_early(adev);
> +}
> +
> +void amdgpu_device_fini_late(struct amdgpu_device *adev)
> +{
>   amdgpu_device_ip_fini(adev);
> + amdgpu_fence_driver_fini_late(adev);
>   release_firmware(adev->firmware.gpu_info_fw);
>   adev->firmware.gpu_info_fw = NULL;
>   adev->accel_working = false;
> @@ -3621,6 +3626,9 @@ void amdgpu_device_fini(struct amdgpu_device *adev)
>   amdgpu_pmu_fini(adev);
>   if (adev->mman.discovery_bin)
>   amdgpu_discovery_fini(adev);
> +
> + kfree(adev->pci_state);
> +
>  }
>  
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index 7f98cf1..3d130fc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -1244,14 +1244,10 @@ 
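
The early/late split introduced above (quiesce the device at unplug time, free resources on final release) can be sketched as a hypothetical userspace model; the names below are illustrative stand-ins, not the amdgpu API:

```c
/* Hypothetical model of the early/late teardown split: "early" runs at
 * unplug time and only quiesces the device, while "late" runs once the
 * last reference drops and frees memory. */
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

struct fake_dev {
    bool shutdown;   /* set early: new work is rejected from here on */
    char *fw_blob;   /* freed late: lookups may still run until then */
};

static struct fake_dev *dev_init(void)
{
    struct fake_dev *d = calloc(1, sizeof(*d));
    d->fw_blob = malloc(16);
    strcpy(d->fw_blob, "firmware");
    return d;
}

/* Unplug path: stop IRQs/fences and tear down user-visible interfaces. */
static void dev_fini_early(struct fake_dev *d)
{
    d->shutdown = true;
}

/* Final-release path: nothing can reach the device anymore, so free. */
static void dev_fini_late(struct fake_dev *d)
{
    free(d->fw_blob);
    d->fw_blob = NULL;
    free(d);
}
```

A driver would call dev_fini_early() from its remove/unplug callback and dev_fini_late() from the DRM release callback, mirroring the amdgpu_device_fini_early()/_late() pair in the patch.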

Re: [PATCH v3 10/12] drm/amdgpu: Avoid sysfs dirs removal post device unplug

2020-11-24 Thread Daniel Vetter
On Sat, Nov 21, 2020 at 12:21:20AM -0500, Andrey Grodzovsky wrote:
> Avoids NULL ptr due to kobj->sd being unset on device removal.
> 
> Signed-off-by: Andrey Grodzovsky 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c   | 4 +++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 4 +++-
>  2 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> index caf828a..812e592 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> @@ -27,6 +27,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include "amdgpu.h"
>  #include "amdgpu_ras.h"
> @@ -1043,7 +1044,8 @@ static int amdgpu_ras_sysfs_remove_feature_node(struct 
> amdgpu_device *adev)
>   .attrs = attrs,
>   };
>  
> - sysfs_remove_group(&adev->dev->kobj, &group);
> + if (!drm_dev_is_unplugged(&adev->ddev))
> + sysfs_remove_group(&adev->dev->kobj, &group);

This looks wrong. sysfs, like any other interface, should be
unconditionally thrown out when we do the drm_dev_unregister. Whether it was
hotunplugged or not shouldn't matter at all. Either this isn't needed at all,
or something is wrong with the ordering here. But definitely fishy.
-Daniel

>  
>   return 0;
>  }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> index 2b7c90b..54331fc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
> @@ -24,6 +24,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include "amdgpu.h"
>  #include "amdgpu_ucode.h"
> @@ -464,7 +465,8 @@ int amdgpu_ucode_sysfs_init(struct amdgpu_device *adev)
>  
>  void amdgpu_ucode_sysfs_fini(struct amdgpu_device *adev)
>  {
> - sysfs_remove_group(&adev->dev->kobj, &fw_attr_group);
> + if (!drm_dev_is_unplugged(&adev->ddev))
> + sysfs_remove_group(&adev->dev->kobj, &fw_attr_group);
>  }
>  
>  static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
> -- 
> 2.7.4
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
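
Daniel's point above — that interface teardown should happen unconditionally at drm_dev_unregister rather than being skipped when the device is unplugged — can be sketched as a small, hypothetical C model (the names are illustrative, not the kernel API):

```c
/* Hypothetical model: sysfs-style teardown runs on every teardown path
 * (unplug or not) and is made idempotent, instead of being guarded by
 * an "is the device unplugged?" check at each call site. */
#include <assert.h>
#include <stdbool.h>

struct fake_dev {
    bool sysfs_registered;
    int removals;            /* how many times the group was removed */
};

static void dev_register(struct fake_dev *d)
{
    d->sysfs_registered = true;
}

/* Called from unregister on every teardown path. */
static void dev_unregister(struct fake_dev *d)
{
    if (d->sysfs_registered) {   /* idempotent: safe to call twice */
        d->sysfs_registered = false;
        d->removals++;
    }
}
```

The drm_dev_is_unplugged() guard in the patch, by contrast, makes the removal conditional at the call site — which is exactly what the review flags as fishy ordering.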


Re: [PATCH 000/141] Fix fall-through warnings for Clang

2020-11-24 Thread Gustavo A. R. Silva
On Mon, Nov 23, 2020 at 08:38:46PM +, Mark Brown wrote:
> On Fri, 20 Nov 2020 12:21:39 -0600, Gustavo A. R. Silva wrote:
> > This series aims to fix almost all remaining fall-through warnings in
> > order to enable -Wimplicit-fallthrough for Clang.
> > 
> > In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
> > add multiple break/goto/return/fallthrough statements instead of just
> > letting the code fall through to the next case.
> > 
> > [...]
> 
> Applied to
> 
>https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator.git 
> for-next
> 
> Thanks!
> 
> [1/1] regulator: as3722: Fix fall-through warnings for Clang
>   commit: b52b417ccac4fae5b1f2ec4f1d46eb91e4493dc5

Thank you, Mark.
--
Gustavo


Re: [PATCH 000/141] Fix fall-through warnings for Clang

2020-11-24 Thread Gustavo A. R. Silva
On Mon, Nov 23, 2020 at 04:03:45PM -0400, Jason Gunthorpe wrote:
> On Fri, Nov 20, 2020 at 12:21:39PM -0600, Gustavo A. R. Silva wrote:
> 
> >   IB/hfi1: Fix fall-through warnings for Clang
> >   IB/mlx4: Fix fall-through warnings for Clang
> >   IB/qedr: Fix fall-through warnings for Clang
> >   RDMA/mlx5: Fix fall-through warnings for Clang
> 
> I picked these four to the rdma tree, thanks

Awesome. :)

Thank you, Jason.
--
Gustavo


Re: [PATCH v3 02/12] drm: Unmap the entire device address space on device unplug

2020-11-24 Thread Daniel Vetter
On Sat, Nov 21, 2020 at 03:16:15PM +0100, Christian König wrote:
> On 21.11.20 at 06:21, Andrey Grodzovsky wrote:
> > Invalidate all BOs CPU mappings once device is removed.
> > 
> > v3: Move the code from TTM into drm_dev_unplug
> > 
> > Signed-off-by: Andrey Grodzovsky 
> 
> Reviewed-by: Christian König 

Was wondering for a moment whether this should be in drm_dev_unregister
instead, but then it's only one part of the coin really. So

Reviewed-by: Daniel Vetter 

> 
> > ---
> >   drivers/gpu/drm/drm_drv.c | 3 +++
> >   1 file changed, 3 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
> > index 13068fd..d550fd5 100644
> > --- a/drivers/gpu/drm/drm_drv.c
> > +++ b/drivers/gpu/drm/drm_drv.c
> > @@ -479,6 +479,9 @@ void drm_dev_unplug(struct drm_device *dev)
> > synchronize_srcu(_unplug_srcu);
> > drm_dev_unregister(dev);
> > +
> > +   /* Clear all CPU mappings pointing to this device */
> > +   unmap_mapping_range(dev->anon_inode->i_mapping, 0, 0, 1);
> >   }
> >   EXPORT_SYMBOL(drm_dev_unplug);
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


RE: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd fw in SRIOV for navi12

2020-11-24 Thread Yang, Stanley
[AMD Public Use]

Hi Guchun,

This is an oversight. I forgot to remove it from the first patch version.

Regards,
Stanley
> -Original Message-
> From: Chen, Guchun 
> Sent: Tuesday, November 24, 2020 9:47 PM
> To: Yang, Stanley ; amd-gfx@lists.freedesktop.org;
> Chen, JingWen 
> Cc: Yang, Stanley 
> Subject: RE: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd
> fw in SRIOV for navi12
> 
> [AMD Public Use]
> 
> --- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> +++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> @@ -208,14 +208,13 @@ static int vega10_smu_init(struct pp_hwmgr
> *hwmgr)
>   unsigned long tools_size;
>   int ret;
>   struct cgs_firmware_info info = {0};
> + struct amdgpu_device *adev = hwmgr->adev;
> 
> Why add this local variable? Looks like no one is using it.
> 
> Regards,
> Guchun
> 
> -Original Message-
> From: amd-gfx  On Behalf Of
> Stanley.Yang
> Sent: Tuesday, November 24, 2020 5:49 PM
> To: amd-gfx@lists.freedesktop.org; Chen, JingWen
> 
> Cc: Yang, Stanley 
> Subject: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd fw
> in SRIOV for navi12
> 
> The KFDTopologyTest.BasicTest will fail if we skip smc, sdma, sos, ta and asd
> fw in SRIOV for vega10, so adjust the above fw handling and skip loading them
> in SRIOV only for navi12.
> 
> v2: remove unnecessary asic type check.
> 
> Signed-off-by: Stanley.Yang 
> ---
>  drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c  |  3 ---
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c  |  3 ---
>  .../gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c | 13 ++---
> 
>  drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c   |  2 +-
>  5 files changed, 8 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> index 16b551f330a4..8309dd95aa48 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> @@ -593,9 +593,6 @@ static int sdma_v4_0_init_microcode(struct
> amdgpu_device *adev)
>   struct amdgpu_firmware_info *info = NULL;
>   const struct common_firmware_header *header = NULL;
> 
> - if (amdgpu_sriov_vf(adev))
> - return 0;
> -
>   DRM_DEBUG("\n");
> 
>   switch (adev->asic_type) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> index 9c72b95b7463..fad1cc394219 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> @@ -203,7 +203,7 @@ static int sdma_v5_0_init_microcode(struct
> amdgpu_device *adev)
>   const struct common_firmware_header *header = NULL;
>   const struct sdma_firmware_header_v1_0 *hdr;
> 
> - if (amdgpu_sriov_vf(adev))
> + if (amdgpu_sriov_vf(adev) && (adev->asic_type == CHIP_NAVI12))
>   return 0;
> 
>   DRM_DEBUG("\n");
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> index cb5a6f1437f8..5ea11a0f568f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> @@ -153,9 +153,6 @@ static int sdma_v5_2_init_microcode(struct
> amdgpu_device *adev)
>   struct amdgpu_firmware_info *info = NULL;
>   const struct common_firmware_header *header = NULL;
> 
> - if (amdgpu_sriov_vf(adev))
> - return 0;
> -
>   DRM_DEBUG("\n");
> 
>   switch (adev->asic_type) {
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> index daf122f24f23..e2192d8762a4 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> +++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
> @@ -208,14 +208,13 @@ static int vega10_smu_init(struct pp_hwmgr
> *hwmgr)
>   unsigned long tools_size;
>   int ret;
>   struct cgs_firmware_info info = {0};
> + struct amdgpu_device *adev = hwmgr->adev;
> 
> - if (!amdgpu_sriov_vf((struct amdgpu_device *)hwmgr->adev)) {
> - ret = cgs_get_firmware_info(hwmgr->device,
> - CGS_UCODE_ID_SMU,
> - &info);
> - if (ret || !info.kptr)
> - return -EINVAL;
> - }
> + ret = cgs_get_firmware_info(hwmgr->device,
> + CGS_UCODE_ID_SMU,
> + &info);
> + if (ret || !info.kptr)
> + return -EINVAL;
> 
>   priv = kzalloc(sizeof(struct vega10_smumgr), GFP_KERNEL);
> 
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> index 1904df5a3e20..80c0bfaed097 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> @@ -847,7 +847,7 @@ static int smu_sw_init(void *handle)
>   

Re: [PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed

2020-11-24 Thread Daniel Vetter
On Tue, Nov 24, 2020 at 02:56:51PM +0100, Thomas Zimmermann wrote:
> Hi
> 
> > On 24.11.20 at 14:36, Christian König wrote:
> > > On 24.11.20 at 13:15, Thomas Zimmermann wrote:
> > > [SNIP]
> > > > > > > First I wanted to put this into
> > > > > > > drm_gem_ttm_vmap/vunmap(), but then wondered why
> > > > > > > ttm_bo_vmap() does not acquire the lock internally?
> > > > > > > I'd expect that vmap/vunmap are close together and
> > > > > > > do not overlap for the same BO.
> > > > > > 
> > > > > > We have use cases like the following during command submission:
> > > > > > 
> > > > > > 1. lock
> > > > > > 2. map
> > > > > > 3. copy parts of the BO content somewhere else or patch
> > > > > > it with additional information
> > > > > > 4. unmap
> > > > > > 5. submit BO to the hardware
> > > > > > 6. add hardware fence to the BO to make sure it doesn't move
> > > > > > 7. unlock
> > > > > > 
> > > > > > That use case won't be possible with vmap/vunmap if we
> > > > > > move the lock/unlock into it and I hope to replace the
> > > > > > kmap/kunmap functions with them in the near term.
> > > > > > 
> > > > > > > Otherwise, acquiring the reservation lock would
> > > > > > > require another ref-counting variable or per-driver
> > > > > > > code.
> > > > > > 
> > > > > > Hui, why that? Just put this into
> > > > > > drm_gem_ttm_vmap/vunmap() helper as you initially
> > > > > > planned.
> > > > > 
> > > > > Given your example above, step one would acquire the lock,
> > > > > and step two would also acquire the lock as part of the vmap
> > > > > implementation. Wouldn't this fail (At least during unmap or
> > > > > unlock steps) ?
> > > > 
> > > > Oh, so you want to nest them? No, that is a rather bad no-go.
> > > 
> > > I don't want to nest/overlap them. My question was whether that
> > > would be required. Apparently not.
> > > 
> > > While the console's BO is being set for scanout, it's protected from
> > > movement via the pin/unpin implementation, right?
> > 
> > Yes, correct.
> > 
> > > The driver does not acquire the resv lock for longer periods. I'm
> > > asking because this would prevent any console-buffer updates while
> > > the console is being displayed.
> > 
> > Correct as well, we only hold the lock for things like command
> > submission, pinning, unpinning etc etc
> > 
> 
> Thanks for answering my questions.
> 
> > > 
> > > > 
> > > > You need to make sure that the lock is only taken from the FB
> > > > path which wants to vmap the object.
> > > > 
> > > > Why don't you lock the GEM object from the caller in the generic
> > > > FB implementation?
> > > 
> > > With the current blitter code, it breaks abstraction. If vmap/vunmap
> > > hold the lock implicitly, things would be easier.
> > 
> > Do you have a link to the code?
> 
> It's the damage blitter in the fbdev code. [1] While it flushes the shadow
> buffer into the BO, the BO has to be kept in place. I already changed it to
> lock struct drm_fb_helper.lock, but I don't think this is enough. TTM could
> still evict the BO concurrently.

So I'm not sure this is actually a problem: ttm could try to concurrently
evict the buffer we pinned into vram, and then just skip to the next one.

Plus atm generic fbdev isn't used on any chip where we really care about
those last few MB of VRAM being usable for command submission (well, atm
there's no driver using it).

Having the buffer pinned into system memory and trying to do a concurrent
modeset that tries to pull it in is the hard failure mode. And holding
fb_helper.lock fully prevents that.

So not really clear on what failure mode you're seeing here?

> There's no recursion taking place, so I guess the reservation lock could be
> acquired/release in drm_client_buffer_vmap/vunmap(), or a separate pair of
> DRM client functions could do the locking.

Given how this "do the right locking" is a can of worms (and I think it's
worse than what you dug out already) I think the fb_helper.lock hack is
perfectly good enough.

I'm also somewhat worried that starting to use dma_resv lock in generic
code, while many helpers/drivers still have their hand-rolled locking,
will make conversion over to dma_resv needlessly more complicated.
-Daniel

> 
> Best regards
> Thomas
> 
> [1] 
> https://cgit.freedesktop.org/drm/drm-tip/tree/drivers/gpu/drm/drm_fb_helper.c?id=ac60f3f3090115d21f028bffa2dcfb67f695c4f2#n394
> 
> > 
> > Please note that the reservation lock you need to take here is part of
> > the GEM object.
> > 
> > Usually we design things in the way that the code needs to take a lock
> > which protects an object, then do some operations with the object and
> > then release the lock again.
> > 
> > Having the lock inside the operation can be done as well, but
> > returning with it is kind of an unusual design.
> > 
> > > Sorry for the noob questions. I'm still trying to understand the
> > > implications of acquiring these locks.
> > 
> > Well this is the reservation lock of the GEM object we are talking about
> > here. We 

Re: [PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed

2020-11-24 Thread Christian König

On 24.11.20 at 14:56, Thomas Zimmermann wrote:

Hi

On 24.11.20 at 14:36, Christian König wrote:

On 24.11.20 at 13:15, Thomas Zimmermann wrote:

[SNIP]
First I wanted to put this into drm_gem_ttm_vmap/vunmap(), but 
then wondered why ttm_bo_vmap() does not acquire the lock 
internally? I'd expect that vmap/vunmap are close together and 
do not overlap for the same BO. 


We have use cases like the following during command submission:

1. lock
2. map
3. copy parts of the BO content somewhere else or patch it with 
additional information

4. unmap
5. submit BO to the hardware
6. add hardware fence to the BO to make sure it doesn't move
7. unlock

That use case won't be possible with vmap/vunmap if we move the 
lock/unlock into it and I hope to replace the kmap/kunmap 
functions with them in the near term.


Otherwise, acquiring the reservation lock would require another 
ref-counting variable or per-driver code.


Hui, why that? Just put this into drm_gem_ttm_vmap/vunmap() 
helper as you initially planned.


Given your example above, step one would acquire the lock, and 
step two would also acquire the lock as part of the vmap 
implementation. Wouldn't this fail (At least during unmap or 
unlock steps) ?


Oh, so you want to nest them? No, that is a rather bad no-go.


I don't want to nest/overlap them. My question was whether that 
would be required. Apparently not.


While the console's BO is being set for scanout, it's protected from 
movement via the pin/unpin implementation, right?


Yes, correct.

The driver does not acquire the resv lock for longer periods. I'm 
asking because this would prevent any console-buffer updates while 
the console is being displayed.


Correct as well, we only hold the lock for things like command 
submission, pinning, unpinning etc etc




Thanks for answering my questions.





You need to make sure that the lock is only taken from the FB path 
which wants to vmap the object.


Why don't you lock the GEM object from the caller in the generic FB 
implementation?


With the current blitter code, it breaks abstraction. If vmap/vunmap 
hold the lock implicitly, things would be easier.


Do you have a link to the code?


It's the damage blitter in the fbdev code. [1] While it flushes the 
shadow buffer into the BO, the BO has to be kept in place. I already 
changed it to lock struct drm_fb_helper.lock, but I don't think this 
is enough. TTM could still evict the BO concurrently.


Yeah, that's correct.

But I still don't fully understand the problem. You just need to change 
the code like this:


    mutex_lock(&fb_helper->lock);
    dma_resv_lock(buffer->gem->resv, NULL);

    ret = drm_client_buffer_vmap(buffer, &map);
    if (ret)
        goto out;

    dst = map;
    drm_fb_helper_damage_blit_real(fb_helper, clip, &dst);

    drm_client_buffer_vunmap(buffer);

out:
    dma_resv_unlock(buffer->gem->resv);
    mutex_unlock(&fb_helper->lock);


You could abstract that in drm_client functions as well, but I don't 
really see the value in that.


Regards,
Christian.

There's no recursion taking place, so I guess the reservation lock 
could be acquired/release in drm_client_buffer_vmap/vunmap(), or a 
separate pair of DRM client functions could do the locking.


Best regards
Thomas

[1] 
https://cgit.freedesktop.org/drm/drm-tip/tree/drivers/gpu/drm/drm_fb_helper.c?id=ac60f3f3090115d21f028bffa2dcfb67f695c4f2#n394




Please note that the reservation lock you need to take here is part 
of the GEM object.


Usually we design things in the way that the code needs to take a 
lock which protects an object, then do some operations with the 
object and then release the lock again.


Having the lock inside the operation can be done as well, but
returning with it is kind of an unusual design.


Sorry for the noob questions. I'm still trying to understand the 
implications of acquiring these locks.


Well this is the reservation lock of the GEM object we are talking 
about here. We need to take that for a couple of different 
operations, vmap/vunmap doesn't sound like a special case to me.


Regards,
Christian.



Best regards
Thomas


___
dri-devel mailing list
dri-de...@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel






Re: [PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed

2020-11-24 Thread Thomas Zimmermann

Hi

On 24.11.20 at 14:36, Christian König wrote:

On 24.11.20 at 13:15, Thomas Zimmermann wrote:

[SNIP]
First I wanted to put this into drm_gem_ttm_vmap/vunmap(), but 
then wondered why ttm_bo_vmap() does not acquire the lock 
internally? I'd expect that vmap/vunmap are close together and do 
not overlap for the same BO. 


We have use cases like the following during command submission:

1. lock
2. map
3. copy parts of the BO content somewhere else or patch it with 
additional information

4. unmap
5. submit BO to the hardware
6. add hardware fence to the BO to make sure it doesn't move
7. unlock

That use case won't be possible with vmap/vunmap if we move the 
lock/unlock into it and I hope to replace the kmap/kunmap functions 
with them in the near term.
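
This seven-step submission flow is the reason the reservation lock cannot live inside vmap/vunmap themselves. As a rough, hypothetical userspace model of the pattern (a plain flag stands in for the dma_resv lock; none of these names are the real TTM API):

```c
/* Hypothetical model of the command-submission flow: map/unmap happen
 * *inside* one reservation-lock section, so vmap() itself must not try
 * to take the lock. */
#include <assert.h>
#include <string.h>

struct fake_bo {
    int resv_locked;   /* stand-in for the BO's dma_resv lock */
    char backing[32];  /* stand-in for the BO's memory */
    int fences;        /* stand-in for attached hardware fences */
};

static char *bo_map(struct fake_bo *bo)
{
    assert(bo->resv_locked);   /* caller must hold the reservation */
    return bo->backing;
}

static void bo_unmap(struct fake_bo *bo)
{
    assert(bo->resv_locked);
}

static void submit(struct fake_bo *bo)
{
    bo->resv_locked = 1;          /* 1. lock */
    char *ptr = bo_map(bo);       /* 2. map */
    strcpy(ptr, "patched-ib");    /* 3. patch the BO contents */
    bo_unmap(bo);                 /* 4. unmap */
    bo->fences++;                 /* 5./6. submit + add hardware fence */
    bo->resv_locked = 0;          /* 7. unlock */
}
```

If bo_map() tried to take the lock itself, step 2 would deadlock against step 1 — which is exactly the nesting problem discussed in this thread.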


Otherwise, acquiring the reservation lock would require another 
ref-counting variable or per-driver code.


Hui, why that? Just put this into drm_gem_ttm_vmap/vunmap() helper 
as you initially planned.


Given your example above, step one would acquire the lock, and step 
two would also acquire the lock as part of the vmap implementation. 
Wouldn't this fail (At least during unmap or unlock steps) ?


Oh, so you want to nest them? No, that is a rather bad no-go.


I don't want to nest/overlap them. My question was whether that would 
be required. Apparently not.


While the console's BO is being set for scanout, it's protected from 
movement via the pin/unpin implementation, right?


Yes, correct.

The driver does not acquire the resv lock for longer periods. I'm 
asking because this would prevent any console-buffer updates while the 
console is being displayed.


Correct as well, we only hold the lock for things like command 
submission, pinning, unpinning etc etc




Thanks for answering my questions.





You need to make sure that the lock is only taken from the FB path 
which wants to vmap the object.


Why don't you lock the GEM object from the caller in the generic FB 
implementation?


With the current blitter code, it breaks abstraction. If vmap/vunmap 
hold the lock implicitly, things would be easier.


Do you have a link to the code?


It's the damage blitter in the fbdev code. [1] While it flushes the 
shadow buffer into the BO, the BO has to be kept in place. I already 
changed it to lock struct drm_fb_helper.lock, but I don't think this is 
enough. TTM could still evict the BO concurrently.


There's no recursion taking place, so I guess the reservation lock could 
be acquired/release in drm_client_buffer_vmap/vunmap(), or a separate 
pair of DRM client functions could do the locking.


Best regards
Thomas

[1] 
https://cgit.freedesktop.org/drm/drm-tip/tree/drivers/gpu/drm/drm_fb_helper.c?id=ac60f3f3090115d21f028bffa2dcfb67f695c4f2#n394




Please note that the reservation lock you need to take here is part of 
the GEM object.


Usually we design things in the way that the code needs to take a lock 
which protects an object, then do some operations with the object and 
then release the lock again.


Having the lock inside the operation can be done as well, but
returning with it is kind of an unusual design.


Sorry for the noob questions. I'm still trying to understand the 
implications of acquiring these locks.


Well this is the reservation lock of the GEM object we are talking about 
here. We need to take that for a couple of different operations, 
vmap/vunmap doesn't sound like a special case to me.


Regards,
Christian.



Best regards
Thomas




--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer




RE: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd fw in SRIOV for navi12

2020-11-24 Thread Chen, Guchun
[AMD Public Use]

--- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
@@ -208,14 +208,13 @@ static int vega10_smu_init(struct pp_hwmgr *hwmgr)
unsigned long tools_size;
int ret;
struct cgs_firmware_info info = {0};
+   struct amdgpu_device *adev = hwmgr->adev;

Why add this local variable? Looks like no one is using it.

Regards,
Guchun

-Original Message-
From: amd-gfx  On Behalf Of Stanley.Yang
Sent: Tuesday, November 24, 2020 5:49 PM
To: amd-gfx@lists.freedesktop.org; Chen, JingWen 
Cc: Yang, Stanley 
Subject: [PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd fw in 
SRIOV for navi12

The KFDTopologyTest.BasicTest will fail if we skip smc, sdma, sos, ta and asd fw 
in SRIOV for vega10, so adjust the above fw handling and skip loading them in 
SRIOV only for navi12.

v2: remove unnecessary asic type check.

Signed-off-by: Stanley.Yang 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c  |  3 ---
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c  |  3 ---
 .../gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c | 13 ++---
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c   |  2 +-
 5 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 16b551f330a4..8309dd95aa48 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -593,9 +593,6 @@ static int sdma_v4_0_init_microcode(struct amdgpu_device 
*adev)
struct amdgpu_firmware_info *info = NULL;
const struct common_firmware_header *header = NULL;
 
-   if (amdgpu_sriov_vf(adev))
-   return 0;
-
DRM_DEBUG("\n");
 
switch (adev->asic_type) {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 9c72b95b7463..fad1cc394219 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -203,7 +203,7 @@ static int sdma_v5_0_init_microcode(struct amdgpu_device 
*adev)
const struct common_firmware_header *header = NULL;
const struct sdma_firmware_header_v1_0 *hdr;
 
-   if (amdgpu_sriov_vf(adev))
+   if (amdgpu_sriov_vf(adev) && (adev->asic_type == CHIP_NAVI12))
return 0;
 
DRM_DEBUG("\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index cb5a6f1437f8..5ea11a0f568f 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -153,9 +153,6 @@ static int sdma_v5_2_init_microcode(struct amdgpu_device 
*adev)
struct amdgpu_firmware_info *info = NULL;
const struct common_firmware_header *header = NULL;
 
-   if (amdgpu_sriov_vf(adev))
-   return 0;
-
DRM_DEBUG("\n");
 
switch (adev->asic_type) {
diff --git a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c 
b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
index daf122f24f23..e2192d8762a4 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
@@ -208,14 +208,13 @@ static int vega10_smu_init(struct pp_hwmgr *hwmgr)
unsigned long tools_size;
int ret;
struct cgs_firmware_info info = {0};
+   struct amdgpu_device *adev = hwmgr->adev;

-   if (!amdgpu_sriov_vf((struct amdgpu_device *)hwmgr->adev)) {
-   ret = cgs_get_firmware_info(hwmgr->device,
-   CGS_UCODE_ID_SMU,
-   &info);
-   if (ret || !info.kptr)
-   return -EINVAL;
-   }
+   ret = cgs_get_firmware_info(hwmgr->device,
+   CGS_UCODE_ID_SMU,
+   &info);
+   if (ret || !info.kptr)
+   return -EINVAL;
 
priv = kzalloc(sizeof(struct vega10_smumgr), GFP_KERNEL);
 
diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c 
b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
index 1904df5a3e20..80c0bfaed097 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
@@ -847,7 +847,7 @@ static int smu_sw_init(void *handle)
smu->smu_dpm.dpm_level = AMD_DPM_FORCED_LEVEL_AUTO;
smu->smu_dpm.requested_dpm_level = AMD_DPM_FORCED_LEVEL_AUTO;
 
-   if (!amdgpu_sriov_vf(adev)) {
+   if (!amdgpu_sriov_vf(adev) || (adev->asic_type != CHIP_NAVI12)) {
ret = smu_init_microcode(smu);
if (ret) {
dev_err(adev->dev, "Failed to load smu firmware!\n");
--
2.17.1
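
For reference, the two guard styles in this patch — the early return in the SDMA code and the inverted condition in smu_sw_init — are logically equivalent: firmware loading is skipped only when the device is an SR-IOV VF *and* the ASIC is Navi12. A small, hypothetical C sketch of the predicate (function names and enum values are illustrative, not driver code):

```c
/* The two guard formulations from the patch, side by side. */
#include <assert.h>
#include <stdbool.h>

enum chip { CHIP_VEGA10, CHIP_NAVI12 };

/* sdma_v5_0 style: "if (sriov && navi12) return 0;" skips loading. */
static bool sdma_skips_fw(bool sriov_vf, enum chip asic)
{
    return sriov_vf && asic == CHIP_NAVI12;
}

/* smu_sw_init style: "if (!sriov || asic != navi12)" performs the load. */
static bool smu_loads_fw(bool sriov_vf, enum chip asic)
{
    return !sriov_vf || asic != CHIP_NAVI12;
}
```

By De Morgan, `!sriov || asic != navi12` is exactly the negation of `sriov && asic == navi12`, so a Vega10 VF still loads its firmware — which is what fixes the KFDTopologyTest failure.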


Re: [PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed

2020-11-24 Thread Christian König

On 24.11.20 at 13:15, Thomas Zimmermann wrote:

[SNIP]
First I wanted to put this into drm_gem_ttm_vmap/vunmap(), but 
then wondered why ttm_bo_vmap() does not acquire the lock 
internally? I'd expect that vmap/vunmap are close together and do 
not overlap for the same BO. 


We have use cases like the following during command submission:

1. lock
2. map
3. copy parts of the BO content somewhere else or patch it with 
additional information

4. unmap
5. submit BO to the hardware
6. add hardware fence to the BO to make sure it doesn't move
7. unlock

That use case won't be possible with vmap/vunmap if we move the 
lock/unlock into it and I hope to replace the kmap/kunmap functions 
with them in the near term.


Otherwise, acquiring the reservation lock would require another 
ref-counting variable or per-driver code.


Hui, why that? Just put this into drm_gem_ttm_vmap/vunmap() helper 
as you initially planned.


Given your example above, step one would acquire the lock, and step 
two would also acquire the lock as part of the vmap implementation. 
Wouldn't this fail (At least during unmap or unlock steps) ?


Oh, so you want to nest them? No, that is a rather bad no-go.


I don't want to nest/overlap them. My question was whether that would 
be required. Apparently not.


While the console's BO is being set for scanout, it's protected from 
movement via the pin/unpin implementation, right?


Yes, correct.

The driver does not acquire the resv lock for longer periods. I'm 
asking because this would prevent any console-buffer updates while the 
console is being displayed.


Correct as well, we only hold the lock for things like command 
submission, pinning, unpinning etc etc






You need to make sure that the lock is only taken from the FB path 
which wants to vmap the object.


Why don't you lock the GEM object from the caller in the generic FB 
implementation?


With the current blitter code, it breaks abstraction. If vmap/vunmap 
hold the lock implicitly, things would be easier.


Do you have a link to the code?

Please note that the reservation lock you need to take here is part of 
the GEM object.


Usually we design things in the way that the code needs to take a lock 
which protects an object, then do some operations with the object and 
then release the lock again.


Having the lock inside the operation can be done as well, but
returning with it is kind of an unusual design.


Sorry for the noob questions. I'm still trying to understand the 
implications of acquiring these locks.


Well this is the reservation lock of the GEM object we are talking about 
here. We need to take that for a couple of different operations, 
vmap/vunmap doesn't sound like a special case to me.


Regards,
Christian.



Best regards
Thomas




Re: [PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed

2020-11-24 Thread Thomas Zimmermann

Hi

On 24.11.20 at 12:54, Christian König wrote:

On 24.11.20 at 12:44, Thomas Zimmermann wrote:

Hi

On 24.11.20 12:30, Christian König wrote:

On 24.11.20 10:16, Thomas Zimmermann wrote:

Hi Christian

On 16.11.20 12:28, Christian König wrote:

On 13.11.20 08:59, Thomas Zimmermann wrote:

Hi Christian

On 12.11.20 18:16, Christian König wrote:

On 12.11.20 14:21, Thomas Zimmermann wrote:
In order to avoid eviction of vmap'ed buffers, pin them in their 
GEM
object's vmap implementation. Unpin them in the vunmap 
implementation.
This is needed to make generic fbdev support work reliably. 
Without it,
the buffer object could be evicted while fbdev flushes its 
shadow buffer.


Unlike the PRIME pin/unpin functions, the vmap code does not
modify the BO's prime_shared_count, so a vmap-pinned BO does not
count as shared.

The actual pin location is not important as the vmap call returns
information on how to access the buffer. Callers that require a
specific location should explicitly pin the BO before vmapping it.

Well is the buffer supposed to be scanned out?

No, not by the fbdev helper.


Ok in this case that should work.


If yes then the pin location is actually rather important since the
hardware can only scan out from VRAM.

For relocatable BOs, fbdev uses a shadow buffer that makes any
relocation transparent to userspace. It flushes the shadow fb into 
the

BO's memory if there are updates. The code is in
drm_fb_helper_dirty_work(). [1] During the flush operation, the vmap
call now pins the BO to wherever it is. The actual location does not
matter. It's vunmap'ed immediately afterwards.


The problem is what happens when it is prepared for scanout, but 
can't be moved to VRAM because it is vmapped?


When the shadow is never scanned out that isn't a problem, but we 
need to keep that in mind.




I'd like to ask for your suggestions before sending an update for this 
patch.


After the discussion about locking in fbdev, [1] I intended to 
replace the pin call with code that acquires the reservation lock.


Yeah, that sounds like a good idea to me as well.

First I wanted to put this into drm_gem_ttm_vmap/vunmap(), but then 
wondered why ttm_bo_vmap() does not acquire the lock internally? I'd 
expect that vmap/vunmap are close together and do not overlap for 
the same BO. 


We have use cases like the following during command submission:

1. lock
2. map
3. copy parts of the BO content somewhere else or patch it with 
additional information

4. unmap
5. submit BO to the hardware
6. add hardware fence to the BO to make sure it doesn't move
7. unlock

That use case won't be possible with vmap/vunmap if we move the 
lock/unlock into them, and I hope to replace the kmap/kunmap functions 
with them in the near term.


Otherwise, acquiring the reservation lock would require another 
ref-counting variable or per-driver code.


Hui, why that? Just put this into drm_gem_ttm_vmap/vunmap() helper as 
you initially planned.


Given your example above, step one would acquire the lock, and step 
two would also acquire the lock as part of the vmap implementation. 
Wouldn't this fail (at least during the unmap or unlock steps)?


Oh, so you want to nest them? No, that is a rather bad no-go.


I don't want to nest/overlap them. My question was whether that would be 
required. Apparently not.


While the console's BO is being set for scanout, it's protected from 
movement via the pin/unpin implementation, right? The driver does not 
acquire the resv lock for longer periods. I'm asking because this would 
prevent any console-buffer updates while the console is being displayed.




You need to make sure that the lock is only taken from the FB path which 
wants to vmap the object.


Why don't you lock the GEM object from the caller in the generic FB 
implementation?


With the current blitter code, it breaks abstraction. If vmap/vunmap 
hold the lock implicitly, things would be easier.


Sorry for the noob questions. I'm still trying to understand the 
implications of acquiring these locks.


Best regards
Thomas



Regards,
Christian.



Best regards
Thomas



Regards,
Christian.



Best regards
Thomas

[1] https://patchwork.freedesktop.org/patch/401088/?series=83918&rev=1


Regards,
Christian.



For dma-buf sharing, the regular procedure of pin + vmap still applies.
This should always move the BO into GTT-managed memory.

Best regards
Thomas

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/drm_fb_helper.c#n432




Regards,
Christian.


Signed-off-by: Thomas Zimmermann 
---

Re: [PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed

2020-11-24 Thread Christian König

On 24.11.20 12:44, Thomas Zimmermann wrote:

Hi

On 24.11.20 12:30, Christian König wrote:

On 24.11.20 10:16, Thomas Zimmermann wrote:

Hi Christian

On 16.11.20 12:28, Christian König wrote:

On 13.11.20 08:59, Thomas Zimmermann wrote:

Hi Christian

On 12.11.20 18:16, Christian König wrote:

On 12.11.20 14:21, Thomas Zimmermann wrote:
In order to avoid eviction of vmap'ed buffers, pin them in their 
GEM
object's vmap implementation. Unpin them in the vunmap 
implementation.
This is needed to make generic fbdev support work reliably. 
Without it,
the buffer object could be evicted while fbdev flushes its 
shadow buffer.


Unlike the PRIME pin/unpin functions, the vmap code does not
modify the BO's prime_shared_count, so a vmap-pinned BO does not
count as shared.

The actual pin location is not important as the vmap call returns
information on how to access the buffer. Callers that require a
specific location should explicitly pin the BO before vmapping it.

Well is the buffer supposed to be scanned out?

No, not by the fbdev helper.


Ok in this case that should work.


If yes then the pin location is actually rather important since the
hardware can only scan out from VRAM.

For relocatable BOs, fbdev uses a shadow buffer that makes any
relocation transparent to userspace. It flushes the shadow fb into 
the

BO's memory if there are updates. The code is in
drm_fb_helper_dirty_work(). [1] During the flush operation, the vmap
call now pins the BO to wherever it is. The actual location does not
matter. It's vunmap'ed immediately afterwards.


The problem is what happens when it is prepared for scanout, but 
can't be moved to VRAM because it is vmapped?


When the shadow is never scanned out that isn't a problem, but we 
need to keep that in mind.




I'd like to ask for your suggestions before sending an update for this 
patch.


After the discussion about locking in fbdev, [1] I intended to 
replace the pin call with code that acquires the reservation lock.


Yeah, that sounds like a good idea to me as well.

First I wanted to put this into drm_gem_ttm_vmap/vunmap(), but then 
wondered why ttm_bo_vmap() does not acquire the lock internally? I'd 
expect that vmap/vunmap are close together and do not overlap for 
the same BO. 


We have use cases like the following during command submission:

1. lock
2. map
3. copy parts of the BO content somewhere else or patch it with 
additional information

4. unmap
5. submit BO to the hardware
6. add hardware fence to the BO to make sure it doesn't move
7. unlock

That use case won't be possible with vmap/vunmap if we move the 
lock/unlock into them, and I hope to replace the kmap/kunmap functions 
with them in the near term.


Otherwise, acquiring the reservation lock would require another 
ref-counting variable or per-driver code.


Hui, why that? Just put this into drm_gem_ttm_vmap/vunmap() helper as 
you initially planned.


Given your example above, step one would acquire the lock, and step 
two would also acquire the lock as part of the vmap implementation. 
Wouldn't this fail (at least during the unmap or unlock steps)?


Oh, so you want to nest them? No, that is a rather bad no-go.

You need to make sure that the lock is only taken from the FB path which 
wants to vmap the object.


Why don't you lock the GEM object from the caller in the generic FB 
implementation?


Regards,
Christian.



Best regards
Thomas



Regards,
Christian.



Best regards
Thomas

[1] https://patchwork.freedesktop.org/patch/401088/?series=83918&rev=1


Regards,
Christian.



For dma-buf sharing, the regular procedure of pin + vmap still applies.
This should always move the BO into GTT-managed memory.

Best regards
Thomas

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/drm_fb_helper.c#n432




Regards,
Christian.


Signed-off-by: Thomas Zimmermann 
---
   drivers/gpu/drm/radeon/radeon_gem.c | 51 
+++--

   1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_gem.c
b/drivers/gpu/drm/radeon/radeon_gem.c
index d2876ce3bc9e..eaf7fc9a7b07 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -226,6 +226,53 @@ static int radeon_gem_handle_lockup(struct
radeon_device *rdev, int r)
   return r;
   }
   +static int radeon_gem_object_vmap(struct drm_gem_object *obj,
struct dma_buf_map *map)
+{
+    static const uint32_t any_domain = RADEON_GEM_DOMAIN_VRAM |
+   RADEON_GEM_DOMAIN_GTT |
+   

Re: [PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed

2020-11-24 Thread Thomas Zimmermann

Hi

On 24.11.20 12:30, Christian König wrote:

On 24.11.20 10:16, Thomas Zimmermann wrote:

Hi Christian

On 16.11.20 12:28, Christian König wrote:

On 13.11.20 08:59, Thomas Zimmermann wrote:

Hi Christian

On 12.11.20 18:16, Christian König wrote:

On 12.11.20 14:21, Thomas Zimmermann wrote:

In order to avoid eviction of vmap'ed buffers, pin them in their GEM
object's vmap implementation. Unpin them in the vunmap 
implementation.

This is needed to make generic fbdev support work reliably. Without it,
the buffer object could be evicted while fbdev flushes its shadow 
buffer.


Unlike the PRIME pin/unpin functions, the vmap code does not
modify the BO's prime_shared_count, so a vmap-pinned BO does not
count as shared.

The actual pin location is not important as the vmap call returns
information on how to access the buffer. Callers that require a
specific location should explicitly pin the BO before vmapping it.

Well is the buffer supposed to be scanned out?

No, not by the fbdev helper.


Ok in this case that should work.


If yes then the pin location is actually rather important since the
hardware can only scan out from VRAM.

For relocatable BOs, fbdev uses a shadow buffer that makes any
relocation transparent to userspace. It flushes the shadow fb into the
BO's memory if there are updates. The code is in
drm_fb_helper_dirty_work(). [1] During the flush operation, the vmap
call now pins the BO to wherever it is. The actual location does not
matter. It's vunmap'ed immediately afterwards.


The problem is what happens when it is prepared for scanout, but 
can't be moved to VRAM because it is vmapped?


When the shadow is never scanned out that isn't a problem, but we 
need to keep that in mind.




I'd like to ask for your suggestions before sending an update for this 
patch.


After the discussion about locking in fbdev, [1] I intended to replace 
the pin call with code that acquires the reservation lock.


Yeah, that sounds like a good idea to me as well.

First I wanted to put this into drm_gem_ttm_vmap/vunmap(), but then 
wondered why ttm_bo_vmap() does not acquire the lock internally? I'd 
expect that vmap/vunmap are close together and do not overlap for the 
same BO. 


We have use cases like the following during command submission:

1. lock
2. map
3. copy parts of the BO content somewhere else or patch it with 
additional information

4. unmap
5. submit BO to the hardware
6. add hardware fence to the BO to make sure it doesn't move
7. unlock

That use case won't be possible with vmap/vunmap if we move the 
lock/unlock into them, and I hope to replace the kmap/kunmap functions with 
them in the near term.


Otherwise, acquiring the reservation lock would require another 
ref-counting variable or per-driver code.


Hui, why that? Just put this into drm_gem_ttm_vmap/vunmap() helper as 
you initially planned.


Given your example above, step one would acquire the lock, and step two 
would also acquire the lock as part of the vmap implementation. Wouldn't 
this fail (at least during the unmap or unlock steps)?


Best regards
Thomas



Regards,
Christian.



Best regards
Thomas

[1] https://patchwork.freedesktop.org/patch/401088/?series=83918&rev=1


Regards,
Christian.



For dma-buf sharing, the regular procedure of pin + vmap still applies.
This should always move the BO into GTT-managed memory.

Best regards
Thomas

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/drm_fb_helper.c#n432




Regards,
Christian.


Signed-off-by: Thomas Zimmermann 
---
   drivers/gpu/drm/radeon/radeon_gem.c | 51 
+++--

   1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_gem.c
b/drivers/gpu/drm/radeon/radeon_gem.c
index d2876ce3bc9e..eaf7fc9a7b07 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -226,6 +226,53 @@ static int radeon_gem_handle_lockup(struct
radeon_device *rdev, int r)
   return r;
   }
   +static int radeon_gem_object_vmap(struct drm_gem_object *obj,
struct dma_buf_map *map)
+{
+    static const uint32_t any_domain = RADEON_GEM_DOMAIN_VRAM |
+   RADEON_GEM_DOMAIN_GTT |
+   RADEON_GEM_DOMAIN_CPU;
+
+    struct radeon_bo *bo = gem_to_radeon_bo(obj);
+    int ret;
+
+    ret = radeon_bo_reserve(bo, false);
+    if (ret)
+    return ret;
+
+    /* pin buffer at its current location */
+    ret = radeon_bo_pin(bo, any_domain, NULL);
+    if (ret)
+    goto err_radeon_bo_unreserve;
+
+    ret = 

[PATCH 07/15] drm/i915: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert i915 to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Jani Nikula 
Cc: Joonas Lahtinen 
Cc: Rodrigo Vivi 
---
 drivers/gpu/drm/i915/display/intel_bios.c |  2 +-
 drivers/gpu/drm/i915/display/intel_cdclk.c| 14 ++---
 drivers/gpu/drm/i915/display/intel_csr.c  |  2 +-
 drivers/gpu/drm/i915/display/intel_dsi_vbt.c  |  2 +-
 drivers/gpu/drm/i915/display/intel_fbdev.c|  2 +-
 drivers/gpu/drm/i915/display/intel_gmbus.c|  2 +-
 .../gpu/drm/i915/display/intel_lpe_audio.c|  5 +++--
 drivers/gpu/drm/i915/display/intel_opregion.c |  6 +++---
 drivers/gpu/drm/i915/display/intel_overlay.c  |  2 +-
 drivers/gpu/drm/i915/display/intel_panel.c|  4 ++--
 drivers/gpu/drm/i915/display/intel_quirks.c   |  2 +-
 drivers/gpu/drm/i915/display/intel_sdvo.c |  2 +-
 drivers/gpu/drm/i915/display/intel_vga.c  |  8 
 drivers/gpu/drm/i915/gem/i915_gem_phys.c  |  6 +++---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  2 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c |  2 +-
 drivers/gpu/drm/i915/gt/intel_ggtt.c  | 10 +-
 drivers/gpu/drm/i915/gt/intel_ppgtt.c |  2 +-
 drivers/gpu/drm/i915/gt/intel_rc6.c   |  4 ++--
 drivers/gpu/drm/i915/gt/intel_reset.c |  6 +++---
 drivers/gpu/drm/i915/gvt/cfg_space.c  |  5 +++--
 drivers/gpu/drm/i915/gvt/firmware.c   | 10 +-
 drivers/gpu/drm/i915/gvt/gtt.c| 12 +--
 drivers/gpu/drm/i915/gvt/gvt.c|  6 +++---
 drivers/gpu/drm/i915/gvt/kvmgt.c  |  4 ++--
 drivers/gpu/drm/i915/i915_debugfs.c   |  2 +-
 drivers/gpu/drm/i915/i915_drv.c   | 20 +--
 drivers/gpu/drm/i915/i915_drv.h   |  2 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c   |  4 ++--
 drivers/gpu/drm/i915/i915_getparam.c  |  5 +++--
 drivers/gpu/drm/i915/i915_gpu_error.c |  2 +-
 drivers/gpu/drm/i915/i915_irq.c   |  6 +++---
 drivers/gpu/drm/i915/i915_pmu.c   |  5 +++--
 drivers/gpu/drm/i915/i915_suspend.c   |  4 ++--
 drivers/gpu/drm/i915/i915_switcheroo.c|  4 ++--
 drivers/gpu/drm/i915/i915_vgpu.c  |  2 +-
 drivers/gpu/drm/i915/intel_device_info.c  |  2 +-
 drivers/gpu/drm/i915/intel_region_lmem.c  |  8 
 drivers/gpu/drm/i915/intel_runtime_pm.c   |  2 +-
 drivers/gpu/drm/i915/intel_uncore.c   |  4 ++--
 .../gpu/drm/i915/selftests/mock_gem_device.c  |  1 -
 drivers/gpu/drm/i915/selftests/mock_gtt.c |  2 +-
 42 files changed, 99 insertions(+), 98 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_bios.c 
b/drivers/gpu/drm/i915/display/intel_bios.c
index 4cc949b228f2..8879676372a3 100644
--- a/drivers/gpu/drm/i915/display/intel_bios.c
+++ b/drivers/gpu/drm/i915/display/intel_bios.c
@@ -2088,7 +2088,7 @@ bool intel_bios_is_valid_vbt(const void *buf, size_t size)
 
 static struct vbt_header *oprom_get_vbt(struct drm_i915_private *dev_priv)
 {
-   struct pci_dev *pdev = dev_priv->drm.pdev;
+   struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
void __iomem *p = NULL, *oprom;
struct vbt_header *vbt;
u16 vbt_size;
diff --git a/drivers/gpu/drm/i915/display/intel_cdclk.c 
b/drivers/gpu/drm/i915/display/intel_cdclk.c
index c449d28d0560..a6e13208dc50 100644
--- a/drivers/gpu/drm/i915/display/intel_cdclk.c
+++ b/drivers/gpu/drm/i915/display/intel_cdclk.c
@@ -96,7 +96,7 @@ static void fixed_450mhz_get_cdclk(struct drm_i915_private 
*dev_priv,
 static void i85x_get_cdclk(struct drm_i915_private *dev_priv,
   struct intel_cdclk_config *cdclk_config)
 {
-   struct pci_dev *pdev = dev_priv->drm.pdev;
+   struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
u16 hpllcc = 0;
 
/*
@@ -138,7 +138,7 @@ static void i85x_get_cdclk(struct drm_i915_private 
*dev_priv,
 static void i915gm_get_cdclk(struct drm_i915_private *dev_priv,
 struct intel_cdclk_config *cdclk_config)
 {
-   struct pci_dev *pdev = dev_priv->drm.pdev;
+   struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
u16 gcfgc = 0;
 
pci_read_config_word(pdev, GCFGC, );
@@ -162,7 +162,7 @@ static void i915gm_get_cdclk(struct drm_i915_private 
*dev_priv,
 static void i945gm_get_cdclk(struct drm_i915_private *dev_priv,
 struct intel_cdclk_config *cdclk_config)
 {
-   struct pci_dev *pdev = dev_priv->drm.pdev;
+   struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
u16 gcfgc = 0;
 
pci_read_config_word(pdev, GCFGC, );
@@ -256,7 +256,7 @@ static unsigned int intel_hpll_vco(struct drm_i915_private 
*dev_priv)
 static void g33_get_cdclk(struct drm_i915_private *dev_priv,
  struct intel_cdclk_config *cdclk_config)
 {
-   struct pci_dev *pdev = dev_priv->drm.pdev;
+   struct 

[PATCH 02/15] drm/ast: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert ast to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
---
 drivers/gpu/drm/ast/ast_drv.c  |  4 ++--
 drivers/gpu/drm/ast/ast_main.c | 25 +
 drivers/gpu/drm/ast/ast_mm.c   | 17 +
 drivers/gpu/drm/ast/ast_mode.c |  5 +++--
 drivers/gpu/drm/ast/ast_post.c |  8 +---
 5 files changed, 32 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/ast/ast_drv.c b/drivers/gpu/drm/ast/ast_drv.c
index 667b450606ef..ea8164e7a6dc 100644
--- a/drivers/gpu/drm/ast/ast_drv.c
+++ b/drivers/gpu/drm/ast/ast_drv.c
@@ -147,7 +147,7 @@ static int ast_drm_freeze(struct drm_device *dev)
error = drm_mode_config_helper_suspend(dev);
if (error)
return error;
-   pci_save_state(dev->pdev);
+   pci_save_state(to_pci_dev(dev->dev));
return 0;
 }
 
@@ -162,7 +162,7 @@ static int ast_drm_resume(struct drm_device *dev)
 {
int ret;
 
-   if (pci_enable_device(dev->pdev))
+   if (pci_enable_device(to_pci_dev(dev->dev)))
return -EIO;
 
ret = ast_drm_thaw(dev);
diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
index 1b13199858cb..0ac3c2039c4b 100644
--- a/drivers/gpu/drm/ast/ast_main.c
+++ b/drivers/gpu/drm/ast/ast_main.c
@@ -67,8 +67,9 @@ uint8_t ast_get_index_reg_mask(struct ast_private *ast,
 
 static void ast_detect_config_mode(struct drm_device *dev, u32 *scu_rev)
 {
-   struct device_node *np = dev->pdev->dev.of_node;
+   struct device_node *np = dev->dev->of_node;
struct ast_private *ast = to_ast_private(dev);
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
uint32_t data, jregd0, jregd1;
 
/* Defaults */
@@ -85,7 +86,7 @@ static void ast_detect_config_mode(struct drm_device *dev, 
u32 *scu_rev)
}
 
/* Not all families have a P2A bridge */
-   if (dev->pdev->device != PCI_CHIP_AST2000)
+   if (pdev->device != PCI_CHIP_AST2000)
return;
 
/*
@@ -119,6 +120,7 @@ static void ast_detect_config_mode(struct drm_device *dev, 
u32 *scu_rev)
 static int ast_detect_chip(struct drm_device *dev, bool *need_post)
 {
struct ast_private *ast = to_ast_private(dev);
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
uint32_t jreg, scu_rev;
 
/*
@@ -143,19 +145,19 @@ static int ast_detect_chip(struct drm_device *dev, bool 
*need_post)
ast_detect_config_mode(dev, _rev);
 
/* Identify chipset */
-   if (dev->pdev->revision >= 0x50) {
+   if (pdev->revision >= 0x50) {
ast->chip = AST2600;
drm_info(dev, "AST 2600 detected\n");
-   } else if (dev->pdev->revision >= 0x40) {
+   } else if (pdev->revision >= 0x40) {
ast->chip = AST2500;
drm_info(dev, "AST 2500 detected\n");
-   } else if (dev->pdev->revision >= 0x30) {
+   } else if (pdev->revision >= 0x30) {
ast->chip = AST2400;
drm_info(dev, "AST 2400 detected\n");
-   } else if (dev->pdev->revision >= 0x20) {
+   } else if (pdev->revision >= 0x20) {
ast->chip = AST2300;
drm_info(dev, "AST 2300 detected\n");
-   } else if (dev->pdev->revision >= 0x10) {
+   } else if (pdev->revision >= 0x10) {
switch (scu_rev & 0x0300) {
case 0x0200:
ast->chip = AST1100;
@@ -265,7 +267,7 @@ static int ast_detect_chip(struct drm_device *dev, bool 
*need_post)
 
 static int ast_get_dram_info(struct drm_device *dev)
 {
-   struct device_node *np = dev->pdev->dev.of_node;
+   struct device_node *np = dev->dev->of_node;
struct ast_private *ast = to_ast_private(dev);
uint32_t mcr_cfg, mcr_scu_mpll, mcr_scu_strap;
uint32_t denum, num, div, ref_pll, dsel;
@@ -409,10 +411,9 @@ struct ast_private *ast_device_create(const struct 
drm_driver *drv,
return ast;
dev = >base;
 
-   dev->pdev = pdev;
pci_set_drvdata(pdev, dev);
 
-   ast->regs = pci_iomap(dev->pdev, 1, 0);
+   ast->regs = pci_iomap(pdev, 1, 0);
if (!ast->regs)
return ERR_PTR(-EIO);
 
@@ -421,14 +422,14 @@ struct ast_private *ast_device_create(const struct 
drm_driver *drv,
 * assume the chip has MMIO enabled by default (rev 0x20
 * and higher).
 */
-   if (!(pci_resource_flags(dev->pdev, 2) & IORESOURCE_IO)) {
+   if (!(pci_resource_flags(pdev, 2) & IORESOURCE_IO)) {
drm_info(dev, "platform has no IO space, trying MMIO\n");
ast->ioregs = ast->regs + AST_IO_MM_OFFSET;
}
 
/* "map" IO regs if the above hasn't done so already */
if (!ast->ioregs) {
-   ast->ioregs = pci_iomap(dev->pdev, 2, 0);
+   ast->ioregs = pci_iomap(pdev, 2, 0);
if (!ast->ioregs)
  

[PATCH 14/15] drm/vmwgfx: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert vmwgfx to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Roland Scheidegger 
---
 drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c |  8 
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c| 27 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_fb.c |  2 +-
 3 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c 
b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
index 9a9fe10d829b..83a8d34704ea 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
@@ -1230,7 +1230,7 @@ int vmw_cmdbuf_set_pool_size(struct vmw_cmdbuf_man *man,
 
/* First, try to allocate a huge chunk of DMA memory */
size = PAGE_ALIGN(size);
-   man->map = dma_alloc_coherent(_priv->dev->pdev->dev, size,
+   man->map = dma_alloc_coherent(dev_priv->dev->dev, size,
  >handle, GFP_KERNEL);
if (man->map) {
man->using_mob = false;
@@ -1313,7 +1313,7 @@ struct vmw_cmdbuf_man *vmw_cmdbuf_man_create(struct 
vmw_private *dev_priv)
man->num_contexts = (dev_priv->capabilities & SVGA_CAP_HP_CMD_QUEUE) ?
2 : 1;
man->headers = dma_pool_create("vmwgfx cmdbuf",
-  _priv->dev->pdev->dev,
+  dev_priv->dev->dev,
   sizeof(SVGACBHeader),
   64, PAGE_SIZE);
if (!man->headers) {
@@ -1322,7 +1322,7 @@ struct vmw_cmdbuf_man *vmw_cmdbuf_man_create(struct 
vmw_private *dev_priv)
}
 
man->dheaders = dma_pool_create("vmwgfx inline cmdbuf",
-   _priv->dev->pdev->dev,
+   dev_priv->dev->dev,
sizeof(struct vmw_cmdbuf_dheader),
64, PAGE_SIZE);
if (!man->dheaders) {
@@ -1387,7 +1387,7 @@ void vmw_cmdbuf_remove_pool(struct vmw_cmdbuf_man *man)
ttm_bo_put(man->cmd_space);
man->cmd_space = NULL;
} else {
-   dma_free_coherent(>dev_priv->dev->pdev->dev,
+   dma_free_coherent(man->dev_priv->dev->dev,
  man->size, man->map, man->handle);
}
 }
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c 
b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index 216daf93022c..e63e08f5b14f 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -652,6 +652,7 @@ static int vmw_driver_load(struct drm_device *dev, unsigned 
long chipset)
enum vmw_res_type i;
bool refuse_dma = false;
char host_log[100] = {0};
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
 
dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);
if (unlikely(!dev_priv)) {
@@ -659,7 +660,7 @@ static int vmw_driver_load(struct drm_device *dev, unsigned 
long chipset)
return -ENOMEM;
}
 
-   pci_set_master(dev->pdev);
+   pci_set_master(pdev);
 
dev_priv->dev = dev;
dev_priv->vmw_chipset = chipset;
@@ -688,9 +689,9 @@ static int vmw_driver_load(struct drm_device *dev, unsigned 
long chipset)
 
dev_priv->used_memory_size = 0;
 
-   dev_priv->io_start = pci_resource_start(dev->pdev, 0);
-   dev_priv->vram_start = pci_resource_start(dev->pdev, 1);
-   dev_priv->mmio_start = pci_resource_start(dev->pdev, 2);
+   dev_priv->io_start = pci_resource_start(pdev, 0);
+   dev_priv->vram_start = pci_resource_start(pdev, 1);
+   dev_priv->mmio_start = pci_resource_start(pdev, 2);
 
dev_priv->assume_16bpp = !!vmw_assume_16bpp;
 
@@ -840,7 +841,7 @@ static int vmw_driver_load(struct drm_device *dev, unsigned 
long chipset)
 
dev->dev_private = dev_priv;
 
-   ret = pci_request_regions(dev->pdev, "vmwgfx probe");
+   ret = pci_request_regions(pdev, "vmwgfx probe");
dev_priv->stealth = (ret != 0);
if (dev_priv->stealth) {
/**
@@ -849,7 +850,7 @@ static int vmw_driver_load(struct drm_device *dev, unsigned 
long chipset)
 
DRM_INFO("It appears like vesafb is loaded. "
 "Ignore above error if any.\n");
-   ret = pci_request_region(dev->pdev, 2, "vmwgfx stealth probe");
+   ret = pci_request_region(pdev, 2, "vmwgfx stealth probe");
if (unlikely(ret != 0)) {
DRM_ERROR("Failed reserving the SVGA MMIO resource.\n");
goto out_no_device;
@@ -857,7 +858,7 @@ static int vmw_driver_load(struct drm_device *dev, unsigned 
long chipset)
}
 
if (dev_priv->capabilities & SVGA_CAP_IRQMASK) {
-   ret = vmw_irq_install(dev, dev->pdev->irq);
+   ret = vmw_irq_install(dev, pdev->irq);
if 

[PATCH 10/15] drm/qxl: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert qxl to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.c   | 2 +-
 drivers/gpu/drm/qxl/qxl_ioctl.c | 3 ++-
 drivers/gpu/drm/qxl/qxl_irq.c   | 3 ++-
 drivers/gpu/drm/qxl/qxl_kms.c   | 1 -
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
index 6e7f16f4cec7..fb5f6a5e81d7 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.c
+++ b/drivers/gpu/drm/qxl/qxl_drv.c
@@ -163,7 +163,7 @@ DEFINE_DRM_GEM_FOPS(qxl_fops);
 
 static int qxl_drm_freeze(struct drm_device *dev)
 {
-   struct pci_dev *pdev = dev->pdev;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
struct qxl_device *qdev = to_qxl(dev);
int ret;
 
diff --git a/drivers/gpu/drm/qxl/qxl_ioctl.c b/drivers/gpu/drm/qxl/qxl_ioctl.c
index 16e1e589508e..b6075f452b9e 100644
--- a/drivers/gpu/drm/qxl/qxl_ioctl.c
+++ b/drivers/gpu/drm/qxl/qxl_ioctl.c
@@ -370,13 +370,14 @@ static int qxl_clientcap_ioctl(struct drm_device *dev, 
void *data,
  struct drm_file *file_priv)
 {
struct qxl_device *qdev = to_qxl(dev);
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
struct drm_qxl_clientcap *param = data;
int byte, idx;
 
byte = param->index / 8;
idx = param->index % 8;
 
-   if (dev->pdev->revision < 4)
+   if (pdev->revision < 4)
return -ENOSYS;
 
if (byte >= 58)
diff --git a/drivers/gpu/drm/qxl/qxl_irq.c b/drivers/gpu/drm/qxl/qxl_irq.c
index 1ba5a702d763..ddf6588a2a38 100644
--- a/drivers/gpu/drm/qxl/qxl_irq.c
+++ b/drivers/gpu/drm/qxl/qxl_irq.c
@@ -81,6 +81,7 @@ static void qxl_client_monitors_config_work_func(struct 
work_struct *work)
 
 int qxl_irq_init(struct qxl_device *qdev)
 {
+   struct pci_dev *pdev = to_pci_dev(qdev->ddev.dev);
int ret;
 
init_waitqueue_head(>display_event);
@@ -93,7 +94,7 @@ int qxl_irq_init(struct qxl_device *qdev)
atomic_set(>irq_received_cursor, 0);
atomic_set(>irq_received_io_cmd, 0);
qdev->irq_received_error = 0;
-   ret = drm_irq_install(>ddev, qdev->ddev.pdev->irq);
+   ret = drm_irq_install(>ddev, pdev->irq);
qdev->ram_header->int_mask = QXL_INTERRUPT_MASK;
if (unlikely(ret != 0)) {
DRM_ERROR("Failed installing irq: %d\n", ret);
diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
index 228e2b9198f1..4a60a52ab62e 100644
--- a/drivers/gpu/drm/qxl/qxl_kms.c
+++ b/drivers/gpu/drm/qxl/qxl_kms.c
@@ -111,7 +111,6 @@ int qxl_device_init(struct qxl_device *qdev,
 {
int r, sb;
 
-   qdev->ddev.pdev = pdev;
pci_set_drvdata(pdev, >ddev);
 
mutex_init(>gem.mutex);
-- 
2.29.2



[PATCH 11/15] drm/radeon: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert radeon to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Alex Deucher 
Cc: Christian König 
---
 drivers/gpu/drm/radeon/atombios_encoders.c|  6 +-
 drivers/gpu/drm/radeon/r100.c | 27 +++---
 drivers/gpu/drm/radeon/radeon.h   | 32 +++
 drivers/gpu/drm/radeon/radeon_atombios.c  | 89 ++-
 drivers/gpu/drm/radeon/radeon_bios.c  |  6 +-
 drivers/gpu/drm/radeon/radeon_combios.c   | 55 ++--
 drivers/gpu/drm/radeon/radeon_cs.c|  3 +-
 drivers/gpu/drm/radeon/radeon_device.c| 17 ++--
 drivers/gpu/drm/radeon/radeon_display.c   |  2 +-
 drivers/gpu/drm/radeon/radeon_drv.c   |  3 +-
 drivers/gpu/drm/radeon/radeon_fb.c|  2 +-
 drivers/gpu/drm/radeon/radeon_gem.c   |  6 +-
 drivers/gpu/drm/radeon/radeon_i2c.c   |  2 +-
 drivers/gpu/drm/radeon/radeon_irq_kms.c   |  2 +-
 drivers/gpu/drm/radeon/radeon_kms.c   | 20 ++---
 .../gpu/drm/radeon/radeon_legacy_encoders.c   |  6 +-
 drivers/gpu/drm/radeon/rs780_dpm.c|  7 +-
 17 files changed, 144 insertions(+), 141 deletions(-)

diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c 
b/drivers/gpu/drm/radeon/atombios_encoders.c
index cc5ee1b3af84..a9ae8b6c5991 100644
--- a/drivers/gpu/drm/radeon/atombios_encoders.c
+++ b/drivers/gpu/drm/radeon/atombios_encoders.c
@@ -2065,9 +2065,9 @@ atombios_apply_encoder_quirks(struct drm_encoder *encoder,
struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc);
 
/* Funky macbooks */
-   if ((dev->pdev->device == 0x71C5) &&
-   (dev->pdev->subsystem_vendor == 0x106b) &&
-   (dev->pdev->subsystem_device == 0x0080)) {
+   if ((rdev->pdev->device == 0x71C5) &&
+   (rdev->pdev->subsystem_vendor == 0x106b) &&
+   (rdev->pdev->subsystem_device == 0x0080)) {
if (radeon_encoder->devices & ATOM_DEVICE_LCD1_SUPPORT) {
uint32_t lvtma_bit_depth_control = 
RREG32(AVIVO_LVTMA_BIT_DEPTH_CONTROL);
 
diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
index 24c8db673931..984eeb893d76 100644
--- a/drivers/gpu/drm/radeon/r100.c
+++ b/drivers/gpu/drm/radeon/r100.c
@@ -2611,7 +2611,6 @@ int r100_asic_reset(struct radeon_device *rdev, bool hard)
 
 void r100_set_common_regs(struct radeon_device *rdev)
 {
-   struct drm_device *dev = rdev->ddev;
bool force_dac2 = false;
u32 tmp;
 
@@ -2629,7 +2628,7 @@ void r100_set_common_regs(struct radeon_device *rdev)
 * don't report it in the bios connector
 * table.
 */
-   switch (dev->pdev->device) {
+   switch (rdev->pdev->device) {
/* RN50 */
case 0x515e:
case 0x5969:
@@ -2639,17 +2638,17 @@ void r100_set_common_regs(struct radeon_device *rdev)
case 0x5159:
case 0x515a:
/* DELL triple head servers */
-   if ((dev->pdev->subsystem_vendor == 0x1028 /* DELL */) &&
-   ((dev->pdev->subsystem_device == 0x016c) ||
-(dev->pdev->subsystem_device == 0x016d) ||
-(dev->pdev->subsystem_device == 0x016e) ||
-(dev->pdev->subsystem_device == 0x016f) ||
-(dev->pdev->subsystem_device == 0x0170) ||
-(dev->pdev->subsystem_device == 0x017d) ||
-(dev->pdev->subsystem_device == 0x017e) ||
-(dev->pdev->subsystem_device == 0x0183) ||
-(dev->pdev->subsystem_device == 0x018a) ||
-(dev->pdev->subsystem_device == 0x019a)))
+   if ((rdev->pdev->subsystem_vendor == 0x1028 /* DELL */) &&
+   ((rdev->pdev->subsystem_device == 0x016c) ||
+(rdev->pdev->subsystem_device == 0x016d) ||
+(rdev->pdev->subsystem_device == 0x016e) ||
+(rdev->pdev->subsystem_device == 0x016f) ||
+(rdev->pdev->subsystem_device == 0x0170) ||
+(rdev->pdev->subsystem_device == 0x017d) ||
+(rdev->pdev->subsystem_device == 0x017e) ||
+(rdev->pdev->subsystem_device == 0x0183) ||
+(rdev->pdev->subsystem_device == 0x018a) ||
+(rdev->pdev->subsystem_device == 0x019a)))
force_dac2 = true;
break;
}
@@ -2797,7 +2796,7 @@ void r100_vram_init_sizes(struct radeon_device *rdev)
rdev->mc.real_vram_size = 8192 * 1024;
WREG32(RADEON_CONFIG_MEMSIZE, rdev->mc.real_vram_size);
}
-   /* Fix for RN50, M6, M7 with 8/16/32(??) MBs of VRAM - 
+   /* Fix for RN50, M6, M7 with 8/16/32(??) MBs of VRAM -
 * Novell bug 204882 + along with lots of ubuntu ones
  

[PATCH 15/15] drm: Upcast struct drm_device.dev to struct pci_device; replace pdev

2020-11-24 Thread Thomas Zimmermann
We have DRM drivers based on USB, SPI and platform devices. All of them
are fine with storing their device reference in struct drm_device.dev.
PCI devices should be no exception. Therefore struct drm_device.pdev is
deprecated.

Instead upcast from struct drm_device.dev with to_pci_dev(). PCI-specific
code can use dev_is_pci() to test for a PCI device. This patch changes
the DRM core code and documentation accordingly. Struct drm_device.pdev
is being moved to legacy status.

Signed-off-by: Thomas Zimmermann 
---
 drivers/gpu/drm/drm_agpsupport.c |  9 ++---
 drivers/gpu/drm/drm_bufs.c   |  4 ++--
 drivers/gpu/drm/drm_edid.c   |  7 ++-
 drivers/gpu/drm/drm_irq.c| 12 +++-
 drivers/gpu/drm/drm_pci.c| 26 +++---
 drivers/gpu/drm/drm_vm.c |  2 +-
 include/drm/drm_device.h | 12 +---
 7 files changed, 46 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/drm_agpsupport.c b/drivers/gpu/drm/drm_agpsupport.c
index 4c7ad46fdd21..a4040fe4f4ba 100644
--- a/drivers/gpu/drm/drm_agpsupport.c
+++ b/drivers/gpu/drm/drm_agpsupport.c
@@ -103,11 +103,13 @@ int drm_agp_info_ioctl(struct drm_device *dev, void *data,
  */
 int drm_agp_acquire(struct drm_device *dev)
 {
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
+
if (!dev->agp)
return -ENODEV;
if (dev->agp->acquired)
return -EBUSY;
-   dev->agp->bridge = agp_backend_acquire(dev->pdev);
+   dev->agp->bridge = agp_backend_acquire(pdev);
if (!dev->agp->bridge)
return -ENODEV;
dev->agp->acquired = 1;
@@ -402,14 +404,15 @@ int drm_agp_free_ioctl(struct drm_device *dev, void *data,
  */
 struct drm_agp_head *drm_agp_init(struct drm_device *dev)
 {
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
struct drm_agp_head *head = NULL;
 
head = kzalloc(sizeof(*head), GFP_KERNEL);
if (!head)
return NULL;
-   head->bridge = agp_find_bridge(dev->pdev);
+   head->bridge = agp_find_bridge(pdev);
if (!head->bridge) {
-   head->bridge = agp_backend_acquire(dev->pdev);
+   head->bridge = agp_backend_acquire(pdev);
if (!head->bridge) {
kfree(head);
return NULL;
diff --git a/drivers/gpu/drm/drm_bufs.c b/drivers/gpu/drm/drm_bufs.c
index 7a01d0918861..1da8b360b60a 100644
--- a/drivers/gpu/drm/drm_bufs.c
+++ b/drivers/gpu/drm/drm_bufs.c
@@ -325,7 +325,7 @@ static int drm_addmap_core(struct drm_device *dev, 
resource_size_t offset,
 * As we're limiting the address to 2^32-1 (or less),
 * casting it down to 32 bits is no problem, but we
 * need to point to a 64bit variable first. */
-   map->handle = dma_alloc_coherent(&dev->pdev->dev,
+   map->handle = dma_alloc_coherent(dev->dev,
 map->size,
 &map->offset,
 GFP_KERNEL);
@@ -555,7 +555,7 @@ int drm_legacy_rmmap_locked(struct drm_device *dev, struct 
drm_local_map *map)
case _DRM_SCATTER_GATHER:
break;
case _DRM_CONSISTENT:
-   dma_free_coherent(&dev->pdev->dev,
+   dma_free_coherent(dev->dev,
  map->size,
  map->handle,
  map->offset);
diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 74f5a3197214..555a04ce2179 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -2075,9 +2076,13 @@ EXPORT_SYMBOL(drm_get_edid);
 struct edid *drm_get_edid_switcheroo(struct drm_connector *connector,
 struct i2c_adapter *adapter)
 {
-   struct pci_dev *pdev = connector->dev->pdev;
+   struct drm_device *dev = connector->dev;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
struct edid *edid;
 
+   if (drm_WARN_ON_ONCE(dev, !dev_is_pci(dev->dev)))
+   return NULL;
+
vga_switcheroo_lock_ddc(pdev);
edid = drm_get_edid(connector, adapter);
vga_switcheroo_unlock_ddc(pdev);
diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
index 09d6e9e2e075..22986a9a593b 100644
--- a/drivers/gpu/drm/drm_irq.c
+++ b/drivers/gpu/drm/drm_irq.c
@@ -122,7 +122,7 @@ int drm_irq_install(struct drm_device *dev, int irq)
dev->driver->irq_preinstall(dev);
 
/* PCI devices require shared interrupts. */
-   if (dev->pdev)
+   if (dev_is_pci(dev->dev))
sh_flags = IRQF_SHARED;
 
ret = request_irq(irq, dev->driver->irq_handler,
@@ -140,7 +140,7 @@ int drm_irq_install(struct drm_device *dev, int irq)
if (ret < 0) {

[PATCH 12/15] drm/vboxvideo: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert vboxvideo to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Hans de Goede 
---
 drivers/gpu/drm/vboxvideo/vbox_drv.c  | 11 ++-
 drivers/gpu/drm/vboxvideo/vbox_irq.c  |  4 +++-
 drivers/gpu/drm/vboxvideo/vbox_main.c |  8 ++--
 drivers/gpu/drm/vboxvideo/vbox_ttm.c  |  7 ---
 4 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/vboxvideo/vbox_drv.c 
b/drivers/gpu/drm/vboxvideo/vbox_drv.c
index f3eac72cb46e..e534896b6cfd 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_drv.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_drv.c
@@ -51,7 +51,6 @@ static int vbox_pci_probe(struct pci_dev *pdev, const struct 
pci_device_id *ent)
if (IS_ERR(vbox))
return PTR_ERR(vbox);
 
-   vbox->ddev.pdev = pdev;
pci_set_drvdata(pdev, vbox);
mutex_init(&vbox->hw_mutex);
 
@@ -109,15 +108,16 @@ static void vbox_pci_remove(struct pci_dev *pdev)
 static int vbox_pm_suspend(struct device *dev)
 {
struct vbox_private *vbox = dev_get_drvdata(dev);
+   struct pci_dev *pdev = to_pci_dev(dev);
int error;
 
error = drm_mode_config_helper_suspend(&vbox->ddev);
if (error)
return error;
 
-   pci_save_state(vbox->ddev.pdev);
-   pci_disable_device(vbox->ddev.pdev);
-   pci_set_power_state(vbox->ddev.pdev, PCI_D3hot);
+   pci_save_state(pdev);
+   pci_disable_device(pdev);
+   pci_set_power_state(pdev, PCI_D3hot);
 
return 0;
 }
@@ -125,8 +125,9 @@ static int vbox_pm_suspend(struct device *dev)
 static int vbox_pm_resume(struct device *dev)
 {
struct vbox_private *vbox = dev_get_drvdata(dev);
+   struct pci_dev *pdev = to_pci_dev(dev);
 
-   if (pci_enable_device(vbox->ddev.pdev))
+   if (pci_enable_device(pdev))
return -EIO;
 
return drm_mode_config_helper_resume(&vbox->ddev);
diff --git a/drivers/gpu/drm/vboxvideo/vbox_irq.c 
b/drivers/gpu/drm/vboxvideo/vbox_irq.c
index 631657fa554f..b3ded68603ba 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_irq.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_irq.c
@@ -170,10 +170,12 @@ static void vbox_hotplug_worker(struct work_struct *work)
 
 int vbox_irq_init(struct vbox_private *vbox)
 {
+   struct pci_dev *pdev = to_pci_dev(vbox->ddev.dev);
+
INIT_WORK(&vbox->hotplug_work, vbox_hotplug_worker);
vbox_update_mode_hints(vbox);
 
-   return drm_irq_install(&vbox->ddev, vbox->ddev.pdev->irq);
+   return drm_irq_install(&vbox->ddev, pdev->irq);
 }
 
 void vbox_irq_fini(struct vbox_private *vbox)
diff --git a/drivers/gpu/drm/vboxvideo/vbox_main.c 
b/drivers/gpu/drm/vboxvideo/vbox_main.c
index d68d9bad7674..f28779715ccd 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_main.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_main.c
@@ -8,7 +8,9 @@
  *  Hans de Goede 
  */
 
+#include 
 #include 
+
 #include 
 #include 
 #include 
@@ -30,6 +32,7 @@ void vbox_report_caps(struct vbox_private *vbox)
 
 static int vbox_accel_init(struct vbox_private *vbox)
 {
+   struct pci_dev *pdev = to_pci_dev(vbox->ddev.dev);
struct vbva_buffer *vbva;
unsigned int i;
 
@@ -41,7 +44,7 @@ static int vbox_accel_init(struct vbox_private *vbox)
/* Take a command buffer for each screen from the end of usable VRAM. */
vbox->available_vram_size -= vbox->num_crtcs * VBVA_MIN_BUFFER_SIZE;
 
-   vbox->vbva_buffers = pci_iomap_range(vbox->ddev.pdev, 0,
+   vbox->vbva_buffers = pci_iomap_range(pdev, 0,
 vbox->available_vram_size,
 vbox->num_crtcs *
 VBVA_MIN_BUFFER_SIZE);
@@ -106,6 +109,7 @@ bool vbox_check_supported(u16 id)
 
 int vbox_hw_init(struct vbox_private *vbox)
 {
+   struct pci_dev *pdev = to_pci_dev(vbox->ddev.dev);
int ret = -ENOMEM;
 
vbox->full_vram_size = inl(VBE_DISPI_IOPORT_DATA);
@@ -115,7 +119,7 @@ int vbox_hw_init(struct vbox_private *vbox)
 
/* Map guest-heap at end of vram */
vbox->guest_heap =
-   pci_iomap_range(vbox->ddev.pdev, 0, GUEST_HEAP_OFFSET(vbox),
+   pci_iomap_range(pdev, 0, GUEST_HEAP_OFFSET(vbox),
GUEST_HEAP_SIZE);
if (!vbox->guest_heap)
return -ENOMEM;
diff --git a/drivers/gpu/drm/vboxvideo/vbox_ttm.c 
b/drivers/gpu/drm/vboxvideo/vbox_ttm.c
index f5a06675da43..0066a3c1dfc9 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_ttm.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_ttm.c
@@ -15,8 +15,9 @@ int vbox_mm_init(struct vbox_private *vbox)
struct drm_vram_mm *vmm;
int ret;
struct drm_device *dev = &vbox->ddev;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
 
-   vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(dev->pdev, 0),
+   vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(pdev, 0),
   

[PATCH 03/15] drm/bochs: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert bochs to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Gerd Hoffmann 
---
 drivers/gpu/drm/bochs/bochs_drv.c | 1 -
 drivers/gpu/drm/bochs/bochs_hw.c  | 4 ++--
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/bochs/bochs_drv.c 
b/drivers/gpu/drm/bochs/bochs_drv.c
index fd454225fd19..b469624fe40d 100644
--- a/drivers/gpu/drm/bochs/bochs_drv.c
+++ b/drivers/gpu/drm/bochs/bochs_drv.c
@@ -121,7 +121,6 @@ static int bochs_pci_probe(struct pci_dev *pdev,
if (ret)
goto err_free_dev;
 
-   dev->pdev = pdev;
pci_set_drvdata(pdev, dev);
 
ret = bochs_load(dev);
diff --git a/drivers/gpu/drm/bochs/bochs_hw.c b/drivers/gpu/drm/bochs/bochs_hw.c
index dce4672e3fc8..2d7380a9890e 100644
--- a/drivers/gpu/drm/bochs/bochs_hw.c
+++ b/drivers/gpu/drm/bochs/bochs_hw.c
@@ -110,7 +110,7 @@ int bochs_hw_load_edid(struct bochs_device *bochs)
 int bochs_hw_init(struct drm_device *dev)
 {
struct bochs_device *bochs = dev->dev_private;
-   struct pci_dev *pdev = dev->pdev;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
unsigned long addr, size, mem, ioaddr, iosize;
u16 id;
 
@@ -201,7 +201,7 @@ void bochs_hw_fini(struct drm_device *dev)
release_region(VBE_DISPI_IOPORT_INDEX, 2);
if (bochs->fb_map)
iounmap(bochs->fb_map);
-   pci_release_regions(dev->pdev);
+   pci_release_regions(to_pci_dev(dev->dev));
kfree(bochs->edid);
 }
 
-- 
2.29.2



[PATCH 13/15] drm/virtgpu: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert virtgpu to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_drv.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c 
b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 27f13bd29c13..a21dc3ad6f88 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -54,7 +54,6 @@ static int virtio_gpu_pci_quirk(struct drm_device *dev, 
struct virtio_device *vd
DRM_INFO("pci: %s detected at %s\n",
 vga ? "virtio-vga" : "virtio-gpu-pci",
 pname);
-   dev->pdev = pdev;
if (vga)
drm_fb_helper_remove_conflicting_pci_framebuffers(pdev,
  
"virtiodrmfb");
-- 
2.29.2



[PATCH 05/15] drm/gma500: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert gma500 to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Patrik Jakobsson 
---
 drivers/gpu/drm/gma500/cdv_device.c| 30 +++---
 drivers/gpu/drm/gma500/cdv_intel_crt.c |  3 +-
 drivers/gpu/drm/gma500/cdv_intel_lvds.c|  4 +--
 drivers/gpu/drm/gma500/framebuffer.c   |  9 +++---
 drivers/gpu/drm/gma500/gma_device.c|  3 +-
 drivers/gpu/drm/gma500/gma_display.c   |  4 +--
 drivers/gpu/drm/gma500/gtt.c   | 20 ++--
 drivers/gpu/drm/gma500/intel_bios.c|  6 ++--
 drivers/gpu/drm/gma500/intel_gmbus.c   |  4 +--
 drivers/gpu/drm/gma500/intel_i2c.c |  2 +-
 drivers/gpu/drm/gma500/mdfld_device.c  |  4 ++-
 drivers/gpu/drm/gma500/mdfld_dsi_dpi.c |  8 ++---
 drivers/gpu/drm/gma500/mid_bios.c  |  9 --
 drivers/gpu/drm/gma500/oaktrail_device.c   |  5 +--
 drivers/gpu/drm/gma500/oaktrail_lvds.c |  2 +-
 drivers/gpu/drm/gma500/oaktrail_lvds_i2c.c |  2 +-
 drivers/gpu/drm/gma500/opregion.c  |  3 +-
 drivers/gpu/drm/gma500/power.c | 13 
 drivers/gpu/drm/gma500/psb_drv.c   | 16 +-
 drivers/gpu/drm/gma500/psb_drv.h   |  8 ++---
 drivers/gpu/drm/gma500/psb_intel_lvds.c|  6 ++--
 drivers/gpu/drm/gma500/psb_intel_sdvo.c|  2 +-
 drivers/gpu/drm/gma500/tc35876x-dsi-lvds.c | 36 +++---
 23 files changed, 109 insertions(+), 90 deletions(-)

diff --git a/drivers/gpu/drm/gma500/cdv_device.c 
b/drivers/gpu/drm/gma500/cdv_device.c
index e75293e4a52f..19e055dbd4c2 100644
--- a/drivers/gpu/drm/gma500/cdv_device.c
+++ b/drivers/gpu/drm/gma500/cdv_device.c
@@ -95,13 +95,14 @@ static u32 cdv_get_max_backlight(struct drm_device *dev)
 static int cdv_get_brightness(struct backlight_device *bd)
 {
struct drm_device *dev = bl_get_data(bd);
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
u32 val = REG_READ(BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
 
if (cdv_backlight_combination_mode(dev)) {
u8 lbpc;
 
val &= ~1;
-   pci_read_config_byte(dev->pdev, 0xF4, &lbpc);
+   pci_read_config_byte(pdev, 0xF4, &lbpc);
val *= lbpc;
}
return (val * 100)/cdv_get_max_backlight(dev);
@@ -111,6 +112,7 @@ static int cdv_get_brightness(struct backlight_device *bd)
 static int cdv_set_brightness(struct backlight_device *bd)
 {
struct drm_device *dev = bl_get_data(bd);
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
int level = bd->props.brightness;
u32 blc_pwm_ctl;
 
@@ -128,7 +130,7 @@ static int cdv_set_brightness(struct backlight_device *bd)
lbpc = level * 0xfe / max + 1;
level /= lbpc;
 
-   pci_write_config_byte(dev->pdev, 0xF4, lbpc);
+   pci_write_config_byte(pdev, 0xF4, lbpc);
}
 
blc_pwm_ctl = REG_READ(BLC_PWM_CTL) & ~BACKLIGHT_DUTY_CYCLE_MASK;
@@ -205,8 +207,9 @@ static inline void CDV_MSG_WRITE32(int domain, uint port, 
uint offset,
 static void cdv_init_pm(struct drm_device *dev)
 {
struct drm_psb_private *dev_priv = dev->dev_private;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
u32 pwr_cnt;
-   int domain = pci_domain_nr(dev->pdev->bus);
+   int domain = pci_domain_nr(pdev->bus);
int i;
 
dev_priv->apm_base = CDV_MSG_READ32(domain, PSB_PUNIT_PORT,
@@ -234,6 +237,8 @@ static void cdv_init_pm(struct drm_device *dev)
 
 static void cdv_errata(struct drm_device *dev)
 {
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
+
/* Disable bonus launch.
 *  CPU and GPU competes for memory and display misses updates and
 *  flickers. Worst with dual core, dual displays.
@@ -242,7 +247,7 @@ static void cdv_errata(struct drm_device *dev)
 *  Bonus Launch to work around the issue, by degrading
 *  performance.
 */
-CDV_MSG_WRITE32(pci_domain_nr(dev->pdev->bus), 3, 0x30, 0x08027108);
+CDV_MSG_WRITE32(pci_domain_nr(pdev->bus), 3, 0x30, 0x08027108);
 }
 
 /**
@@ -255,12 +260,13 @@ static void cdv_errata(struct drm_device *dev)
 static int cdv_save_display_registers(struct drm_device *dev)
 {
struct drm_psb_private *dev_priv = dev->dev_private;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
struct psb_save_area *regs = &dev_priv->regs;
struct drm_connector *connector;
 
dev_dbg(dev->dev, "Saving GPU registers.\n");
 
-   pci_read_config_byte(dev->pdev, 0xF4, &regs->cdv.saveLBB);
+   pci_read_config_byte(pdev, 0xF4, &regs->cdv.saveLBB);
 
regs->cdv.saveDSPCLK_GATE_D = REG_READ(DSPCLK_GATE_D);
regs->cdv.saveRAMCLK_GATE_D = REG_READ(RAMCLK_GATE_D);
@@ -309,11 +315,12 @@ static int cdv_save_display_registers(struct drm_device 
*dev)
 static int cdv_restore_display_registers(struct drm_device *dev)
 {
struct drm_psb_private 

[PATCH 09/15] drm/nouveau: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert nouveau to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Ben Skeggs 
---
 drivers/gpu/drm/nouveau/dispnv04/arb.c  | 12 +++-
 drivers/gpu/drm/nouveau/dispnv04/disp.h | 14 --
 drivers/gpu/drm/nouveau/dispnv04/hw.c   | 10 ++
 drivers/gpu/drm/nouveau/nouveau_abi16.c |  7 ---
 drivers/gpu/drm/nouveau/nouveau_acpi.c  |  2 +-
 drivers/gpu/drm/nouveau/nouveau_bios.c  | 11 ---
 drivers/gpu/drm/nouveau/nouveau_connector.c | 10 ++
 drivers/gpu/drm/nouveau/nouveau_drm.c   |  5 ++---
 drivers/gpu/drm/nouveau/nouveau_fbcon.c |  6 --
 drivers/gpu/drm/nouveau/nouveau_vga.c   | 20 
 10 files changed, 58 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/dispnv04/arb.c 
b/drivers/gpu/drm/nouveau/dispnv04/arb.c
index 9d4a2d97507e..1d3542d6006b 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/arb.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/arb.c
@@ -200,16 +200,17 @@ nv04_update_arb(struct drm_device *dev, int VClk, int bpp,
int MClk = nouveau_hw_get_clock(dev, PLL_MEMORY);
int NVClk = nouveau_hw_get_clock(dev, PLL_CORE);
uint32_t cfg1 = nvif_rd32(device, NV04_PFB_CFG1);
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
 
sim_data.pclk_khz = VClk;
sim_data.mclk_khz = MClk;
sim_data.nvclk_khz = NVClk;
sim_data.bpp = bpp;
sim_data.two_heads = nv_two_heads(dev);
-   if ((dev->pdev->device & 0x) == 0x01a0 /*CHIPSET_NFORCE*/ ||
-   (dev->pdev->device & 0x) == 0x01f0 /*CHIPSET_NFORCE2*/) {
+   if ((pdev->device & 0x) == 0x01a0 /*CHIPSET_NFORCE*/ ||
+   (pdev->device & 0x) == 0x01f0 /*CHIPSET_NFORCE2*/) {
uint32_t type;
-   int domain = pci_domain_nr(dev->pdev->bus);
+   int domain = pci_domain_nr(pdev->bus);
 
pci_read_config_dword(pci_get_domain_bus_and_slot(domain, 0, 1),
  0x7c, &type);
@@ -251,11 +252,12 @@ void
 nouveau_calc_arb(struct drm_device *dev, int vclk, int bpp, int *burst, int 
*lwm)
 {
struct nouveau_drm *drm = nouveau_drm(dev);
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
 
if (drm->client.device.info.family < NV_DEVICE_INFO_V0_KELVIN)
nv04_update_arb(dev, vclk, bpp, burst, lwm);
-   else if ((dev->pdev->device & 0xfff0) == 0x0240 /*CHIPSET_C51*/ ||
-(dev->pdev->device & 0xfff0) == 0x03d0 /*CHIPSET_C512*/) {
+   else if ((pdev->device & 0xfff0) == 0x0240 /*CHIPSET_C51*/ ||
+(pdev->device & 0xfff0) == 0x03d0 /*CHIPSET_C512*/) {
*burst = 128;
*lwm = 0x0480;
} else
diff --git a/drivers/gpu/drm/nouveau/dispnv04/disp.h 
b/drivers/gpu/drm/nouveau/dispnv04/disp.h
index 5ace5e906949..f0a24126641a 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/disp.h
+++ b/drivers/gpu/drm/nouveau/dispnv04/disp.h
@@ -130,7 +130,7 @@ static inline bool
 nv_two_heads(struct drm_device *dev)
 {
struct nouveau_drm *drm = nouveau_drm(dev);
-   const int impl = dev->pdev->device & 0x0ff0;
+   const int impl = to_pci_dev(dev->dev)->device & 0x0ff0;
 
if (drm->client.device.info.family >= NV_DEVICE_INFO_V0_CELSIUS && impl 
!= 0x0100 &&
impl != 0x0150 && impl != 0x01a0 && impl != 0x0200)
@@ -142,14 +142,14 @@ nv_two_heads(struct drm_device *dev)
 static inline bool
 nv_gf4_disp_arch(struct drm_device *dev)
 {
-   return nv_two_heads(dev) && (dev->pdev->device & 0x0ff0) != 0x0110;
+   return nv_two_heads(dev) && (to_pci_dev(dev->dev)->device & 0x0ff0) != 
0x0110;
 }
 
 static inline bool
 nv_two_reg_pll(struct drm_device *dev)
 {
struct nouveau_drm *drm = nouveau_drm(dev);
-   const int impl = dev->pdev->device & 0x0ff0;
+   const int impl = to_pci_dev(dev->dev)->device & 0x0ff0;
 
if (impl == 0x0310 || impl == 0x0340 || drm->client.device.info.family 
>= NV_DEVICE_INFO_V0_CURIE)
return true;
@@ -160,9 +160,11 @@ static inline bool
 nv_match_device(struct drm_device *dev, unsigned device,
unsigned sub_vendor, unsigned sub_device)
 {
-   return dev->pdev->device == device &&
-   dev->pdev->subsystem_vendor == sub_vendor &&
-   dev->pdev->subsystem_device == sub_device;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
+
+   return pdev->device == device &&
+   pdev->subsystem_vendor == sub_vendor &&
+   pdev->subsystem_device == sub_device;
 }
 
 #include 
diff --git a/drivers/gpu/drm/nouveau/dispnv04/hw.c 
b/drivers/gpu/drm/nouveau/dispnv04/hw.c
index b674d68ef28a..f7d35657aa64 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/hw.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/hw.c
@@ -214,14 +214,15 @@ nouveau_hw_pllvals_to_clk(struct nvkm_pll_vals *pv)
 int
 nouveau_hw_get_clock(struct 

[PATCH 08/15] drm/mgag200: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert mgag200 to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
---
 drivers/gpu/drm/mgag200/mgag200_drv.c | 20 +++-
 drivers/gpu/drm/mgag200/mgag200_i2c.c |  2 +-
 drivers/gpu/drm/mgag200/mgag200_mm.c  | 10 ++
 3 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.c 
b/drivers/gpu/drm/mgag200/mgag200_drv.c
index a977c9f49719..4e4c105f9a50 100644
--- a/drivers/gpu/drm/mgag200/mgag200_drv.c
+++ b/drivers/gpu/drm/mgag200/mgag200_drv.c
@@ -47,10 +47,11 @@ static const struct drm_driver mgag200_driver = {
 static bool mgag200_has_sgram(struct mga_device *mdev)
 {
struct drm_device *dev = &mdev->base;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
u32 option;
int ret;
 
-   ret = pci_read_config_dword(dev->pdev, PCI_MGA_OPTION, &option);
+   ret = pci_read_config_dword(pdev, PCI_MGA_OPTION, &option);
if (drm_WARN(dev, ret, "failed to read PCI config dword: %d\n", ret))
return false;
 
@@ -60,6 +61,7 @@ static bool mgag200_has_sgram(struct mga_device *mdev)
 static int mgag200_regs_init(struct mga_device *mdev)
 {
struct drm_device *dev = &mdev->base;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
u32 option, option2;
u8 crtcext3;
 
@@ -99,13 +101,13 @@ static int mgag200_regs_init(struct mga_device *mdev)
}
 
if (option)
-   pci_write_config_dword(dev->pdev, PCI_MGA_OPTION, option);
+   pci_write_config_dword(pdev, PCI_MGA_OPTION, option);
if (option2)
-   pci_write_config_dword(dev->pdev, PCI_MGA_OPTION2, option2);
+   pci_write_config_dword(pdev, PCI_MGA_OPTION2, option2);
 
/* BAR 1 contains registers */
-   mdev->rmmio_base = pci_resource_start(dev->pdev, 1);
-   mdev->rmmio_size = pci_resource_len(dev->pdev, 1);
+   mdev->rmmio_base = pci_resource_start(pdev, 1);
+   mdev->rmmio_size = pci_resource_len(pdev, 1);
 
if (!devm_request_mem_region(dev->dev, mdev->rmmio_base,
 mdev->rmmio_size, "mgadrmfb_mmio")) {
@@ -113,7 +115,7 @@ static int mgag200_regs_init(struct mga_device *mdev)
return -ENOMEM;
}
 
-   mdev->rmmio = pcim_iomap(dev->pdev, 1, 0);
+   mdev->rmmio = pcim_iomap(pdev, 1, 0);
if (mdev->rmmio == NULL)
return -ENOMEM;
 
@@ -218,6 +220,7 @@ static void mgag200_g200_interpret_bios(struct mga_device 
*mdev,
 static void mgag200_g200_init_refclk(struct mga_device *mdev)
 {
struct drm_device *dev = &mdev->base;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
unsigned char __iomem *rom;
unsigned char *bios;
size_t size;
@@ -226,7 +229,7 @@ static void mgag200_g200_init_refclk(struct mga_device 
*mdev)
mdev->model.g200.pclk_max = 23;
mdev->model.g200.ref_clk = 27050;
 
-   rom = pci_map_rom(dev->pdev, );
+   rom = pci_map_rom(pdev, );
if (!rom)
return;
 
@@ -244,7 +247,7 @@ static void mgag200_g200_init_refclk(struct mga_device 
*mdev)
 
vfree(bios);
 out:
-   pci_unmap_rom(dev->pdev, rom);
+   pci_unmap_rom(pdev, rom);
 }
 
 static void mgag200_g200se_init_unique_id(struct mga_device *mdev)
@@ -301,7 +304,6 @@ mgag200_device_create(struct pci_dev *pdev, unsigned long 
flags)
return mdev;
dev = &mdev->base;
 
-   dev->pdev = pdev;
pci_set_drvdata(pdev, dev);
 
ret = mgag200_device_init(mdev, flags);
diff --git a/drivers/gpu/drm/mgag200/mgag200_i2c.c 
b/drivers/gpu/drm/mgag200/mgag200_i2c.c
index 09731e614e46..ac8e34eef513 100644
--- a/drivers/gpu/drm/mgag200/mgag200_i2c.c
+++ b/drivers/gpu/drm/mgag200/mgag200_i2c.c
@@ -126,7 +126,7 @@ struct mga_i2c_chan *mgag200_i2c_create(struct drm_device 
*dev)
i2c->clock = clock;
i2c->adapter.owner = THIS_MODULE;
i2c->adapter.class = I2C_CLASS_DDC;
-   i2c->adapter.dev.parent = &dev->pdev->dev;
+   i2c->adapter.dev.parent = dev->dev;
i2c->dev = dev;
i2c_set_adapdata(&i2c->adapter, i2c);
snprintf(i2c->adapter.name, sizeof(i2c->adapter.name), "mga i2c");
diff --git a/drivers/gpu/drm/mgag200/mgag200_mm.c 
b/drivers/gpu/drm/mgag200/mgag200_mm.c
index 641f1aa992be..b667371b69a4 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mm.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mm.c
@@ -78,11 +78,12 @@ static size_t mgag200_probe_vram(struct mga_device *mdev, 
void __iomem *mem,
 static void mgag200_mm_release(struct drm_device *dev, void *ptr)
 {
struct mga_device *mdev = to_mga_device(dev);
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
 
mdev->vram_fb_available = 0;
iounmap(mdev->vram);
-   arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0),
-   pci_resource_len(dev->pdev, 0));
+   

[PATCH 06/15] drm/hibmc: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert hibmc to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Xinliang Liu 
Cc: Tian Tao  
Cc: John Stultz 
Cc: Xinwei Kong 
Cc: Chen Feng 
---
 drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c | 10 +-
 drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c |  2 +-
 drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c |  4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c 
b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
index d845657fd99c..ac5868343d0c 100644
--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
+++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
@@ -203,7 +203,7 @@ static void hibmc_hw_config(struct hibmc_drm_private *priv)
 static int hibmc_hw_map(struct hibmc_drm_private *priv)
 {
struct drm_device *dev = priv->dev;
-   struct pci_dev *pdev = dev->pdev;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
resource_size_t addr, size, ioaddr, iosize;
 
ioaddr = pci_resource_start(pdev, 1);
@@ -249,7 +249,7 @@ static int hibmc_unload(struct drm_device *dev)
if (dev->irq_enabled)
drm_irq_uninstall(dev);
 
-   pci_disable_msi(dev->pdev);
+   pci_disable_msi(to_pci_dev(dev->dev));
hibmc_kms_fini(priv);
hibmc_mm_fini(priv);
dev->dev_private = NULL;
@@ -258,6 +258,7 @@ static int hibmc_unload(struct drm_device *dev)
 
 static int hibmc_load(struct drm_device *dev)
 {
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
struct hibmc_drm_private *priv;
int ret;
 
@@ -287,11 +288,11 @@ static int hibmc_load(struct drm_device *dev)
goto err;
}
 
-   ret = pci_enable_msi(dev->pdev);
+   ret = pci_enable_msi(pdev);
if (ret) {
drm_warn(dev, "enabling MSI failed: %d\n", ret);
} else {
-   ret = drm_irq_install(dev, dev->pdev->irq);
+   ret = drm_irq_install(dev, pdev->irq);
if (ret)
drm_warn(dev, "install irq failed: %d\n", ret);
}
@@ -324,7 +325,6 @@ static int hibmc_pci_probe(struct pci_dev *pdev,
return PTR_ERR(dev);
}
 
-   dev->pdev = pdev;
pci_set_drvdata(pdev, dev);
 
ret = pci_enable_device(pdev);
diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c 
b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
index 86d712090d87..410bd019bb35 100644
--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
+++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
@@ -83,7 +83,7 @@ int hibmc_ddc_create(struct drm_device *drm_dev,
connector->adapter.owner = THIS_MODULE;
connector->adapter.class = I2C_CLASS_DDC;
snprintf(connector->adapter.name, I2C_NAME_SIZE, "HIS i2c bit bus");
-   connector->adapter.dev.parent = &drm_dev->pdev->dev;
+   connector->adapter.dev.parent = drm_dev->dev;
i2c_set_adapdata(&connector->adapter, connector);
connector->adapter.algo_data = &connector->bit_data;
 
diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c 
b/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
index 602ece11bb4a..77f075075db2 100644
--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
+++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
@@ -26,9 +26,9 @@ int hibmc_mm_init(struct hibmc_drm_private *hibmc)
struct drm_vram_mm *vmm;
int ret;
struct drm_device *dev = hibmc->dev;
+   struct pci_dev *pdev = to_pci_dev(dev->dev);
 
-   vmm = drm_vram_helper_alloc_mm(dev,
-  pci_resource_start(dev->pdev, 0),
+   vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(pdev, 0),
   hibmc->fb_size);
if (IS_ERR(vmm)) {
ret = PTR_ERR(vmm);
-- 
2.29.2



[PATCH 01/15] drm/amdgpu: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert amdgpu to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Alex Deucher 
Cc: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  | 23 ++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |  1 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 10 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 10 -
 7 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 7560b05e4ac1..d61715133825 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1404,9 +1404,9 @@ static void amdgpu_switcheroo_set_state(struct pci_dev 
*pdev,
/* don't suspend or resume card normally */
dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
 
-   pci_set_power_state(dev->pdev, PCI_D0);
-   amdgpu_device_load_pci_state(dev->pdev);
-   r = pci_enable_device(dev->pdev);
+   pci_set_power_state(pdev, PCI_D0);
+   amdgpu_device_load_pci_state(pdev);
+   r = pci_enable_device(pdev);
if (r)
DRM_WARN("pci_enable_device failed (%d)\n", r);
amdgpu_device_resume(dev, true);
@@ -1418,10 +1418,10 @@ static void amdgpu_switcheroo_set_state(struct pci_dev 
*pdev,
drm_kms_helper_poll_disable(dev);
dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
amdgpu_device_suspend(dev, true);
-   amdgpu_device_cache_pci_state(dev->pdev);
+   amdgpu_device_cache_pci_state(pdev);
/* Shut down the device */
-   pci_disable_device(dev->pdev);
-   pci_set_power_state(dev->pdev, PCI_D3cold);
+   pci_disable_device(pdev);
+   pci_set_power_state(pdev, PCI_D3cold);
dev->switch_power_state = DRM_SWITCH_POWER_OFF;
}
 }
@@ -1684,8 +1684,7 @@ static void amdgpu_device_enable_virtual_display(struct 
amdgpu_device *adev)
adev->enable_virtual_display = false;
 
if (amdgpu_virtual_display) {
-   struct drm_device *ddev = adev_to_drm(adev);
-   const char *pci_address_name = pci_name(ddev->pdev);
+   const char *pci_address_name = pci_name(adev->pdev);
char *pciaddstr, *pciaddstr_tmp, *pciaddname_tmp, *pciaddname;
 
pciaddstr = kstrdup(amdgpu_virtual_display, GFP_KERNEL);
@@ -3375,7 +3374,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
}
}
 
-   pci_enable_pcie_error_reporting(adev->ddev.pdev);
+   pci_enable_pcie_error_reporting(adev->pdev);
 
/* Post card if necessary */
if (amdgpu_device_need_post(adev)) {
@@ -4922,8 +4921,8 @@ pci_ers_result_t amdgpu_pci_error_detected(struct pci_dev 
*pdev, pci_channel_sta
case pci_channel_io_normal:
return PCI_ERS_RESULT_CAN_RECOVER;
/* Fatal error, prepare for slot reset */
-   case pci_channel_io_frozen: 
-   /*  
+   case pci_channel_io_frozen:
+   /*
 * Cancel and wait for all TDRs in progress if failing to
 * set  adev->in_gpu_reset in amdgpu_device_lock_adev
 *
@@ -5014,7 +5013,7 @@ pci_ers_result_t amdgpu_pci_slot_reset(struct pci_dev 
*pdev)
goto out;
}
 
-   adev->in_pci_err_recovery = true;   
+   adev->in_pci_err_recovery = true;
r = amdgpu_device_pre_asic_reset(adev, NULL, &need_full_reset);
adev->in_pci_err_recovery = false;
if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index 2e8a8b57639f..77974c3981fa 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -721,13 +721,14 @@ amdgpu_display_user_framebuffer_create(struct drm_device 
*dev,
   struct drm_file *file_priv,
   const struct drm_mode_fb_cmd2 *mode_cmd)
 {
+   struct amdgpu_device *adev = drm_to_adev(dev);
struct drm_gem_object *obj;
struct amdgpu_framebuffer *amdgpu_fb;
int ret;
 
obj = drm_gem_object_lookup(file_priv, mode_cmd->handles[0]);
if (obj ==  NULL) {
-   dev_err(&dev->pdev->dev, "No GEM object associated to handle 0x%08X, "
+   dev_err(&adev->pdev->dev, "No GEM object associated to handle 0x%08X, "
"can't create framebuffer\n", mode_cmd->handles[0]);
return ERR_PTR(-ENOENT);
}
diff 

[PATCH 00/15] drm: Move struct drm_device.pdev to legacy

2020-11-24 Thread Thomas Zimmermann
The pdev field in struct drm_device points to a PCI device structure and
goes back to UMS-only days when all DRM drivers were for PCI devices.
Meanwhile we also support USB, SPI and platform devices. Each of those
uses the generic device stored in struct drm_device.dev.

To reduce duplications and remove the special case of PCI, this patchset
converts all modesetting drivers from pdev to dev and makes pdev a field
for legacy UMS drivers.

For PCI devices, the pointer in struct drm_device.dev can be upcast to
struct pci_dev, or tested for PCI with dev_is_pci(). In several places
the code can use the dev field directly.

After converting all drivers and the DRM core, the pdev field becomes
relevant only for legacy drivers. In a later patchset, we may want to
convert these as well and remove pdev entirely.

The patchset touches many files, but the individual changes are mostly
trivial. I suggest merging each driver's patch through the respective
tree and later the rest through drm-misc-next.

Thomas Zimmermann (15):
  drm/amdgpu: Remove references to struct drm_device.pdev
  drm/ast: Remove references to struct drm_device.pdev
  drm/bochs: Remove references to struct drm_device.pdev
  drm/cirrus: Remove references to struct drm_device.pdev
  drm/gma500: Remove references to struct drm_device.pdev
  drm/hibmc: Remove references to struct drm_device.pdev
  drm/i915: Remove references to struct drm_device.pdev
  drm/mgag200: Remove references to struct drm_device.pdev
  drm/nouveau: Remove references to struct drm_device.pdev
  drm/qxl: Remove references to struct drm_device.pdev
  drm/radeon: Remove references to struct drm_device.pdev
  drm/vboxvideo: Remove references to struct drm_device.pdev
  drm/virtgpu: Remove references to struct drm_device.pdev
  drm/vmwgfx: Remove references to struct drm_device.pdev
  drm: Upcast struct drm_device.dev to struct pci_device; replace pdev

 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c| 23 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  1 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   | 10 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   | 10 +--
 drivers/gpu/drm/ast/ast_drv.c |  4 +-
 drivers/gpu/drm/ast/ast_main.c| 25 +++---
 drivers/gpu/drm/ast/ast_mm.c  | 17 ++--
 drivers/gpu/drm/ast/ast_mode.c|  5 +-
 drivers/gpu/drm/ast/ast_post.c|  8 +-
 drivers/gpu/drm/bochs/bochs_drv.c |  1 -
 drivers/gpu/drm/bochs/bochs_hw.c  |  4 +-
 drivers/gpu/drm/drm_agpsupport.c  |  9 +-
 drivers/gpu/drm/drm_bufs.c|  4 +-
 drivers/gpu/drm/drm_edid.c|  7 +-
 drivers/gpu/drm/drm_irq.c | 12 +--
 drivers/gpu/drm/drm_pci.c | 26 +++---
 drivers/gpu/drm/drm_vm.c  |  2 +-
 drivers/gpu/drm/gma500/cdv_device.c   | 30 ---
 drivers/gpu/drm/gma500/cdv_intel_crt.c|  3 +-
 drivers/gpu/drm/gma500/cdv_intel_lvds.c   |  4 +-
 drivers/gpu/drm/gma500/framebuffer.c  |  9 +-
 drivers/gpu/drm/gma500/gma_device.c   |  3 +-
 drivers/gpu/drm/gma500/gma_display.c  |  4 +-
 drivers/gpu/drm/gma500/gtt.c  | 20 +++--
 drivers/gpu/drm/gma500/intel_bios.c   |  6 +-
 drivers/gpu/drm/gma500/intel_gmbus.c  |  4 +-
 drivers/gpu/drm/gma500/intel_i2c.c|  2 +-
 drivers/gpu/drm/gma500/mdfld_device.c |  4 +-
 drivers/gpu/drm/gma500/mdfld_dsi_dpi.c|  8 +-
 drivers/gpu/drm/gma500/mid_bios.c |  9 +-
 drivers/gpu/drm/gma500/oaktrail_device.c  |  5 +-
 drivers/gpu/drm/gma500/oaktrail_lvds.c|  2 +-
 drivers/gpu/drm/gma500/oaktrail_lvds_i2c.c|  2 +-
 drivers/gpu/drm/gma500/opregion.c |  3 +-
 drivers/gpu/drm/gma500/power.c| 13 +--
 drivers/gpu/drm/gma500/psb_drv.c  | 16 ++--
 drivers/gpu/drm/gma500/psb_drv.h  |  8 +-
 drivers/gpu/drm/gma500/psb_intel_lvds.c   |  6 +-
 drivers/gpu/drm/gma500/psb_intel_sdvo.c   |  2 +-
 drivers/gpu/drm/gma500/tc35876x-dsi-lvds.c| 36 
 .../gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c   | 10 +--
 .../gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c   |  2 +-
 drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c   |  4 +-
 drivers/gpu/drm/i915/display/intel_bios.c |  2 +-
 drivers/gpu/drm/i915/display/intel_cdclk.c| 14 +--
 drivers/gpu/drm/i915/display/intel_csr.c  |  2 +-
 drivers/gpu/drm/i915/display/intel_dsi_vbt.c  |  2 +-
 drivers/gpu/drm/i915/display/intel_fbdev.c|  2 +-
 drivers/gpu/drm/i915/display/intel_gmbus.c|  2 +-
 .../gpu/drm/i915/display/intel_lpe_audio.c|  5 +-
 drivers/gpu/drm/i915/display/intel_opregion.c |  6 +-
 drivers/gpu/drm/i915/display/intel_overlay.c  |  2 +-
 

[PATCH 04/15] drm/cirrus: Remove references to struct drm_device.pdev

2020-11-24 Thread Thomas Zimmermann
Using struct drm_device.pdev is deprecated. Convert cirrus to struct
drm_device.dev. No functional changes.

Signed-off-by: Thomas Zimmermann 
Cc: Gerd Hoffmann 
---
 drivers/gpu/drm/tiny/cirrus.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 561c49d8657a..a043e602199e 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -602,7 +602,6 @@ static int cirrus_pci_probe(struct pci_dev *pdev,
 
drm_mode_config_reset(dev);
 
-   dev->pdev = pdev;
pci_set_drvdata(pdev, dev);
ret = drm_dev_register(dev, 0);
if (ret)
-- 
2.29.2



Re: [PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed

2020-11-24 Thread Christian König

Am 24.11.20 um 10:16 schrieb Thomas Zimmermann:

Hi Christian

Am 16.11.20 um 12:28 schrieb Christian König:

Am 13.11.20 um 08:59 schrieb Thomas Zimmermann:

Hi Christian

Am 12.11.20 um 18:16 schrieb Christian König:

Am 12.11.20 um 14:21 schrieb Thomas Zimmermann:

In order to avoid eviction of vmap'ed buffers, pin them in their GEM
object's vmap implementation. Unpin them in the vunmap 
implementation.

This is needed to make generic fbdev support work reliably. Without it,
the buffer object could be evicted while fbdev flushes its shadow
buffer.


Unlike the PRIME pin/unpin functions, the vmap code does not
modify the BO's prime_shared_count, so a vmap-pinned BO does not
count as shared.

The actual pin location is not important as the vmap call returns
information on how to access the buffer. Callers that require a
specific location should explicitly pin the BO before vmapping it.

Well is the buffer supposed to be scanned out?

No, not by the fbdev helper.


Ok in this case that should work.


If yes then the pin location is actually rather important since the
hardware can only scan out from VRAM.

For relocatable BOs, fbdev uses a shadow buffer that makes any
relocation transparent to userspace. It flushes the shadow fb into the
BO's memory if there are updates. The code is in
drm_fb_helper_dirty_work(). [1] During the flush operation, the vmap
call now pins the BO to wherever it is. The actual location does not
matter. It's vunmap'ed immediately afterwards.


The problem is what happens when it is prepared for scanout, but 
can't be moved to VRAM because it is vmapped?


When the shadow is never scanned out that isn't a problem, but we 
need to keep that in mind.




I'd like to ask for your suggestions before sending an update for this
patch.


After the discussion about locking in fbdev, [1] I intended to replace 
the pin call with code that acquires the reservation lock.


Yeah, that sounds like a good idea to me as well.

First I wanted to put this into drm_gem_ttm_vmap/vunmap(), but then 
wondered why ttm_bo_vmap() does not acquire the lock internally? I'd 
expect that vmap/vunmap are close together and do not overlap for the 
same BO. 


We have use cases like the following during command submission:

1. lock
2. map
3. copy parts of the BO content somewhere else or patch it with 
additional information

4. unmap
5. submit BO to the hardware
6. add hardware fence to the BO to make sure it doesn't move
7. unlock

That use case won't be possible with vmap/vunmap if we move the
lock/unlock into them, and I hope to replace the kmap/kunmap functions
with them in the near term.


Otherwise, acquiring the reservation lock would require another 
ref-counting variable or per-driver code.


Hui, why that? Just put this into drm_gem_ttm_vmap/vunmap() helper as 
you initially planned.


Regards,
Christian.



Best regards
Thomas

[1] https://patchwork.freedesktop.org/patch/401088/?series=83918=1


Regards,
Christian.



For dma-buf sharing, the regular procedure of pin + vmap still applies.
This should always move the BO into GTT-managed memory.

Best regards
Thomas

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/drm_fb_helper.c#n432




Regards,
Christian.


Signed-off-by: Thomas Zimmermann 
---
   drivers/gpu/drm/radeon/radeon_gem.c | 51 
+++--

   1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_gem.c
b/drivers/gpu/drm/radeon/radeon_gem.c
index d2876ce3bc9e..eaf7fc9a7b07 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -226,6 +226,53 @@ static int radeon_gem_handle_lockup(struct
radeon_device *rdev, int r)
   return r;
   }
   +static int radeon_gem_object_vmap(struct drm_gem_object *obj,
struct dma_buf_map *map)
+{
+    static const uint32_t any_domain = RADEON_GEM_DOMAIN_VRAM |
+   RADEON_GEM_DOMAIN_GTT |
+   RADEON_GEM_DOMAIN_CPU;
+
+    struct radeon_bo *bo = gem_to_radeon_bo(obj);
+    int ret;
+
+    ret = radeon_bo_reserve(bo, false);
+    if (ret)
+    return ret;
+
+    /* pin buffer at its current location */
+    ret = radeon_bo_pin(bo, any_domain, NULL);
+    if (ret)
+    goto err_radeon_bo_unreserve;
+
+    ret = drm_gem_ttm_vmap(obj, map);
+    if (ret)
+    goto err_radeon_bo_unpin;
+
+    radeon_bo_unreserve(bo);
+
+    return 0;
+
+err_radeon_bo_unpin:
+    radeon_bo_unpin(bo);
+err_radeon_bo_unreserve:
+    radeon_bo_unreserve(bo);
+    return ret;
+}
+
+static void 

[PATCH V2 1/1] drm/amdgpu: only skip smc sdma sos ta and asd fw in SRIOV for navi12

2020-11-24 Thread Stanley . Yang
The KFDTopologyTest.BasicTest will fail if the smc, sdma, sos, ta
and asd firmware is skipped in SRIOV for vega10, so adjust the above
firmware handling and skip loading it in SRIOV only for navi12.

v2: remove unnecessary asic type check.

Signed-off-by: Stanley.Yang 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c  |  3 ---
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c  |  3 ---
 .../gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c | 13 ++---
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c   |  2 +-
 5 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 16b551f330a4..8309dd95aa48 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -593,9 +593,6 @@ static int sdma_v4_0_init_microcode(struct amdgpu_device 
*adev)
struct amdgpu_firmware_info *info = NULL;
const struct common_firmware_header *header = NULL;
 
-   if (amdgpu_sriov_vf(adev))
-   return 0;
-
DRM_DEBUG("\n");
 
switch (adev->asic_type) {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 9c72b95b7463..fad1cc394219 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -203,7 +203,7 @@ static int sdma_v5_0_init_microcode(struct amdgpu_device 
*adev)
const struct common_firmware_header *header = NULL;
const struct sdma_firmware_header_v1_0 *hdr;
 
-   if (amdgpu_sriov_vf(adev))
+   if (amdgpu_sriov_vf(adev) && (adev->asic_type == CHIP_NAVI12))
return 0;
 
DRM_DEBUG("\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index cb5a6f1437f8..5ea11a0f568f 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -153,9 +153,6 @@ static int sdma_v5_2_init_microcode(struct amdgpu_device 
*adev)
struct amdgpu_firmware_info *info = NULL;
const struct common_firmware_header *header = NULL;
 
-   if (amdgpu_sriov_vf(adev))
-   return 0;
-
DRM_DEBUG("\n");
 
switch (adev->asic_type) {
diff --git a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c 
b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
index daf122f24f23..e2192d8762a4 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/smumgr/vega10_smumgr.c
@@ -208,14 +208,13 @@ static int vega10_smu_init(struct pp_hwmgr *hwmgr)
unsigned long tools_size;
int ret;
struct cgs_firmware_info info = {0};
+   struct amdgpu_device *adev = hwmgr->adev;
 
-   if (!amdgpu_sriov_vf((struct amdgpu_device *)hwmgr->adev)) {
-   ret = cgs_get_firmware_info(hwmgr->device,
-   CGS_UCODE_ID_SMU,
-   &info);
-   if (ret || !info.kptr)
-   return -EINVAL;
-   }
+   ret = cgs_get_firmware_info(hwmgr->device,
+   CGS_UCODE_ID_SMU,
+   &info);
+   if (ret || !info.kptr)
+   return -EINVAL;
 
priv = kzalloc(sizeof(struct vega10_smumgr), GFP_KERNEL);
 
diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c 
b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
index 1904df5a3e20..80c0bfaed097 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
@@ -847,7 +847,7 @@ static int smu_sw_init(void *handle)
smu->smu_dpm.dpm_level = AMD_DPM_FORCED_LEVEL_AUTO;
smu->smu_dpm.requested_dpm_level = AMD_DPM_FORCED_LEVEL_AUTO;
 
-   if (!amdgpu_sriov_vf(adev)) {
+   if (!amdgpu_sriov_vf(adev) || (adev->asic_type != CHIP_NAVI12)) {
ret = smu_init_microcode(smu);
if (ret) {
dev_err(adev->dev, "Failed to load smu firmware!\n");
-- 
2.17.1



Re: [PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed

2020-11-24 Thread Thomas Zimmermann

Hi Christian

Am 16.11.20 um 12:28 schrieb Christian König:

Am 13.11.20 um 08:59 schrieb Thomas Zimmermann:

Hi Christian

Am 12.11.20 um 18:16 schrieb Christian König:

Am 12.11.20 um 14:21 schrieb Thomas Zimmermann:

In order to avoid eviction of vmap'ed buffers, pin them in their GEM
object's vmap implementation. Unpin them in the vunmap implementation.
This is needed to make generic fbdev support work reliably. Without it,
the buffer object could be evicted while fbdev flushes its shadow
buffer.


Unlike the PRIME pin/unpin functions, the vmap code does not
modify the BO's prime_shared_count, so a vmap-pinned BO does not
count as shared.

The actual pin location is not important as the vmap call returns
information on how to access the buffer. Callers that require a
specific location should explicitly pin the BO before vmapping it.

Well is the buffer supposed to be scanned out?

No, not by the fbdev helper.


Ok in this case that should work.


If yes then the pin location is actually rather important since the
hardware can only scan out from VRAM.

For relocatable BOs, fbdev uses a shadow buffer that makes any
relocation transparent to userspace. It flushes the shadow fb into the
BO's memory if there are updates. The code is in
drm_fb_helper_dirty_work(). [1] During the flush operation, the vmap
call now pins the BO to wherever it is. The actual location does not
matter. It's vunmap'ed immediately afterwards.


The problem is what happens when it is prepared for scanout, but can't 
be moved to VRAM because it is vmapped?


When the shadow is never scanned out that isn't a problem, but we need 
to keep that in mind.




I'd like to ask for your suggestions before sending an update for this patch.

After the discussion about locking in fbdev, [1] I intended to replace 
the pin call with code that acquires the reservation lock.


First I wanted to put this into drm_gem_ttm_vmap/vunmap(), but then 
wondered why ttm_bo_vmap() does not acquire the lock internally? I'd 
expect that vmap/vunmap are close together and do not overlap for the 
same BO. Otherwise, acquiring the reservation lock would require another 
ref-counting variable or per-driver code.


Best regards
Thomas

[1] https://patchwork.freedesktop.org/patch/401088/?series=83918=1


Regards,
Christian.



For dma-buf sharing, the regular procedure of pin + vmap still applies.
This should always move the BO into GTT-managed memory.

Best regards
Thomas

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/drm_fb_helper.c#n432




Regards,
Christian.


Signed-off-by: Thomas Zimmermann 
---
   drivers/gpu/drm/radeon/radeon_gem.c | 51 
+++--

   1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_gem.c
b/drivers/gpu/drm/radeon/radeon_gem.c
index d2876ce3bc9e..eaf7fc9a7b07 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -226,6 +226,53 @@ static int radeon_gem_handle_lockup(struct
radeon_device *rdev, int r)
   return r;
   }
   +static int radeon_gem_object_vmap(struct drm_gem_object *obj,
struct dma_buf_map *map)
+{
+    static const uint32_t any_domain = RADEON_GEM_DOMAIN_VRAM |
+   RADEON_GEM_DOMAIN_GTT |
+   RADEON_GEM_DOMAIN_CPU;
+
+    struct radeon_bo *bo = gem_to_radeon_bo(obj);
+    int ret;
+
+    ret = radeon_bo_reserve(bo, false);
+    if (ret)
+    return ret;
+
+    /* pin buffer at its current location */
+    ret = radeon_bo_pin(bo, any_domain, NULL);
+    if (ret)
+    goto err_radeon_bo_unreserve;
+
+    ret = drm_gem_ttm_vmap(obj, map);
+    if (ret)
+    goto err_radeon_bo_unpin;
+
+    radeon_bo_unreserve(bo);
+
+    return 0;
+
+err_radeon_bo_unpin:
+    radeon_bo_unpin(bo);
+err_radeon_bo_unreserve:
+    radeon_bo_unreserve(bo);
+    return ret;
+}
+
+static void radeon_gem_object_vunmap(struct drm_gem_object *obj,
struct dma_buf_map *map)
+{
+    struct radeon_bo *bo = gem_to_radeon_bo(obj);
+    int ret;
+
+    ret = radeon_bo_reserve(bo, false);
+    if (ret)
+    return;
+
+    drm_gem_ttm_vunmap(obj, map);
+    radeon_bo_unpin(bo);
+    radeon_bo_unreserve(bo);
+}
+
   static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
   .free = radeon_gem_object_free,
   .open = radeon_gem_object_open,
@@ -234,8 +281,8 @@ static const struct drm_gem_object_funcs
radeon_gem_object_funcs = {
   .pin = radeon_gem_prime_pin,
   .unpin = radeon_gem_prime_unpin,
   .get_sg_table = 

Re: [PATCH 000/141] Fix fall-through warnings for Clang

2020-11-24 Thread Joe Perches
On Tue, 2020-11-24 at 11:58 +1100, Finn Thain wrote:
> it's not for me to prove that such patches don't affect code 
> generation. That's for the patch author and (unfortunately) for reviewers.

Ideally, that proof would be provided by the compilation system itself
and not patch authors nor reviewers nor maintainers.

Unfortunately gcc does not guarantee repeatability or deterministic output.
To my knowledge, neither does clang.




Re: [PATCH 000/141] Fix fall-through warnings for Clang

2020-11-24 Thread Finn Thain


On Mon, 23 Nov 2020, Miguel Ojeda wrote:

> On Mon, 23 Nov 2020, Finn Thain wrote:
> 
> > On Sun, 22 Nov 2020, Miguel Ojeda wrote:
> > 
> > > 
> > > It isn't that much effort, is it? Plus we need to take into 
> > > account the future mistakes that it might prevent, too.
> > 
> > We should also take into account optimism about future improvements 
> > in tooling.
> > 
> Not sure what you mean here. There is no reliable way to guess what the 
> intention was with a missing fallthrough, even if you parsed whitespace 
> and indentation.
> 

What I meant was that you've used pessimism as if it was fact.

For example, "There is no way to guess what the effect would be if the 
compiler trained programmers to add a knee-jerk 'break' statement to avoid 
a warning".

Moreover, what I meant was that preventing programmer mistakes is a 
problem to be solved by development tools. The idea that retro-fitting new 
language constructs onto mature code is somehow necessary to "prevent 
future mistakes" is entirely questionable.

> > > So even if there were zero problems found so far, it is still a 
> > > positive change.
> > > 
> > 
> > It is if you want to spin it that way.
> > 
> How is that a "spin"? It is a fact that we won't get *implicit* 
> fallthrough mistakes anymore (in particular if we make it a hard error).
> 

Perhaps "handwaving" is a better term?

> > > I would agree if these changes were high risk, though; but they are 
> > > almost trivial.
> > > 
> > 
> > This is trivial:
> > 
> >  case 1:
> > this();
> > +   fallthrough;
> >  case 2:
> > that();
> > 
> > But what we inevitably get is changes like this:
> > 
> >  case 3:
> > this();
> > +   break;
> >  case 4:
> > hmmm();
> > 
> > Why? Mainly to silence the compiler. Also because the patch author 
> > argued successfully that they had found a theoretical bug, often in 
> > mature code.
> > 
> If someone changes control flow, that is on them. Every kernel developer 
> knows what `break` does.
> 

Sure. And if you put -Wimplicit-fallthrough into the Makefile and if that 
leads to well-intentioned patches that cause regressions, it is partly on 
you.

Have you ever considered the overall cost of the countless 
-Wpresume-incompetence flags?

Perhaps you pay the power bill for a build farm that produces logs that 
no-one reads? Perhaps you've run git bisect, knowing that the compiler 
messages are not interesting? Or compiled software written in a language 
that generates impenetrable messages? If so, here's a tip:

# grep CFLAGS /etc/portage/make.conf 
CFLAGS="... -Wno-all -Wno-extra ..."
CXXFLAGS="${CFLAGS}"

Now allow me some pessimism: the hardware upgrades, gigawatt hours and 
wait time attributable to obligatory static analyses are a net loss.

> > But is anyone keeping score of the regressions? If unreported bugs 
> > count, what about unreported regressions?
> > 
> Introducing `fallthrough` does not change semantics. If you are really 
> keen, you can always compare the objects because the generated code 
> shouldn't change.
> 

No, it's not for me to prove that such patches don't affect code 
generation. That's for the patch author and (unfortunately) for reviewers.

> Cheers,
> Miguel
> 


Re: [PATCH 000/141] Fix fall-through warnings for Clang

2020-11-24 Thread Finn Thain


On Mon, 23 Nov 2020, Joe Perches wrote:

> On Tue, 2020-11-24 at 11:58 +1100, Finn Thain wrote:
> > it's not for me to prove that such patches don't affect code 
> > generation. That's for the patch author and (unfortunately) for 
> > reviewers.
> 
> Ideally, that proof would be provided by the compilation system itself 
> and not patch authors nor reviewers nor maintainers.
> 
> Unfortunately gcc does not guarantee repeatability or deterministic 
> output. To my knowledge, neither does clang.
> 

Yes, I've said the same thing myself. But having attempted it, I now think 
this is a hard problem. YMMV.

https://lore.kernel.org/linux-scsi/alpine.LNX.2.22.394.2004281017310.12@nippy.intranet/
https://lore.kernel.org/linux-scsi/alpine.LNX.2.22.394.2005211358460.8@nippy.intranet/


Patch for omitted register.

2020-11-24 Thread Wolf
Small register overlooked.

0001-Added-omitted-SMUIO-v11.0.0-register-ROM_SW_SECURE.patch
Description: Binary data

