Re: [PATCH 4/8] drm/amd/powerplay: add interface for getting workload type

2019-09-25 Thread Wang, Kevin(Yang)
For this patch: I think the SMU driver should unify the driver code format;
that makes the SMU driver code much easier to optimize and maintain in the future.

You can reference navi10_ppt.c and arcturus_ppt.c,
function: arcturus_get_workload_type

Best Regards,
Kevin

From: amd-gfx  on behalf of Liang, Prike 

Sent: Thursday, September 26, 2019 11:50 AM
To: amd-gfx@lists.freedesktop.org 
Cc: Liang, Prike ; Quan, Evan ; Huang, 
Ray ; keneth.f...@amd.com 
Subject: [PATCH 4/8] drm/amd/powerplay: add interface for getting workload type

The workload type is derived from the power profile mode input.

Signed-off-by: Prike Liang 
Reviewed-by: Kevin Wang 
Reviewed-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 29 +
 1 file changed, 29 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c 
b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
index 8ec3663..dc945b8 100644
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
@@ -365,6 +365,34 @@ static int renoir_unforce_dpm_levels(struct smu_context 
*smu) {
 return ret;
 }

+static int renoir_get_workload_type(struct smu_context *smu, uint32_t profile)
+{
+
+   uint32_t  pplib_workload = 0;
+
+   switch (profile) {
+   case PP_SMC_POWER_PROFILE_FULLSCREEN3D:
+   pplib_workload = WORKLOAD_PPLIB_FULL_SCREEN_3D_BIT;
+   break;
+   case PP_SMC_POWER_PROFILE_CUSTOM:
+   pplib_workload = WORKLOAD_PPLIB_COUNT;
+   break;
+   case PP_SMC_POWER_PROFILE_VIDEO:
+   pplib_workload = WORKLOAD_PPLIB_VIDEO_BIT;
+   break;
+   case PP_SMC_POWER_PROFILE_VR:
+   pplib_workload = WORKLOAD_PPLIB_VR_BIT;
+   break;
+   case PP_SMC_POWER_PROFILE_COMPUTE:
+   pplib_workload = WORKLOAD_PPLIB_COMPUTE_BIT;
+   break;
+   default:
+   return -EINVAL;
+   }
+
+   return pplib_workload;
+}
+
 static const struct pptable_funcs renoir_ppt_funcs = {
 .get_smu_msg_index = renoir_get_smu_msg_index,
 .get_smu_table_index = renoir_get_smu_table_index,
@@ -376,6 +404,7 @@ static const struct pptable_funcs renoir_ppt_funcs = {
 .dpm_set_uvd_enable = renoir_dpm_set_uvd_enable,
 .force_dpm_limit_value = renoir_force_dpm_limit_value,
 .unforce_dpm_levels = renoir_unforce_dpm_levels,
+   .get_workload_type = renoir_get_workload_type,
 };

 void renoir_set_ppt_funcs(struct smu_context *smu)
--
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH 7/8] drm/amd/powerplay: implement the interface for setting sclk/uclk profile_peak level

2019-09-25 Thread Wang, Kevin(Yang)
This patch is not needed for the APU;
by default, the SMU will use the max value as the peak value.

Refs: smu_default_set_performance_level

Best Regards,
Kevin


From: amd-gfx  on behalf of Liang, Prike 

Sent: Thursday, September 26, 2019 11:50 AM
To: amd-gfx@lists.freedesktop.org 
Cc: Liang, Prike ; Quan, Evan ; Huang, 
Ray ; keneth.f...@amd.com 
Subject: [PATCH 7/8] drm/amd/powerplay: implement the interface for setting 
sclk/uclk profile_peak level

Add the interface for setting sclk and uclk peak frequency.

Signed-off-by: Prike Liang 
---
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 40 ++
 1 file changed, 40 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c 
b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
index 151d78e..c63518a 100644
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
@@ -510,6 +510,45 @@ static int renoir_set_power_profile_mode(struct 
smu_context *smu, long *input, u
 return 0;
 }

+static int renoir_set_peak_clock_by_device(struct smu_context *smu)
+{
+   int ret = 0;
+   uint32_t sclk_freq = 0, uclk_freq = 0;
+
+   ret = smu_get_dpm_freq_range(smu, SMU_SCLK, NULL, &sclk_freq);
+   if (ret)
+   return ret;
+
+   ret = smu_set_soft_freq_range(smu, SMU_SCLK, sclk_freq, sclk_freq);
+   if (ret)
+   return ret;
+
+   ret = smu_get_dpm_freq_range(smu, SMU_UCLK, NULL, &uclk_freq);
+   if (ret)
+   return ret;
+
+   ret = smu_set_soft_freq_range(smu, SMU_UCLK, uclk_freq, uclk_freq);
+   if (ret)
+   return ret;
+
+   return ret;
+}
+
+static int renoir_set_performance_level(struct smu_context *smu, enum 
amd_dpm_forced_level level)
+{
+   int ret = 0;
+
+   switch (level) {
+   case AMD_DPM_FORCED_LEVEL_PROFILE_PEAK:
+   ret = renoir_set_peak_clock_by_device(smu);
+   break;
+   default:
+   ret = -EINVAL;
+   break;
+   }
+
+   return ret;
+}

 static const struct pptable_funcs renoir_ppt_funcs = {
 .get_smu_msg_index = renoir_get_smu_msg_index,
@@ -526,6 +565,7 @@ static const struct pptable_funcs renoir_ppt_funcs = {
 .get_profiling_clk_mask = renoir_get_profiling_clk_mask,
 .force_clk_levels = renoir_force_clk_levels,
 .set_power_profile_mode = renoir_set_power_profile_mode,
+   .set_performance_level = renoir_set_performance_level,
 };

 void renoir_set_ppt_funcs(struct smu_context *smu)
--
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH 1/8] drm/amd/powerplay: bypass dpm_context null pointer check guard for some smu series

2019-09-25 Thread Wang, Kevin(Yang)
comment inline.

From: amd-gfx  on behalf of Liang, Prike 

Sent: Thursday, September 26, 2019 11:50 AM
To: amd-gfx@lists.freedesktop.org 
Cc: Liang, Prike ; Quan, Evan ; Huang, 
Ray ; keneth.f...@amd.com 
Subject: [PATCH 1/8] drm/amd/powerplay: bypass dpm_context null pointer check 
guard for some smu series

For now the APU has no smu_dpm_context structure holding the default/current
dpm tables, so smu_dpm_context is not initialized there; bypass the check to
avoid an APU null pointer issue.

Signed-off-by: Prike Liang 
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 7 ---
 drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h | 1 +
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 1 +
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c 
b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 23293e1..ae4a82e 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -1557,7 +1557,8 @@ static int smu_enable_umd_pstate(void *handle,

 struct smu_context *smu = (struct smu_context*)(handle);
 struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
-   if (!smu->pm_enabled || !smu_dpm_ctx->dpm_context)
+
+   if (!smu->is_apu && (!smu->pm_enabled || !smu_dpm_ctx->dpm_context))
 return -EINVAL;

 if (!(smu_dpm_ctx->dpm_level & profile_mode_mask)) {
@@ -1755,7 +1756,7 @@ enum amd_dpm_forced_level 
smu_get_performance_level(struct smu_context *smu)
 struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
 enum amd_dpm_forced_level level;

-   if (!smu_dpm_ctx->dpm_context)
+   if (!smu->is_apu && !smu_dpm_ctx->dpm_context)
 return -EINVAL;

 mutex_lock(&(smu->mutex));
@@ -1770,7 +1771,7 @@ int smu_force_performance_level(struct smu_context *smu, 
enum amd_dpm_forced_lev
 struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
 int ret = 0;

-   if (!smu_dpm_ctx->dpm_context)
+   if (!smu->is_apu && !smu_dpm_ctx->dpm_context)
 return -EINVAL;

 ret = smu_enable_umd_pstate(smu, &level);
diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h 
b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 5c89844..bd1e621 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -387,6 +387,7 @@ struct smu_context
 uint32_t power_profile_mode;
 uint32_t default_power_profile_mode;
 bool pm_enabled;
+   bool is_apu;

 uint32_t smc_if_version;

diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c 
b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
index 9311b6a..a4e44d3 100644
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
@@ -141,6 +141,7 @@ static int renoir_get_smu_table_index(struct smu_context 
*smc, uint32_t index)
 static int renoir_tables_init(struct smu_context *smu, struct smu_table 
*tables)
 {
 struct smu_table_context *smu_table = &smu->smu_table;
+   smu->is_apu = true;
[Kevin]:
I'd like to move this into the "renoir_set_ppt_funcs" function,
and this member should be given a default value in amdgpu_smu.c.

With that fixed:
Reviewed-by: Kevin Wang 

 SMU_TABLE_INIT(tables, SMU_TABLE_WATERMARKS, sizeof(Watermarks_t),
 PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
--
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH 8/8] drm/amd/powerplay: update the interface for getting dpm full scale clock frequency

2019-09-25 Thread Liang, Prike
Rework get_dpm_uclk_limited into get_dpm_clk_limited so that the full-scale
dpm frequency can be queried for more clock types.

Signed-off-by: Prike Liang 
---
 drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h |  7 ---
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 16 ++-
 drivers/gpu/drm/amd/powerplay/smu_v12_0.c  | 28 ++
 3 files changed, 34 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h 
b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index b118863..eb81edd 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -465,7 +465,8 @@ struct pptable_funcs {
int (*display_disable_memory_clock_switch)(struct smu_context *smu, 
bool disable_memory_clock_switch);
void (*dump_pptable)(struct smu_context *smu);
int (*get_power_limit)(struct smu_context *smu, uint32_t *limit, bool 
asic_default);
-   int (*get_dpm_uclk_limited)(struct smu_context *smu, uint32_t *clock, 
bool max);
+   int (*get_dpm_clk_limited)(struct smu_context *smu, enum smu_clk_type 
clk_type,
+  uint32_t dpm_level, uint32_t *freq);
 };
 
 struct smu_funcs
@@ -774,8 +775,8 @@ struct smu_funcs
((smu)->ppt_funcs->set_performance_level? 
(smu)->ppt_funcs->set_performance_level((smu), (level)) : -EINVAL);
 #define smu_dump_pptable(smu) \
((smu)->ppt_funcs->dump_pptable ? (smu)->ppt_funcs->dump_pptable((smu)) 
: 0)
-#define smu_get_dpm_uclk_limited(smu, clock, max) \
-   ((smu)->ppt_funcs->get_dpm_uclk_limited ? 
(smu)->ppt_funcs->get_dpm_uclk_limited((smu), (clock), (max)) : -EINVAL)
+#define smu_get_dpm_clk_limited(smu, clk_type, dpm_level, freq) \
+   ((smu)->ppt_funcs->get_dpm_clk_limited ? 
(smu)->ppt_funcs->get_dpm_clk_limited((smu), (clk_type), (dpm_level), (freq)) : 
-EINVAL)
 
 #define smu_set_soft_freq_limited_range(smu, clk_type, min, max) \
((smu)->funcs->set_soft_freq_limited_range ? 
(smu)->funcs->set_soft_freq_limited_range((smu), (clk_type), (min), (max)) : 
-EINVAL)
diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c 
b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
index c63518a..556ca68 100644
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
@@ -161,21 +161,17 @@ static int renoir_tables_init(struct smu_context *smu, 
struct smu_table *tables)
  * This interface just for getting uclk ultimate freq and should't introduce
  * other likewise function result in overmuch callback.
  */
-static int renoir_get_dpm_uclk_limited(struct smu_context *smu, uint32_t 
*clock, bool max)
+static int renoir_get_dpm_clk_limited(struct smu_context *smu, enum 
smu_clk_type clk_type,
+   uint32_t dpm_level, uint32_t 
*freq)
 {
+   DpmClocks_t *clk_table = smu->smu_table.clocks_table;
 
-   DpmClocks_t *table = smu->smu_table.clocks_table;
-
-   if (!clock || !table)
+   if (!clk_table || clk_type >= SMU_CLK_COUNT)
return -EINVAL;
 
-   if (max)
-   *clock = table->FClocks[NUM_FCLK_DPM_LEVELS-1].Freq;
-   else
-   *clock = table->FClocks[0].Freq;
+   GET_DPM_CUR_FREQ(clk_table, clk_type, dpm_level, *freq);
 
return 0;
-
 }
 
 static int renoir_print_clk_levels(struct smu_context *smu,
@@ -555,7 +551,7 @@ static const struct pptable_funcs renoir_ppt_funcs = {
.get_smu_table_index = renoir_get_smu_table_index,
.tables_init = renoir_tables_init,
.set_power_state = NULL,
-   .get_dpm_uclk_limited = renoir_get_dpm_uclk_limited,
+   .get_dpm_clk_limited = renoir_get_dpm_clk_limited,
.print_clk_levels = renoir_print_clk_levels,
.get_current_power_state = renoir_get_current_power_state,
.dpm_set_uvd_enable = renoir_dpm_set_uvd_enable,
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v12_0.c 
b/drivers/gpu/drm/amd/powerplay/smu_v12_0.c
index c0c0a60..c78c69d 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v12_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v12_0.c
@@ -323,10 +323,18 @@ static int smu_v12_0_get_dpm_ultimate_freq(struct 
smu_context *smu, enum smu_clk
 uint32_t *min, uint32_t *max)
 {
int ret = 0;
+   uint32_t mclk_mask, soc_mask;
 
mutex_lock(&smu->mutex);
 
if (max) {
+   ret = smu_get_profiling_clk_mask(smu, 
AMD_DPM_FORCED_LEVEL_PROFILE_PEAK,
+NULL,
+&mclk_mask,
+&soc_mask);
+   if (ret)
+   goto failed;
+
switch (clk_type) {
case SMU_GFXCLK:
case SMU_SCLK:
@@ -340,14 +348,20 @@ static int smu_v12_0_get_dpm_ultimate_freq(struct 
smu_context *smu, enum smu_clk
goto failed;
   

[PATCH 2/8] drm/amd/powerplay: implement the interface for setting soft freq range

2019-09-25 Thread Liang, Prike
The APU sets the soft freq range in a different way from the dGPU, so the
function needs to be implemented separately, based on each common SMU part.

Signed-off-by: Prike Liang 
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 25 +---
 drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h |  3 ++
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c  | 30 +++
 drivers/gpu/drm/amd/powerplay/smu_v12_0.c  | 53 ++
 4 files changed, 88 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c 
b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index ae4a82e..25d0014 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -205,8 +205,7 @@ int smu_get_smc_version(struct smu_context *smu, uint32_t 
*if_version, uint32_t
 int smu_set_soft_freq_range(struct smu_context *smu, enum smu_clk_type 
clk_type,
uint32_t min, uint32_t max)
 {
-   int ret = 0, clk_id = 0;
-   uint32_t param;
+   int ret = 0;
 
if (min <= 0 && max <= 0)
return -EINVAL;
@@ -214,27 +213,7 @@ int smu_set_soft_freq_range(struct smu_context *smu, enum 
smu_clk_type clk_type,
if (!smu_clk_dpm_is_enabled(smu, clk_type))
return 0;
 
-   clk_id = smu_clk_get_index(smu, clk_type);
-   if (clk_id < 0)
-   return clk_id;
-
-   if (max > 0) {
+   param = (uint32_t)((clk_id << 16) | (max & 0xffff));
-   ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetSoftMaxByFreq,
- param);
-   if (ret)
-   return ret;
-   }
-
-   if (min > 0) {
+   param = (uint32_t)((clk_id << 16) | (min & 0xffff));
-   ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetSoftMinByFreq,
- param);
-   if (ret)
-   return ret;
-   }
-
-
+   ret = smu_set_soft_freq_limited_range(smu, clk_type, min, max);
return ret;
 }
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h 
b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index bd1e621..b118863 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -547,6 +547,7 @@ struct smu_funcs
int (*baco_reset)(struct smu_context *smu);
int (*mode2_reset)(struct smu_context *smu);
int (*get_dpm_ultimate_freq)(struct smu_context *smu, enum smu_clk_type 
clk_type, uint32_t *min, uint32_t *max);
+   int (*set_soft_freq_limited_range)(struct smu_context *smu, enum 
smu_clk_type clk_type, uint32_t min, uint32_t max);
 };
 
 #define smu_init_microcode(smu) \
@@ -776,6 +777,8 @@ struct smu_funcs
 #define smu_get_dpm_uclk_limited(smu, clock, max) \
((smu)->ppt_funcs->get_dpm_uclk_limited ? 
(smu)->ppt_funcs->get_dpm_uclk_limited((smu), (clock), (max)) : -EINVAL)
 
+#define smu_set_soft_freq_limited_range(smu, clk_type, min, max) \
+   ((smu)->funcs->set_soft_freq_limited_range ? 
(smu)->funcs->set_soft_freq_limited_range((smu), (clk_type), (min), (max)) : 
-EINVAL)
 
 extern int smu_get_atom_data_table(struct smu_context *smu, uint32_t table,
   uint16_t *size, uint8_t *frev, uint8_t *crev,
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c 
b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
index d50e0d0..c9e90d5 100644
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
@@ -1763,6 +1763,35 @@ static int smu_v11_0_get_dpm_ultimate_freq(struct 
smu_context *smu, enum smu_clk
return ret;
 }
 
+static int smu_v11_0_set_soft_freq_limited_range(struct smu_context *smu, enum 
smu_clk_type clk_type,
+   uint32_t min, uint32_t max)
+{
+   int ret = 0, clk_id = 0;
+   uint32_t param;
+
+   clk_id = smu_clk_get_index(smu, clk_type);
+   if (clk_id < 0)
+   return clk_id;
+
+   if (max > 0) {
+   param = (uint32_t)((clk_id << 16) | (max & 0xffff));
+   ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetSoftMaxByFreq,
+ param);
+   if (ret)
+   return ret;
+   }
+
+   if (min > 0) {
+   param = (uint32_t)((clk_id << 16) | (min & 0xffff));
+   ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetSoftMinByFreq,
+ param);
+   if (ret)
+   return ret;
+   }
+
+   return ret;
+}
+
 static const struct smu_funcs smu_v11_0_funcs = {
.init_microcode = smu_v11_0_init_microcode,
.load_microcode = smu_v11_0_load_microcode,
@@ -1814,6 +1843,7 @@ static const struct smu_funcs smu_v11_0_funcs = {
.baco_set_state = smu_v11_0_baco_set_state,
.baco_res

[PATCH 3/8] drm/amd/powerplay: add interface for forcing and unforcing dpm limit value

2019-09-25 Thread Liang, Prike
This is the base function for forcing and unforcing the dpm limit value.

Signed-off-by: Prike Liang 
---
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 62 ++
 1 file changed, 62 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c 
b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
index a4e44d3..8ec3663 100644
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
@@ -305,6 +305,66 @@ static int renoir_dpm_set_uvd_enable(struct smu_context 
*smu, bool enable)
return ret;
 }
 
+static int renoir_force_dpm_limit_value(struct smu_context *smu, bool highest)
+{
+   int ret = 0, i = 0;
+   uint32_t min_freq, max_freq, force_freq;
+   enum smu_clk_type clk_type;
+
+   enum smu_clk_type clks[] = {
+   SMU_GFXCLK,
+   SMU_MCLK,
+   SMU_SOCCLK,
+   };
+
+   for (i = 0; i < ARRAY_SIZE(clks); i++) {
+   clk_type = clks[i];
+   ret = smu_get_dpm_freq_range(smu, clk_type, &min_freq, 
&max_freq);
+   if (ret)
+   return ret;
+
+   force_freq = highest ? max_freq : min_freq;
+   ret = smu_set_soft_freq_range(smu, clk_type, force_freq, 
force_freq);
+   if (ret)
+   return ret;
+   }
+
+   return ret;
+}
+
+static int renoir_unforce_dpm_levels(struct smu_context *smu) {
+
+   int ret = 0, i = 0;
+   uint32_t min_freq, max_freq;
+   enum smu_clk_type clk_type;
+
+   struct clk_feature_map {
+   enum smu_clk_type clk_type;
+   uint32_tfeature;
+   } clk_feature_map[] = {
+   {SMU_GFXCLK, SMU_FEATURE_DPM_GFXCLK_BIT},
+   {SMU_MCLK,   SMU_FEATURE_DPM_UCLK_BIT},
+   {SMU_SOCCLK, SMU_FEATURE_DPM_SOCCLK_BIT},
+   };
+
+   for (i = 0; i < ARRAY_SIZE(clk_feature_map); i++) {
+   if (!smu_feature_is_enabled(smu, clk_feature_map[i].feature))
+   continue;
+
+   clk_type = clk_feature_map[i].clk_type;
+
+   ret = smu_get_dpm_freq_range(smu, clk_type, &min_freq, 
&max_freq);
+   if (ret)
+   return ret;
+
+   ret = smu_set_soft_freq_range(smu, clk_type, min_freq, 
max_freq);
+   if (ret)
+   return ret;
+   }
+
+   return ret;
+}
+
 static const struct pptable_funcs renoir_ppt_funcs = {
.get_smu_msg_index = renoir_get_smu_msg_index,
.get_smu_table_index = renoir_get_smu_table_index,
@@ -314,6 +374,8 @@ static const struct pptable_funcs renoir_ppt_funcs = {
.print_clk_levels = renoir_print_clk_levels,
.get_current_power_state = renoir_get_current_power_state,
.dpm_set_uvd_enable = renoir_dpm_set_uvd_enable,
+   .force_dpm_limit_value = renoir_force_dpm_limit_value,
+   .unforce_dpm_levels = renoir_unforce_dpm_levels,
 };
 
 void renoir_set_ppt_funcs(struct smu_context *smu)
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH 6/8] drm/amd/powerplay: implement interface set_power_profile_mode()

2019-09-25 Thread Liang, Prike
Add set_power_profile_mode() to set the power profile mode for the non-manual
dpm level case.

Signed-off-by: Prike Liang 
---
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c 
b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
index 4f2b750..151d78e 100644
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
@@ -482,6 +482,35 @@ static int renoir_force_clk_levels(struct smu_context *smu,
return ret;
 }
 
+static int renoir_set_power_profile_mode(struct smu_context *smu, long *input, 
uint32_t size)
+{
+   int workload_type, ret;
+
+   smu->power_profile_mode = input[size];
+
+   if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
+   pr_err("Invalid power profile mode %d\n", 
smu->power_profile_mode);
+   return -EINVAL;
+   }
+
+   /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+   workload_type = smu_workload_get_type(smu, smu->power_profile_mode);
+   if (workload_type < 0) {
+   pr_err("Unsupported power profile mode %d on 
RENOIR\n",smu->power_profile_mode);
+   return -EINVAL;
+   }
+
+   ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+   1 << workload_type);
+   if (ret) {
+   pr_err("Fail to set workload type %d\n", workload_type);
+   return ret;
+   }
+
+   return 0;
+}
+
+
 static const struct pptable_funcs renoir_ppt_funcs = {
.get_smu_msg_index = renoir_get_smu_msg_index,
.get_smu_table_index = renoir_get_smu_table_index,
@@ -496,6 +525,7 @@ static const struct pptable_funcs renoir_ppt_funcs = {
.get_workload_type = renoir_get_workload_type,
.get_profiling_clk_mask = renoir_get_profiling_clk_mask,
.force_clk_levels = renoir_force_clk_levels,
+   .set_power_profile_mode = renoir_set_power_profile_mode,
 };
 
 void renoir_set_ppt_funcs(struct smu_context *smu)
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH 5/8] drm/amd/powerplay: add the interfaces for getting and setting profiling dpm clock level

2019-09-25 Thread Liang, Prike
Implement get_profiling_clk_mask and force_clk_levels for forcing the dpm
clock to a limit value.

Signed-off-by: Prike Liang 
---
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 91 ++
 1 file changed, 91 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c 
b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
index dc945b8..4f2b750 100644
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
@@ -393,6 +393,95 @@ static int renoir_get_workload_type(struct smu_context 
*smu, uint32_t profile)
return pplib_workload;
 }
 
+static int renoir_get_profiling_clk_mask(struct smu_context *smu,
+enum amd_dpm_forced_level level,
+uint32_t *sclk_mask,
+uint32_t *mclk_mask,
+uint32_t *soc_mask)
+{
+
+   if (level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK) {
+   if (sclk_mask)
+   *sclk_mask = 0;
+   } else if (level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK) {
+   if (mclk_mask)
+   *mclk_mask = 0;
+   } else if (level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK) {
+   if(sclk_mask)
+   /* The sclk as gfxclk and has three level about 
max/min/current */
+   *sclk_mask = 3 - 1;
+
+   if(mclk_mask)
+   *mclk_mask = NUM_MEMCLK_DPM_LEVELS - 1;
+
+   if(soc_mask)
+   *soc_mask = NUM_SOCCLK_DPM_LEVELS - 1;
+   }
+
+   return 0;
+}
+
+static int renoir_force_clk_levels(struct smu_context *smu,
+  enum smu_clk_type clk_type, uint32_t mask)
+{
+
+   int ret = 0 ;
+   uint32_t soft_min_level = 0, soft_max_level = 0, min_freq = 0, max_freq 
= 0;
+   DpmClocks_t *clk_table = smu->smu_table.clocks_table;
+
+   soft_min_level = mask ? (ffs(mask) - 1) : 0;
+   soft_max_level = mask ? (fls(mask) - 1) : 0;
+
+   switch (clk_type) {
+   case SMU_GFXCLK:
+   case SMU_SCLK:
+   if (soft_min_level > 2 || soft_max_level > 2) {
+   pr_info("Currently sclk only support 3 levels on 
APU\n");
+   return -EINVAL;
+   }
+
+   ret = smu_get_dpm_freq_range(smu, SMU_GFXCLK, &min_freq, 
&max_freq);
+   if (ret)
+   return ret;
+   ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetSoftMaxGfxClk,
+   soft_max_level == 0 ? min_freq :
+   soft_max_level == 1 ? 
RENOIR_UMD_PSTATE_GFXCLK : max_freq);
+   if (ret)
+   return ret;
+   ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetHardMinGfxClk,
+   soft_min_level == 2 ? max_freq :
+   soft_min_level == 1 ? 
RENOIR_UMD_PSTATE_GFXCLK : min_freq);
+   if (ret)
+   return ret;
+   break;
+   case SMU_SOCCLK:
+   GET_DPM_CUR_FREQ(clk_table, clk_type, soft_min_level, min_freq);
+   GET_DPM_CUR_FREQ(clk_table, clk_type, soft_max_level, max_freq);
+   ret = smu_send_smc_msg_with_param(smu, 
SMU_MSG_SetSoftMaxSocclkByFreq, max_freq);
+   if (ret)
+   return ret;
+   ret = smu_send_smc_msg_with_param(smu, 
SMU_MSG_SetHardMinSocclkByFreq, min_freq);
+   if (ret)
+   return ret;
+   break;
+   case SMU_MCLK:
+   case SMU_FCLK:
+   GET_DPM_CUR_FREQ(clk_table, clk_type, soft_min_level, min_freq);
+   GET_DPM_CUR_FREQ(clk_table, clk_type, soft_max_level, max_freq);
+   ret = smu_send_smc_msg_with_param(smu, 
SMU_MSG_SetSoftMaxFclkByFreq, max_freq);
+   if (ret)
+   return ret;
+   ret = smu_send_smc_msg_with_param(smu, 
SMU_MSG_SetHardMinFclkByFreq, min_freq);
+   if (ret)
+   return ret;
+   break;
+   default:
+   break;
+   }
+
+   return ret;
+}
+
 static const struct pptable_funcs renoir_ppt_funcs = {
.get_smu_msg_index = renoir_get_smu_msg_index,
.get_smu_table_index = renoir_get_smu_table_index,
@@ -405,6 +494,8 @@ static const struct pptable_funcs renoir_ppt_funcs = {
.force_dpm_limit_value = renoir_force_dpm_limit_value,
.unforce_dpm_levels = renoir_unforce_dpm_levels,
.get_workload_type = renoir_get_workload_type,
+   .get_profiling_clk_mask = renoir_get_profiling_clk_mask,
+   .force_clk_levels = renoir_force_clk_levels,
 };
 
 void renoir_set_ppt_funcs(struct smu_context *smu)
-- 
2.7.4

___
amd-gfx mailing

[PATCH 7/8] drm/amd/powerplay: implement the interface for setting sclk/uclk profile_peak level

2019-09-25 Thread Liang, Prike
Add the interface for setting sclk and uclk peak frequency.

Signed-off-by: Prike Liang 
---
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 40 ++
 1 file changed, 40 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c 
b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
index 151d78e..c63518a 100644
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
@@ -510,6 +510,45 @@ static int renoir_set_power_profile_mode(struct 
smu_context *smu, long *input, u
return 0;
 }
 
+static int renoir_set_peak_clock_by_device(struct smu_context *smu)
+{
+   int ret = 0;
+   uint32_t sclk_freq = 0, uclk_freq = 0;
+
+   ret = smu_get_dpm_freq_range(smu, SMU_SCLK, NULL, &sclk_freq);
+   if (ret)
+   return ret;
+
+   ret = smu_set_soft_freq_range(smu, SMU_SCLK, sclk_freq, sclk_freq);
+   if (ret)
+   return ret;
+
+   ret = smu_get_dpm_freq_range(smu, SMU_UCLK, NULL, &uclk_freq);
+   if (ret)
+   return ret;
+
+   ret = smu_set_soft_freq_range(smu, SMU_UCLK, uclk_freq, uclk_freq);
+   if (ret)
+   return ret;
+
+   return ret;
+}
+
+static int renoir_set_performance_level(struct smu_context *smu, enum 
amd_dpm_forced_level level)
+{
+   int ret = 0;
+
+   switch (level) {
+   case AMD_DPM_FORCED_LEVEL_PROFILE_PEAK:
+   ret = renoir_set_peak_clock_by_device(smu);
+   break;
+   default:
+   ret = -EINVAL;
+   break;
+   }
+
+   return ret;
+}
 
 static const struct pptable_funcs renoir_ppt_funcs = {
.get_smu_msg_index = renoir_get_smu_msg_index,
@@ -526,6 +565,7 @@ static const struct pptable_funcs renoir_ppt_funcs = {
.get_profiling_clk_mask = renoir_get_profiling_clk_mask,
.force_clk_levels = renoir_force_clk_levels,
.set_power_profile_mode = renoir_set_power_profile_mode,
+   .set_performance_level = renoir_set_performance_level,
 };
 
 void renoir_set_ppt_funcs(struct smu_context *smu)
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH 1/8] drm/amd/powerplay: bypass dpm_context null pointer check guard for some smu series

2019-09-25 Thread Liang, Prike
For now the APU has no smu_dpm_context structure holding the default/current
dpm tables, so smu_dpm_context is not initialized there; bypass the check to
avoid an APU null pointer issue.

Signed-off-by: Prike Liang 
---
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 7 ---
 drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h | 1 +
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 1 +
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c 
b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
index 23293e1..ae4a82e 100644
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
@@ -1557,7 +1557,8 @@ static int smu_enable_umd_pstate(void *handle,
 
struct smu_context *smu = (struct smu_context*)(handle);
struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
-   if (!smu->pm_enabled || !smu_dpm_ctx->dpm_context)
+
+   if (!smu->is_apu && (!smu->pm_enabled || !smu_dpm_ctx->dpm_context))
return -EINVAL;
 
if (!(smu_dpm_ctx->dpm_level & profile_mode_mask)) {
@@ -1755,7 +1756,7 @@ enum amd_dpm_forced_level 
smu_get_performance_level(struct smu_context *smu)
struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
enum amd_dpm_forced_level level;
 
-   if (!smu_dpm_ctx->dpm_context)
+   if (!smu->is_apu && !smu_dpm_ctx->dpm_context)
return -EINVAL;
 
mutex_lock(&(smu->mutex));
@@ -1770,7 +1771,7 @@ int smu_force_performance_level(struct smu_context *smu, 
enum amd_dpm_forced_lev
struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
int ret = 0;
 
-   if (!smu_dpm_ctx->dpm_context)
+   if (!smu->is_apu && !smu_dpm_ctx->dpm_context)
return -EINVAL;
 
ret = smu_enable_umd_pstate(smu, &level);
diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h 
b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
index 5c89844..bd1e621 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
@@ -387,6 +387,7 @@ struct smu_context
uint32_t power_profile_mode;
uint32_t default_power_profile_mode;
bool pm_enabled;
+   bool is_apu;
 
uint32_t smc_if_version;
 
diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c 
b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
index 9311b6a..a4e44d3 100644
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
@@ -141,6 +141,7 @@ static int renoir_get_smu_table_index(struct smu_context 
*smc, uint32_t index)
 static int renoir_tables_init(struct smu_context *smu, struct smu_table 
*tables)
 {
struct smu_table_context *smu_table = &smu->smu_table;
+   smu->is_apu = true;
 
SMU_TABLE_INIT(tables, SMU_TABLE_WATERMARKS, sizeof(Watermarks_t),
PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH 4/8] drm/amd/powerplay: add interface for getting workload type

2019-09-25 Thread Liang, Prike
The workload type is derived from the power profile mode input.

Signed-off-by: Prike Liang 
Reviewed-by: Kevin Wang 
Reviewed-by: Evan Quan 
---
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c | 29 +
 1 file changed, 29 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c 
b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
index 8ec3663..dc945b8 100644
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
@@ -365,6 +365,34 @@ static int renoir_unforce_dpm_levels(struct smu_context 
*smu) {
return ret;
 }
 
+static int renoir_get_workload_type(struct smu_context *smu, uint32_t profile)
+{
+
+   uint32_t  pplib_workload = 0;
+
+   switch (profile) {
+   case PP_SMC_POWER_PROFILE_FULLSCREEN3D:
+   pplib_workload = WORKLOAD_PPLIB_FULL_SCREEN_3D_BIT;
+   break;
+   case PP_SMC_POWER_PROFILE_CUSTOM:
+   pplib_workload = WORKLOAD_PPLIB_COUNT;
+   break;
+   case PP_SMC_POWER_PROFILE_VIDEO:
+   pplib_workload = WORKLOAD_PPLIB_VIDEO_BIT;
+   break;
+   case PP_SMC_POWER_PROFILE_VR:
+   pplib_workload = WORKLOAD_PPLIB_VR_BIT;
+   break;
+   case PP_SMC_POWER_PROFILE_COMPUTE:
+   pplib_workload = WORKLOAD_PPLIB_COMPUTE_BIT;
+   break;
+   default:
+   return -EINVAL;
+   }
+
+   return pplib_workload;
+}
+
 static const struct pptable_funcs renoir_ppt_funcs = {
.get_smu_msg_index = renoir_get_smu_msg_index,
.get_smu_table_index = renoir_get_smu_table_index,
@@ -376,6 +404,7 @@ static const struct pptable_funcs renoir_ppt_funcs = {
.dpm_set_uvd_enable = renoir_dpm_set_uvd_enable,
.force_dpm_limit_value = renoir_force_dpm_limit_value,
.unforce_dpm_levels = renoir_unforce_dpm_levels,
+   .get_workload_type = renoir_get_workload_type,
 };
 
 void renoir_set_ppt_funcs(struct smu_context *smu)
-- 
2.7.4


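For reference, the profile-to-workload mapping that renoir_get_workload_type() adds in the patch above can be sketched as a standalone, compilable snippet. The enum values below are illustrative placeholders, not the real PP_SMC_POWER_PROFILE_* / WORKLOAD_PPLIB_* definitions from the SMU headers:

```c
/* Standalone sketch of the profile -> PPLib workload mapping.
 * Enum values are placeholders chosen only for illustration. */
#include <assert.h>
#include <errno.h>

enum pp_profile {
	PP_PROFILE_FULLSCREEN3D,
	PP_PROFILE_VIDEO,
	PP_PROFILE_VR,
	PP_PROFILE_COMPUTE,
	PP_PROFILE_CUSTOM,
	PP_PROFILE_UNKNOWN,
};

enum pplib_workload {
	WORKLOAD_FULL_SCREEN_3D_BIT = 2,
	WORKLOAD_VIDEO_BIT = 3,
	WORKLOAD_VR_BIT = 4,
	WORKLOAD_COMPUTE_BIT = 5,
	WORKLOAD_COUNT = 6,
};

/* Mirrors the shape of renoir_get_workload_type(): return the PPLib
 * workload index for a profile, or -EINVAL when unsupported. */
int get_workload_type(enum pp_profile profile)
{
	switch (profile) {
	case PP_PROFILE_FULLSCREEN3D:
		return WORKLOAD_FULL_SCREEN_3D_BIT;
	case PP_PROFILE_CUSTOM:
		return WORKLOAD_COUNT;
	case PP_PROFILE_VIDEO:
		return WORKLOAD_VIDEO_BIT;
	case PP_PROFILE_VR:
		return WORKLOAD_VR_BIT;
	case PP_PROFILE_COMPUTE:
		return WORKLOAD_COMPUTE_BIT;
	default:
		return -EINVAL;
	}
}
```

Callers are expected to treat any negative return as "no valid workload", which is why the function returns int rather than the unsigned workload type directly.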
Re: [PATCH] drm/amdkfd: Fix race in gfx10 context restore handler

2019-09-25 Thread Zhao, Yong
Reviewed-by: Yong Zhao 

From: amd-gfx  on behalf of Cornwall, 
Jay 
Sent: Wednesday, September 25, 2019 6:06 PM
To: amd-gfx@lists.freedesktop.org 
Cc: Cornwall, Jay 
Subject: [PATCH] drm/amdkfd: Fix race in gfx10 context restore handler

Missing synchronization with VGPR restore leads to intermittent
VGPR trashing in the user shader.

Signed-off-by: Jay Cornwall 
---
 drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h | 139 +++--
 .../gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm |   1 +
 2 files changed, 71 insertions(+), 69 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h 
b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
index 901fe35..d3400da 100644
--- a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
+++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
@@ -905,7 +905,7 @@ static const uint32_t cwsr_trap_gfx10_hex[] = {
 0x7a5d, 0x807c817c,
 0x807aff7a, 0x0080,
 0xbf0a717c, 0xbf85fff8,
-   0xbf820141, 0xbef4037e,
+   0xbf820142, 0xbef4037e,
 0x8775ff7f, 0x,
 0x8875ff75, 0x0004,
 0xbef60380, 0xbef703ff,
@@ -967,7 +967,7 @@ static const uint32_t cwsr_trap_gfx10_hex[] = {
 0x725d, 0xe0304080,
 0x725d0100, 0xe0304100,
 0x725d0200, 0xe0304180,
-   0x725d0300, 0xbf820031,
+   0x725d0300, 0xbf820032,
 0xbef603ff, 0x0100,
 0xbef20378, 0x8078ff78,
 0x0400, 0xbefc0384,
@@ -992,83 +992,84 @@ static const uint32_t cwsr_trap_gfx10_hex[] = {
 0x725d, 0xe0304100,
 0x725d0100, 0xe0304200,
 0x725d0200, 0xe0304300,
-   0x725d0300, 0xb9782a05,
-   0x80788178, 0x907c9973,
-   0x877c817c, 0xbf06817c,
-   0xbf850002, 0x8f788978,
-   0xbf820001, 0x8f788a78,
-   0xb9721e06, 0x8f728a72,
-   0x80787278, 0x8078ff78,
-   0x0200, 0x80f8ff78,
-   0x0050, 0xbef603ff,
-   0x0100, 0xbefc03ff,
-   0x006c, 0x80f89078,
-   0xf429003a, 0xf000,
-   0xbf8cc07f, 0x80fc847c,
-   0xbf80, 0xbe803100,
-   0xbe823102, 0x80f8a078,
-   0xf42d003a, 0xf000,
-   0xbf8cc07f, 0x80fc887c,
-   0xbf80, 0xbe803100,
-   0xbe823102, 0xbe843104,
-   0xbe863106, 0x80f8c078,
-   0xf431003a, 0xf000,
-   0xbf8cc07f, 0x80fc907c,
-   0xbf80, 0xbe803100,
-   0xbe823102, 0xbe843104,
-   0xbe863106, 0xbe883108,
-   0xbe8a310a, 0xbe8c310c,
-   0xbe8e310e, 0xbf06807c,
-   0xbf84fff0, 0xb9782a05,
-   0x80788178, 0x907c9973,
-   0x877c817c, 0xbf06817c,
-   0xbf850002, 0x8f788978,
-   0xbf820001, 0x8f788a78,
-   0xb9721e06, 0x8f728a72,
-   0x80787278, 0x8078ff78,
-   0x0200, 0xbef603ff,
-   0x0100, 0xf4211bfa,
+   0x725d0300, 0xbf8c3f70,
+   0xb9782a05, 0x80788178,
+   0x907c9973, 0x877c817c,
+   0xbf06817c, 0xbf850002,
+   0x8f788978, 0xbf820001,
+   0x8f788a78, 0xb9721e06,
+   0x8f728a72, 0x80787278,
+   0x8078ff78, 0x0200,
+   0x80f8ff78, 0x0050,
+   0xbef603ff, 0x0100,
+   0xbefc03ff, 0x006c,
+   0x80f89078, 0xf429003a,
+   0xf000, 0xbf8cc07f,
+   0x80fc847c, 0xbf80,
+   0xbe803100, 0xbe823102,
+   0x80f8a078, 0xf42d003a,
+   0xf000, 0xbf8cc07f,
+   0x80fc887c, 0xbf80,
+   0xbe803100, 0xbe823102,
+   0xbe843104, 0xbe863106,
+   0x80f8c078, 0xf431003a,
+   0xf000, 0xbf8cc07f,
+   0x80fc907c, 0xbf80,
+   0xbe803100, 0xbe823102,
+   0xbe843104, 0xbe863106,
+   0xbe883108, 0xbe8a310a,
+   0xbe8c310c, 0xbe8e310e,
+   0xbf06807c, 0xbf84fff0,
+   0xb9782a05, 0x80788178,
+   0x907c9973, 0x877c817c,
+   0xbf06817c, 0xbf850002,
+   0x8f788978, 0xbf820001,
+   0x8f788a78, 0xb9721e06,
+   0x8f728a72, 0x80787278,
+   0x8078ff78, 0x0200,
+   0xbef603ff, 0x0100,
+   0xf4211bfa, 0xf000,
+   0x80788478, 0xf4211b3a,
 0xf000, 0x80788478,
-   0xf4211b3a, 0xf000,
-   0x80788478, 0xf4211b7a,
+   0xf4211b7a, 0xf000,
+   0x80788478, 0xf4211eba,
 0xf000, 0x80788478,
-   0xf4211eba, 0xf000,
-   0x80788478, 0xf4211efa,
+   0xf4211efa, 0xf000,
+   0x80788478, 0xf4211c3a,
 0xf000, 0x80788478,
-   0xf4211c3a, 0xf000,
-   0x80788478, 0xf4211c7a,
+   0xf4211c7a, 0xf000,
+   0x80788478, 0xf4211e7a,
 0xf000, 0x80788478,
-   0xf4211e7a, 0xf000,
-   0x80788478, 0xf4211cfa,
+   0xf4211cfa, 0xf000,
+   0x80788478, 0xf4211bba,
 0xf000, 0x80788478,
+   0xbf8cc07f, 0xb9eef814,
 0xf4211bba, 0xf000,
 0x80788478, 0xbf8cc07f,
-   0xb9eef814, 0xf4211bba,
-   0xf000, 0x80788478,
-   0xbf8cc07f, 0xb9eef815,
-   0xbef2036d, 0x876dff72,
-   0x, 0xbefc036f,
-   0xbefe037a, 0xbef

Re: [PATCH] drm/amdgpu: Add SMUIO values for other I2C controller v2

2019-09-25 Thread Grodzovsky, Andrey
AFAIK the functionality at the I2C driver level should be identical, so 
something like a register instance offset given at init should be enough. 
Obviously this also requires modified register accessors which include the 
offset.
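A minimal sketch of that idea, with hypothetical names (these are not the amdgpu accessors): each engine instance carries a register offset assigned at init, and every access goes through accessors that apply it, so the same controller code drives CKSVII2C and CKSVII2C1:

```c
/* Illustrative sketch only: per-instance register base offset plus
 * offset-aware accessors.  Struct and function names are invented;
 * real driver code would use RREG32/WREG32 on the MMIO aperture. */
#include <assert.h>
#include <stdint.h>

struct i2c_engine {
	uint32_t reg_offset;	/* 0 for instance 0, e.g. 0x80 for instance 1 */
};

static uint32_t fake_mmio[0x200];	/* stand-in for the register aperture */

void engine_wreg(struct i2c_engine *e, uint32_t reg, uint32_t val)
{
	fake_mmio[e->reg_offset + reg] = val;
}

uint32_t engine_rreg(struct i2c_engine *e, uint32_t reg)
{
	return fake_mmio[e->reg_offset + reg];
}
```

With this shape, selecting the second controller is just initializing reg_offset differently; the transfer logic itself stays shared.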

Andrey


From: Russell, Kent 
Sent: 25 September 2019 17:21:50
To: Grodzovsky, Andrey; amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/amdgpu: Add SMUIO values for other I2C controller v2

That's the fun part 😉 I'm working on adding checks for which chip is requested for 
shared functionality, or separate functions for separated functionality.

 Kent

-Original Message-
From: Grodzovsky, Andrey 
Sent: Wednesday, September 25, 2019 2:11 PM
To: Russell, Kent ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Add SMUIO values for other I2C controller v2

Reviewed-by: Andrey Grodzovsky 

How are you planning to use them given the hard coded use of CSKVII2C (instance 
zero I2C engine) in I2C controller code ?

Andrey

On 9/25/19 5:03 PM, Russell, Kent wrote:
> These are the offsets for CKSVII2C1, and match up with the values
> already added for CKSVII2C
>
> v2: Don't remove some of the CSKVII2C values
>
> Change-Id: I5ed88bb31253ccaf4ed4ae6f4959040c0da2f6d0
> Signed-off-by: Kent Russell 
> ---
>   .../asic_reg/smuio/smuio_11_0_0_offset.h  |  92 +
>   .../asic_reg/smuio/smuio_11_0_0_sh_mask.h | 176 ++
>   2 files changed, 268 insertions(+)
>
> diff --git
> a/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
> b/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
> index d3876052562b..687d6843c258 100644
> --- a/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
> +++ b/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
> @@ -121,6 +121,98 @@
>   #define mmCKSVII2C_IC_COMP_VERSION_BASE_IDX 
>0
>   #define mmCKSVII2C_IC_COMP_TYPE 
>0x006d
>   #define mmCKSVII2C_IC_COMP_TYPE_BASE_IDX
>0
> +#define mmCKSVII2C1_IC_CON   
>   0x0080
> +#define mmCKSVII2C1_IC_CON_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_TAR   
>   0x0081
> +#define mmCKSVII2C1_IC_TAR_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_SAR   
>   0x0082
> +#define mmCKSVII2C1_IC_SAR_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_HS_MADDR  
>   0x0083
> +#define mmCKSVII2C1_IC_HS_MADDR_BASE_IDX 
>   0
> +#define mmCKSVII2C1_IC_DATA_CMD  
>   0x0084
> +#define mmCKSVII2C1_IC_DATA_CMD_BASE_IDX 
>   0
> +#define mmCKSVII2C1_IC_SS_SCL_HCNT   
>   0x0085
> +#define mmCKSVII2C1_IC_SS_SCL_HCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_SS_SCL_LCNT   
>   0x0086
> +#define mmCKSVII2C1_IC_SS_SCL_LCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_FS_SCL_HCNT   
>   0x0087
> +#define mmCKSVII2C1_IC_FS_SCL_HCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_FS_SCL_LCNT   
>   0x0088
> +#define mmCKSVII2C1_IC_FS_SCL_LCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_HS_SCL_HCNT   
>   0x0089
> +#define mmCKSVII2C1_IC_HS_SCL_HCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_HS_SCL_LCNT   
>   0x008a
> +#define mmCKSVII2C1_IC_HS_SCL_LCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_INTR_STAT 
>   0x008b
> +#define mmCKSVII2C1_IC_INTR_STAT_BASE_IDX
>   0
> +#define mmCKSVII2C1_IC_INTR_MASK 
>

[PATCH] drm/amdkfd: Fix race in gfx10 context restore handler

2019-09-25 Thread Cornwall, Jay
Missing synchronization with VGPR restore leads to intermittent
VGPR trashing in the user shader.

Signed-off-by: Jay Cornwall 
---
 drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h | 139 +++--
 .../gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm |   1 +
 2 files changed, 71 insertions(+), 69 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h 
b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
index 901fe35..d3400da 100644
--- a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
+++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
@@ -905,7 +905,7 @@ static const uint32_t cwsr_trap_gfx10_hex[] = {
0x7a5d, 0x807c817c,
0x807aff7a, 0x0080,
0xbf0a717c, 0xbf85fff8,
-   0xbf820141, 0xbef4037e,
+   0xbf820142, 0xbef4037e,
0x8775ff7f, 0x,
0x8875ff75, 0x0004,
0xbef60380, 0xbef703ff,
@@ -967,7 +967,7 @@ static const uint32_t cwsr_trap_gfx10_hex[] = {
0x725d, 0xe0304080,
0x725d0100, 0xe0304100,
0x725d0200, 0xe0304180,
-   0x725d0300, 0xbf820031,
+   0x725d0300, 0xbf820032,
0xbef603ff, 0x0100,
0xbef20378, 0x8078ff78,
0x0400, 0xbefc0384,
@@ -992,83 +992,84 @@ static const uint32_t cwsr_trap_gfx10_hex[] = {
0x725d, 0xe0304100,
0x725d0100, 0xe0304200,
0x725d0200, 0xe0304300,
-   0x725d0300, 0xb9782a05,
-   0x80788178, 0x907c9973,
-   0x877c817c, 0xbf06817c,
-   0xbf850002, 0x8f788978,
-   0xbf820001, 0x8f788a78,
-   0xb9721e06, 0x8f728a72,
-   0x80787278, 0x8078ff78,
-   0x0200, 0x80f8ff78,
-   0x0050, 0xbef603ff,
-   0x0100, 0xbefc03ff,
-   0x006c, 0x80f89078,
-   0xf429003a, 0xf000,
-   0xbf8cc07f, 0x80fc847c,
-   0xbf80, 0xbe803100,
-   0xbe823102, 0x80f8a078,
-   0xf42d003a, 0xf000,
-   0xbf8cc07f, 0x80fc887c,
-   0xbf80, 0xbe803100,
-   0xbe823102, 0xbe843104,
-   0xbe863106, 0x80f8c078,
-   0xf431003a, 0xf000,
-   0xbf8cc07f, 0x80fc907c,
-   0xbf80, 0xbe803100,
-   0xbe823102, 0xbe843104,
-   0xbe863106, 0xbe883108,
-   0xbe8a310a, 0xbe8c310c,
-   0xbe8e310e, 0xbf06807c,
-   0xbf84fff0, 0xb9782a05,
-   0x80788178, 0x907c9973,
-   0x877c817c, 0xbf06817c,
-   0xbf850002, 0x8f788978,
-   0xbf820001, 0x8f788a78,
-   0xb9721e06, 0x8f728a72,
-   0x80787278, 0x8078ff78,
-   0x0200, 0xbef603ff,
-   0x0100, 0xf4211bfa,
+   0x725d0300, 0xbf8c3f70,
+   0xb9782a05, 0x80788178,
+   0x907c9973, 0x877c817c,
+   0xbf06817c, 0xbf850002,
+   0x8f788978, 0xbf820001,
+   0x8f788a78, 0xb9721e06,
+   0x8f728a72, 0x80787278,
+   0x8078ff78, 0x0200,
+   0x80f8ff78, 0x0050,
+   0xbef603ff, 0x0100,
+   0xbefc03ff, 0x006c,
+   0x80f89078, 0xf429003a,
+   0xf000, 0xbf8cc07f,
+   0x80fc847c, 0xbf80,
+   0xbe803100, 0xbe823102,
+   0x80f8a078, 0xf42d003a,
+   0xf000, 0xbf8cc07f,
+   0x80fc887c, 0xbf80,
+   0xbe803100, 0xbe823102,
+   0xbe843104, 0xbe863106,
+   0x80f8c078, 0xf431003a,
+   0xf000, 0xbf8cc07f,
+   0x80fc907c, 0xbf80,
+   0xbe803100, 0xbe823102,
+   0xbe843104, 0xbe863106,
+   0xbe883108, 0xbe8a310a,
+   0xbe8c310c, 0xbe8e310e,
+   0xbf06807c, 0xbf84fff0,
+   0xb9782a05, 0x80788178,
+   0x907c9973, 0x877c817c,
+   0xbf06817c, 0xbf850002,
+   0x8f788978, 0xbf820001,
+   0x8f788a78, 0xb9721e06,
+   0x8f728a72, 0x80787278,
+   0x8078ff78, 0x0200,
+   0xbef603ff, 0x0100,
+   0xf4211bfa, 0xf000,
+   0x80788478, 0xf4211b3a,
0xf000, 0x80788478,
-   0xf4211b3a, 0xf000,
-   0x80788478, 0xf4211b7a,
+   0xf4211b7a, 0xf000,
+   0x80788478, 0xf4211eba,
0xf000, 0x80788478,
-   0xf4211eba, 0xf000,
-   0x80788478, 0xf4211efa,
+   0xf4211efa, 0xf000,
+   0x80788478, 0xf4211c3a,
0xf000, 0x80788478,
-   0xf4211c3a, 0xf000,
-   0x80788478, 0xf4211c7a,
+   0xf4211c7a, 0xf000,
+   0x80788478, 0xf4211e7a,
0xf000, 0x80788478,
-   0xf4211e7a, 0xf000,
-   0x80788478, 0xf4211cfa,
+   0xf4211cfa, 0xf000,
+   0x80788478, 0xf4211bba,
0xf000, 0x80788478,
+   0xbf8cc07f, 0xb9eef814,
0xf4211bba, 0xf000,
0x80788478, 0xbf8cc07f,
-   0xb9eef814, 0xf4211bba,
-   0xf000, 0x80788478,
-   0xbf8cc07f, 0xb9eef815,
-   0xbef2036d, 0x876dff72,
-   0x, 0xbefc036f,
-   0xbefe037a, 0xbeff037b,
-   0x876f71ff, 0x03ff,
-   0xb9ef4803, 0xb9f9f816,
-   0x876f71ff, 0xf800,
-   0x906f8b6f, 0xb9efa2c3,
-   0xb9f3f801, 0x876fff72,
-   0xfc00, 0x906f9a6f,
-   0x8f6f906f, 0xbef30380,
+   0xb9eef815, 0xbef2036d,
+   0x876dff72, 0x

[PATCH v4] drm/amdgpu/dm: Resume short HPD IRQs before resuming MST topology

2019-09-25 Thread Lyude Paul
Since we're going to be reprobing the entire topology state on resume
now using sideband transactions, we need to ensure that we actually have
short HPD irqs enabled before calling drm_dp_mst_topology_mgr_resume().
So, do that.

Changes since v3:
* Fix typo in comments

Cc: Juston Li 
Cc: Imre Deak 
Cc: Ville Syrjälä 
Cc: Harry Wentland 
Cc: Daniel Vetter 
Signed-off-by: Lyude Paul 
Acked-by: Alex Deucher 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 18927758a010..bce9a298bc45 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1186,15 +1186,15 @@ static int dm_resume(void *handle)
/* program HPD filter */
dc_resume(dm->dc);
 
-   /* On resume we need to  rewrite the MSTM control bits to enamble MST*/
-   s3_handle_mst(ddev, false);
-
/*
 * early enable HPD Rx IRQ, should be done before set mode as short
 * pulse interrupts are used for MST
 */
amdgpu_dm_irq_resume_early(adev);
 
+   /* On resume we need to rewrite the MSTM control bits to enable MST*/
+   s3_handle_mst(ddev, false);
+
/* Do detection*/
drm_connector_list_iter_begin(ddev, &iter);
drm_for_each_connector_iter(connector, &iter) {
-- 
2.21.0


[pull] amdgpu drm-fixes-5.4

2019-09-25 Thread Alex Deucher
Hi Dave, Daniel,

More fixes for 5.4.  Nothing major.

The following changes since commit e16a7cbced7110866dcde84e504909ea85099bbd:

  drm/amdgpu: flag navi12 and 14 as experimental for 5.4 (2019-09-18 08:29:30 
-0500)

are available in the Git repository at:

  git://people.freedesktop.org/~agd5f/linux tags/drm-fixes-5.4-2019-09-25

for you to fetch changes up to 104c307147ad379617472dd91a5bcb368d72bd6d:

  drm/amd/display: prevent memory leak (2019-09-25 14:58:38 -0500)


drm-fixes-5.4-2019-09-25:

amdgpu:
- Fix a 64 bit divide
- Prevent a memory leak in a failure case in dc
- Load proper gfx firmware on navi14 variants


Alex Deucher (2):
  drm/amdgpu/display: fix 64 bit divide
  drm/amdgpu/display: include slab.h in dcn21_resource.c

Navid Emamdoost (1):
  drm/amd/display: prevent memory leak

Tianci.Yin (1):
  drm/amdgpu/gfx10: add support for wks firmware loading

 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 22 --
 .../amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c |  4 +++-
 .../drm/amd/display/dc/dce100/dce100_resource.c|  1 +
 .../drm/amd/display/dc/dce110/dce110_resource.c|  1 +
 .../drm/amd/display/dc/dce112/dce112_resource.c|  1 +
 .../drm/amd/display/dc/dce120/dce120_resource.c|  1 +
 .../gpu/drm/amd/display/dc/dcn10/dcn10_resource.c  |  1 +
 .../gpu/drm/amd/display/dc/dcn21/dcn21_resource.c  |  2 ++
 8 files changed, 26 insertions(+), 7 deletions(-)

[PATCH] drm/amdkfd: Use hex print format for pasid

2019-09-25 Thread Zhao, Yong
Since KFD pasids start from 0x8000 (32768 in decimal), they are easier
to read as hex numbers.

Change-Id: I565fe39f69e782749a697f18545775354c7a89f8
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c  | 12 +--
 drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c   |  4 ++--
 drivers/gpu/drm/amd/amdkfd/kfd_dbgmgr.c   |  8 
 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 12 +--
 drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c |  8 
 drivers/gpu/drm/amd/amdkfd/kfd_events.c   | 12 +--
 drivers/gpu/drm/amd/amdkfd/kfd_iommu.c|  6 +++---
 drivers/gpu/drm/amd/amdkfd/kfd_process.c  | 20 +--
 .../amd/amdkfd/kfd_process_queue_manager.c|  6 +++---
 9 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index e5ff772862cd..106d45ae7c9b 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -301,7 +301,7 @@ static int kfd_ioctl_create_queue(struct file *filep, 
struct kfd_process *p,
goto err_bind_process;
}
 
-   pr_debug("Creating queue for PASID %d on gpu 0x%x\n",
+   pr_debug("Creating queue for PASID 0x%x on gpu 0x%x\n",
p->pasid,
dev->id);
 
@@ -351,7 +351,7 @@ static int kfd_ioctl_destroy_queue(struct file *filp, 
struct kfd_process *p,
int retval;
struct kfd_ioctl_destroy_queue_args *args = data;
 
-   pr_debug("Destroying queue id %d for pasid %d\n",
+   pr_debug("Destroying queue id %d for pasid 0x%x\n",
args->queue_id,
p->pasid);
 
@@ -397,7 +397,7 @@ static int kfd_ioctl_update_queue(struct file *filp, struct 
kfd_process *p,
properties.queue_percent = args->queue_percentage;
properties.priority = args->queue_priority;
 
-   pr_debug("Updating queue id %d for pasid %d\n",
+   pr_debug("Updating queue id %d for pasid 0x%x\n",
args->queue_id, p->pasid);
 
mutex_lock(&p->mutex);
@@ -854,7 +854,7 @@ static int kfd_ioctl_get_process_apertures(struct file 
*filp,
struct kfd_process_device_apertures *pAperture;
struct kfd_process_device *pdd;
 
-   dev_dbg(kfd_device, "get apertures for PASID %d", p->pasid);
+   dev_dbg(kfd_device, "get apertures for PASID 0x%x", p->pasid);
 
args->num_of_nodes = 0;
 
@@ -912,7 +912,7 @@ static int kfd_ioctl_get_process_apertures_new(struct file 
*filp,
uint32_t nodes = 0;
int ret;
 
-   dev_dbg(kfd_device, "get apertures for PASID %d", p->pasid);
+   dev_dbg(kfd_device, "get apertures for PASID 0x%x", p->pasid);
 
if (args->num_of_nodes == 0) {
/* Return number of nodes, so that user space can alloacate
@@ -3063,7 +3063,7 @@ static int kfd_mmio_mmap(struct kfd_dev *dev, struct 
kfd_process *process,
 
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
-   pr_debug("Process %d mapping mmio page\n"
+   pr_debug("pasid 0x%x mapping mmio page\n"
 " target user address == 0x%08llX\n"
 " physical address== 0x%08llX\n"
 " vm_flags== 0x%04lX\n"
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
index 3635e0b4b3b7..492951cad143 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
@@ -800,7 +800,7 @@ int dbgdev_wave_reset_wavefronts(struct kfd_dev *dev, 
struct kfd_process *p)
(dev->kgd, vmid)) {
if (dev->kfd2kgd->get_atc_vmid_pasid_mapping_pasid
(dev->kgd, vmid) == p->pasid) {
-   pr_debug("Killing wave fronts of vmid %d and 
pasid %d\n",
+   pr_debug("Killing wave fronts of vmid %d and 
pasid 0x%x\n",
vmid, p->pasid);
break;
}
@@ -808,7 +808,7 @@ int dbgdev_wave_reset_wavefronts(struct kfd_dev *dev, 
struct kfd_process *p)
}
 
if (vmid > last_vmid_to_scan) {
-   pr_err("Didn't find vmid for pasid %d\n", p->pasid);
+   pr_err("Didn't find vmid for pasid 0x%x\n", p->pasid);
return -EFAULT;
}
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_dbgmgr.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_dbgmgr.c
index 9d4af961c5d1..9bfa50633654 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_dbgmgr.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_dbgmgr.c
@@ -96,7 +96,7 @@ bool kfd_dbgmgr_create(struct kfd_dbgmgr **ppmgr, struct 
kfd_dev *pdev)
 long kfd_dbgmgr_register(struct kfd_dbgmgr *pmgr, struct kfd_process *p)
 {
if (pmgr->pasid != 0) {
-   pr_debug("H/W debugger is a

RE: [PATCH] drm/amdgpu: Add SMUIO values for other I2C controller v2

2019-09-25 Thread Russell, Kent
That's the fun part 😉 I'm working on adding checks for which chip is requested for 
shared functionality, or separate functions for separated functionality.

 Kent

-Original Message-
From: Grodzovsky, Andrey  
Sent: Wednesday, September 25, 2019 2:11 PM
To: Russell, Kent ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Add SMUIO values for other I2C controller v2

Reviewed-by: Andrey Grodzovsky 

How are you planning to use them given the hard coded use of CSKVII2C (instance 
zero I2C engine) in I2C controller code ?

Andrey

On 9/25/19 5:03 PM, Russell, Kent wrote:
> These are the offsets for CKSVII2C1, and match up with the values 
> already added for CKSVII2C
>
> v2: Don't remove some of the CSKVII2C values
>
> Change-Id: I5ed88bb31253ccaf4ed4ae6f4959040c0da2f6d0
> Signed-off-by: Kent Russell 
> ---
>   .../asic_reg/smuio/smuio_11_0_0_offset.h  |  92 +
>   .../asic_reg/smuio/smuio_11_0_0_sh_mask.h | 176 ++
>   2 files changed, 268 insertions(+)
>
> diff --git 
> a/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h 
> b/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
> index d3876052562b..687d6843c258 100644
> --- a/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
> +++ b/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
> @@ -121,6 +121,98 @@
>   #define mmCKSVII2C_IC_COMP_VERSION_BASE_IDX 
>0
>   #define mmCKSVII2C_IC_COMP_TYPE 
>0x006d
>   #define mmCKSVII2C_IC_COMP_TYPE_BASE_IDX
>0
> +#define mmCKSVII2C1_IC_CON   
>   0x0080
> +#define mmCKSVII2C1_IC_CON_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_TAR   
>   0x0081
> +#define mmCKSVII2C1_IC_TAR_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_SAR   
>   0x0082
> +#define mmCKSVII2C1_IC_SAR_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_HS_MADDR  
>   0x0083
> +#define mmCKSVII2C1_IC_HS_MADDR_BASE_IDX 
>   0
> +#define mmCKSVII2C1_IC_DATA_CMD  
>   0x0084
> +#define mmCKSVII2C1_IC_DATA_CMD_BASE_IDX 
>   0
> +#define mmCKSVII2C1_IC_SS_SCL_HCNT   
>   0x0085
> +#define mmCKSVII2C1_IC_SS_SCL_HCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_SS_SCL_LCNT   
>   0x0086
> +#define mmCKSVII2C1_IC_SS_SCL_LCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_FS_SCL_HCNT   
>   0x0087
> +#define mmCKSVII2C1_IC_FS_SCL_HCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_FS_SCL_LCNT   
>   0x0088
> +#define mmCKSVII2C1_IC_FS_SCL_LCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_HS_SCL_HCNT   
>   0x0089
> +#define mmCKSVII2C1_IC_HS_SCL_HCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_HS_SCL_LCNT   
>   0x008a
> +#define mmCKSVII2C1_IC_HS_SCL_LCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_INTR_STAT 
>   0x008b
> +#define mmCKSVII2C1_IC_INTR_STAT_BASE_IDX
>   0
> +#define mmCKSVII2C1_IC_INTR_MASK 
>   0x008c
> +#define mmCKSVII2C1_IC_INTR_MASK_BASE_IDX
>   0
> +#define mmCKSVII2C1_IC_RAW_INTR_STAT 
>   0x008d
> +#define mmCKSVII2C1_IC_RAW_INTR_STAT_BASE_IDX
>   0
> +#define mmCKSVII2C1_IC_RX_TL 
> 

[PATCH] drm/amdkfd: use navi12 specific family id for navi12 code path

2019-09-25 Thread Liu, Shaoyun
Keep the same use of CHIP_IDs for navi12 in kfd

Change-Id: I5e52bbc058be51e79553147732a571a604537b7c
Signed-off-by: shaoyunl 
---
 drivers/gpu/drm/amd/amdkfd/kfd_crat.c | 1 +
 drivers/gpu/drm/amd/amdkfd/kfd_device.c   | 2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c | 1 +
 drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c  | 1 +
 drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c | 1 +
 drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c   | 1 +
 drivers/gpu/drm/amd/amdkfd/kfd_topology.c | 1 +
 7 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
index 1ef3c32..0c327e0 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
@@ -676,6 +676,7 @@ static int kfd_fill_gpu_cache_info(struct kfd_dev *kdev,
num_of_cache_types = ARRAY_SIZE(renoir_cache_info);
break;
case CHIP_NAVI10:
+   case CHIP_NAVI12:
case CHIP_NAVI14:
pcache_info = navi10_cache_info;
num_of_cache_types = ARRAY_SIZE(navi10_cache_info);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
index 270389b..edfbae5c 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
@@ -388,7 +388,7 @@ static const struct kfd_device_info navi10_device_info = {
 };
 
 static const struct kfd_device_info navi12_device_info = {
-   .asic_family = CHIP_NAVI10,
+   .asic_family = CHIP_NAVI12,
.asic_name = "navi12",
.max_pasid_bits = 16,
.max_no_of_hqd  = 24,
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 399a612..54f0c5cc 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -1798,6 +1798,7 @@ struct device_queue_manager 
*device_queue_manager_init(struct kfd_dev *dev)
device_queue_manager_init_v9(&dqm->asic_ops);
break;
case CHIP_NAVI10:
+   case CHIP_NAVI12:
case CHIP_NAVI14:
device_queue_manager_init_v10_navi10(&dqm->asic_ops);
break;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c
index 4816614..450c991 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c
@@ -408,6 +408,7 @@ int kfd_init_apertures(struct kfd_process *process)
case CHIP_RENOIR:
case CHIP_ARCTURUS:
case CHIP_NAVI10:
+   case CHIP_NAVI12:
case CHIP_NAVI14:
kfd_init_apertures_v9(pdd, id);
break;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
index 990ab54..11d2448 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
@@ -335,6 +335,7 @@ struct kernel_queue *kernel_queue_init(struct kfd_dev *dev,
kernel_queue_init_v9(&kq->ops_asic_specific);
break;
case CHIP_NAVI10:
+   case CHIP_NAVI12:
case CHIP_NAVI14:
kernel_queue_init_v10(&kq->ops_asic_specific);
break;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
index af62be0..83ef4b3 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager.c
@@ -244,6 +244,7 @@ int pm_init(struct packet_manager *pm, struct 
device_queue_manager *dqm)
pm->pmf = &kfd_v9_pm_funcs;
break;
case CHIP_NAVI10:
+   case CHIP_NAVI12:
case CHIP_NAVI14:
pm->pmf = &kfd_v10_pm_funcs;
break;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
index f2170f0..453832e 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
@@ -1320,6 +1320,7 @@ int kfd_topology_add_device(struct kfd_dev *gpu)
case CHIP_RENOIR:
case CHIP_ARCTURUS:
case CHIP_NAVI10:
+   case CHIP_NAVI12:
case CHIP_NAVI14:
dev->node_props.capability |= ((HSA_CAP_DOORBELL_TYPE_2_0 <<
HSA_CAP_DOORBELL_TYPE_TOTALBITS_SHIFT) &
-- 
2.7.4


Re: [PATCH] drm/amdgpu: Add SMUIO values for other I2C controller v2

2019-09-25 Thread Grodzovsky, Andrey
Reviewed-by: Andrey Grodzovsky 

How are you planning to use them given the hard coded use of CSKVII2C 
(instance zero I2C engine) in I2C controller code ?

Andrey

On 9/25/19 5:03 PM, Russell, Kent wrote:
> These are the offsets for CKSVII2C1, and match up with the values
> already added for CKSVII2C
>
> v2: Don't remove some of the CSKVII2C values
>
> Change-Id: I5ed88bb31253ccaf4ed4ae6f4959040c0da2f6d0
> Signed-off-by: Kent Russell 
> ---
>   .../asic_reg/smuio/smuio_11_0_0_offset.h  |  92 +
>   .../asic_reg/smuio/smuio_11_0_0_sh_mask.h | 176 ++
>   2 files changed, 268 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h 
> b/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
> index d3876052562b..687d6843c258 100644
> --- a/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
> +++ b/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
> @@ -121,6 +121,98 @@
>   #define mmCKSVII2C_IC_COMP_VERSION_BASE_IDX 
>0
>   #define mmCKSVII2C_IC_COMP_TYPE 
>0x006d
>   #define mmCKSVII2C_IC_COMP_TYPE_BASE_IDX
>0
> +#define mmCKSVII2C1_IC_CON   
>   0x0080
> +#define mmCKSVII2C1_IC_CON_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_TAR   
>   0x0081
> +#define mmCKSVII2C1_IC_TAR_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_SAR   
>   0x0082
> +#define mmCKSVII2C1_IC_SAR_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_HS_MADDR  
>   0x0083
> +#define mmCKSVII2C1_IC_HS_MADDR_BASE_IDX 
>   0
> +#define mmCKSVII2C1_IC_DATA_CMD  
>   0x0084
> +#define mmCKSVII2C1_IC_DATA_CMD_BASE_IDX 
>   0
> +#define mmCKSVII2C1_IC_SS_SCL_HCNT   
>   0x0085
> +#define mmCKSVII2C1_IC_SS_SCL_HCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_SS_SCL_LCNT   
>   0x0086
> +#define mmCKSVII2C1_IC_SS_SCL_LCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_FS_SCL_HCNT   
>   0x0087
> +#define mmCKSVII2C1_IC_FS_SCL_HCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_FS_SCL_LCNT   
>   0x0088
> +#define mmCKSVII2C1_IC_FS_SCL_LCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_HS_SCL_HCNT   
>   0x0089
> +#define mmCKSVII2C1_IC_HS_SCL_HCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_HS_SCL_LCNT   
>   0x008a
> +#define mmCKSVII2C1_IC_HS_SCL_LCNT_BASE_IDX  
>   0
> +#define mmCKSVII2C1_IC_INTR_STAT 
>   0x008b
> +#define mmCKSVII2C1_IC_INTR_STAT_BASE_IDX
>   0
> +#define mmCKSVII2C1_IC_INTR_MASK 
>   0x008c
> +#define mmCKSVII2C1_IC_INTR_MASK_BASE_IDX
>   0
> +#define mmCKSVII2C1_IC_RAW_INTR_STAT 
>   0x008d
> +#define mmCKSVII2C1_IC_RAW_INTR_STAT_BASE_IDX
>   0
> +#define mmCKSVII2C1_IC_RX_TL 
>   0x008e
> +#define mmCKSVII2C1_IC_RX_TL_BASE_IDX
>   0
> +#define mmCKSVII2C1_IC_TX_TL 
>   0x008f
> +#define mmCKSVII2C1_IC_TX_TL_BASE_IDX
>   0
> +#define mmCKSVII2C1_IC_CLR_INTR   

Re: [PATCH] drm/amdgpu: return tcc_disabled_mask to userspace

2019-09-25 Thread Marek Olšák
I think TCCs are global, because all memory traffic from gfx
engines+cp+sdma has to go through TCCs, e.g. memory requests from different
SEs accessing the same memory address go to the same TCC.
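As an aside, a userspace consumer of the new drm_amdgpu_info_device::tcc_disabled_mask field might derive the number of active TCCs along these lines. This is a hedged sketch: the field name comes from the patch under review, but the helper and the TCC count used here are purely illustrative:

```c
/* Illustrative helper: count TCCs not marked disabled in the mask
 * returned by the kernel.  num_tccs would come from the GPU config;
 * the value used in the example below is arbitrary. */
#include <assert.h>
#include <stdint.h>

unsigned int count_enabled_tccs(uint64_t tcc_disabled_mask,
				unsigned int num_tccs)
{
	unsigned int i, enabled = 0;

	for (i = 0; i < num_tccs; i++)
		if (!(tcc_disabled_mask & (1ULL << i)))
			enabled++;
	return enabled;
}
```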

Marek

On Tue, Sep 24, 2019 at 10:58 PM Alex Deucher  wrote:

> On Tue, Sep 24, 2019 at 6:29 PM Marek Olšák  wrote:
> >
> > From: Marek Olšák 
> >
> > UMDs need this for correct programming of harvested chips.
> >
> > Signed-off-by: Marek Olšák 
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |  3 ++-
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h |  1 +
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |  2 ++
> >  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c  | 11 +++
> >  include/uapi/drm/amdgpu_drm.h   |  2 ++
> >  5 files changed, 18 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > index f82d634cf3f9..b70b30378c20 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > @@ -75,23 +75,24 @@
> >   * - 3.25.0 - Add support for sensor query info (stable pstate
> sclk/mclk).
> >   * - 3.26.0 - GFX9: Process AMDGPU_IB_FLAG_TC_WB_NOT_INVALIDATE.
> >   * - 3.27.0 - Add new chunk to to AMDGPU_CS to enable BO_LIST creation.
> >   * - 3.28.0 - Add AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES
> >   * - 3.29.0 - Add AMDGPU_IB_FLAG_RESET_GDS_MAX_WAVE_ID
> >   * - 3.30.0 - Add AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE.
> >   * - 3.31.0 - Add support for per-flip tiling attribute changes with DC
> >   * - 3.32.0 - Add syncobj timeline support to AMDGPU_CS.
> >   * - 3.33.0 - Fixes for GDS ENOMEM failures in AMDGPU_CS.
> >   * - 3.34.0 - Non-DC can flip correctly between buffers with different
> pitches
> > + * - 3.35.0 - Add drm_amdgpu_info_device::tcc_disabled_mask
> >   */
> >  #define KMS_DRIVER_MAJOR   3
> > -#define KMS_DRIVER_MINOR   34
> > +#define KMS_DRIVER_MINOR   35
> >  #define KMS_DRIVER_PATCHLEVEL  0
> >
> >  #define AMDGPU_MAX_TIMEOUT_PARAM_LENTH 256
> >
> >  int amdgpu_vram_limit = 0;
> >  int amdgpu_vis_vram_limit = 0;
> >  int amdgpu_gart_size = -1; /* auto */
> >  int amdgpu_gtt_size = -1; /* auto */
> >  int amdgpu_moverate = -1; /* auto */
> >  int amdgpu_benchmarking = 0;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> > index 59c5464c96be..88dccff41dff 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> > @@ -158,20 +158,21 @@ struct amdgpu_gfx_config {
> > struct amdgpu_rb_config
> rb_config[AMDGPU_GFX_MAX_SE][AMDGPU_GFX_MAX_SH_PER_SE];
> >
> > /* gfx configure feature */
> > uint32_t double_offchip_lds_buf;
> > /* cached value of DB_DEBUG2 */
> > uint32_t db_debug2;
> > /* gfx10 specific config */
> > uint32_t num_sc_per_sh;
> > uint32_t num_packer_per_sc;
> > uint32_t pa_sc_tile_steering_override;
> > +   uint64_t tcc_disabled_mask;
> >  };
> >
> >  struct amdgpu_cu_info {
> > uint32_t simd_per_cu;
> > uint32_t max_waves_per_simd;
> > uint32_t wave_front_size;
> > uint32_t max_scratch_slots_per_cu;
> > uint32_t lds_size;
> >
> > /* total active CU number */
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> > index 91f5aaf99861..7356efe7e2d3 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> > @@ -775,20 +775,22 @@ static int amdgpu_info_ioctl(struct drm_device
> *dev, void *data, struct drm_file
> > dev_info.num_cu_per_sh = adev->gfx.config.max_cu_per_sh;
> > dev_info.num_tcc_blocks =
> adev->gfx.config.max_texture_channel_caches;
> > dev_info.gs_vgt_table_depth =
> adev->gfx.config.gs_vgt_table_depth;
> > dev_info.gs_prim_buffer_depth =
> adev->gfx.config.gs_prim_buffer_depth;
> > dev_info.max_gs_waves_per_vgt =
> adev->gfx.config.max_gs_threads;
> >
> > if (adev->family >= AMDGPU_FAMILY_NV)
> > dev_info.pa_sc_tile_steering_override =
> >
>  adev->gfx.config.pa_sc_tile_steering_override;
> >
> > +   dev_info.tcc_disabled_mask =
> adev->gfx.config.tcc_disabled_mask;
> > +
> > return copy_to_user(out, &dev_info,
> > min((size_t)size, sizeof(dev_info)))
> ? -EFAULT : 0;
> > }
> > case AMDGPU_INFO_VCE_CLOCK_TABLE: {
> > unsigned i;
> > struct drm_amdgpu_info_vce_clock_table vce_clk_table =
> {};
> > struct amd_vce_state *vce_state;
> >
> > for (i = 0; i < AMDGPU_VCE_CLOCK_TABLE_ENTRIES; i++) {
> > vce_state = amdgpu_dpm_get_vce_clock_state(adev,
> i);
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> b/drivers/gpu/drm

[PATCH] drm/amdgpu: Add SMUIO values for other I2C controller v2

2019-09-25 Thread Russell, Kent
These are the offsets for CKSVII2C1, and match up with the values
already added for CKSVII2C

v2: Don't remove some of the CSKVII2C values

Change-Id: I5ed88bb31253ccaf4ed4ae6f4959040c0da2f6d0
Signed-off-by: Kent Russell 
---
 .../asic_reg/smuio/smuio_11_0_0_offset.h  |  92 +
 .../asic_reg/smuio/smuio_11_0_0_sh_mask.h | 176 ++
 2 files changed, 268 insertions(+)

diff --git a/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h 
b/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
index d3876052562b..687d6843c258 100644
--- a/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
+++ b/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
@@ -121,6 +121,98 @@
 #define mmCKSVII2C_IC_COMP_VERSION_BASE_IDX                    0
 #define mmCKSVII2C_IC_COMP_TYPE                                0x006d
 #define mmCKSVII2C_IC_COMP_TYPE_BASE_IDX                       0
+#define mmCKSVII2C1_IC_CON                                     0x0080
+#define mmCKSVII2C1_IC_CON_BASE_IDX                            0
+#define mmCKSVII2C1_IC_TAR                                     0x0081
+#define mmCKSVII2C1_IC_TAR_BASE_IDX                            0
+#define mmCKSVII2C1_IC_SAR                                     0x0082
+#define mmCKSVII2C1_IC_SAR_BASE_IDX                            0
+#define mmCKSVII2C1_IC_HS_MADDR                                0x0083
+#define mmCKSVII2C1_IC_HS_MADDR_BASE_IDX                       0
+#define mmCKSVII2C1_IC_DATA_CMD                                0x0084
+#define mmCKSVII2C1_IC_DATA_CMD_BASE_IDX                       0
+#define mmCKSVII2C1_IC_SS_SCL_HCNT                             0x0085
+#define mmCKSVII2C1_IC_SS_SCL_HCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_SS_SCL_LCNT                             0x0086
+#define mmCKSVII2C1_IC_SS_SCL_LCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_FS_SCL_HCNT                             0x0087
+#define mmCKSVII2C1_IC_FS_SCL_HCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_FS_SCL_LCNT                             0x0088
+#define mmCKSVII2C1_IC_FS_SCL_LCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_HS_SCL_HCNT                             0x0089
+#define mmCKSVII2C1_IC_HS_SCL_HCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_HS_SCL_LCNT                             0x008a
+#define mmCKSVII2C1_IC_HS_SCL_LCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_INTR_STAT                               0x008b
+#define mmCKSVII2C1_IC_INTR_STAT_BASE_IDX                      0
+#define mmCKSVII2C1_IC_INTR_MASK                               0x008c
+#define mmCKSVII2C1_IC_INTR_MASK_BASE_IDX                      0
+#define mmCKSVII2C1_IC_RAW_INTR_STAT                           0x008d
+#define mmCKSVII2C1_IC_RAW_INTR_STAT_BASE_IDX                  0
+#define mmCKSVII2C1_IC_RX_TL                                   0x008e
+#define mmCKSVII2C1_IC_RX_TL_BASE_IDX                          0
+#define mmCKSVII2C1_IC_TX_TL                                   0x008f
+#define mmCKSVII2C1_IC_TX_TL_BASE_IDX                          0
+#define mmCKSVII2C1_IC_CLR_INTR                                0x0090
+#define mmCKSVII2C1_IC_CLR_INTR_BASE_IDX                       0
+#define mmCKSVII2C1_IC_CLR_RX_UNDER                            0x0091
+#define mmCKSVII2C1_IC_CLR_RX_UNDER_BASE_IDX

Re: [PATCH v2 20/27] drm/dp_mst: Protect drm_dp_mst_port members with connection_mutex

2019-09-25 Thread Lyude Paul
On Wed, 2019-09-25 at 16:00 -0400, Sean Paul wrote:
> On Tue, Sep 03, 2019 at 04:45:58PM -0400, Lyude Paul wrote:
> > Yes-you read that right. Currently there is literally no locking in
> > place for any of the drm_dp_mst_port struct members that can be modified
> > in response to a link address response, or a connection status response.
> > Which literally means if we're unlucky enough to have any sort of
> > hotplugging event happen before we're finished with reprobing link
> > addresses, we'll race and the contents of said struct members becomes
> > undefined. Fun!
> > 
> > So, finally add some simple locking protections to our MST helpers by
> > protecting any drm_dp_mst_port members which can be changed by link
> > address responses or connection status notifications under
> > drm_device->mode_config.connection_mutex.
> > 
> > Cc: Juston Li 
> > Cc: Imre Deak 
> > Cc: Ville Syrjälä 
> > Cc: Harry Wentland 
> > Cc: Daniel Vetter 
> > Signed-off-by: Lyude Paul 
> > ---
> >  drivers/gpu/drm/drm_dp_mst_topology.c | 144 +++---
> >  include/drm/drm_dp_mst_helper.h   |  39 +--
> >  2 files changed, 133 insertions(+), 50 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> > b/drivers/gpu/drm/drm_dp_mst_topology.c
> > index 5101eeab4485..259634c5d6dc 100644
> > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > @@ -1354,6 +1354,7 @@ static void drm_dp_free_mst_port(struct kref *kref)
> > container_of(kref, struct drm_dp_mst_port, malloc_kref);
> >  
> > drm_dp_mst_put_mstb_malloc(port->parent);
> > +   mutex_destroy(&port->lock);
> > kfree(port);
> >  }
> >  
> > @@ -1906,6 +1907,36 @@ void drm_dp_mst_connector_early_unregister(struct
> > drm_connector *connector,
> >  }
> >  EXPORT_SYMBOL(drm_dp_mst_connector_early_unregister);
> >  
> > +static void
> > +drm_dp_mst_port_add_connector(struct drm_dp_mst_branch *mstb,
> > + struct drm_dp_mst_port *port)
> > +{
> > +   struct drm_dp_mst_topology_mgr *mgr = port->mgr;
> > +   char proppath[255];
> > +   int ret;
> > +
> > +   build_mst_prop_path(mstb, port->port_num, proppath, sizeof(proppath));
> > +   port->connector = mgr->cbs->add_connector(mgr, port, proppath);
> > +   if (!port->connector) {
> > +   ret = -ENOMEM;
> > +   goto error;
> > +   }
> > +
> > +   if ((port->pdt == DP_PEER_DEVICE_DP_LEGACY_CONV ||
> > +port->pdt == DP_PEER_DEVICE_SST_SINK) &&
> > +   port->port_num >= DP_MST_LOGICAL_PORT_0) {
> > +   port->cached_edid = drm_get_edid(port->connector,
> > +&port->aux.ddc);
> > +   drm_connector_set_tile_property(port->connector);
> > +   }
> > +
> > +   mgr->cbs->register_connector(port->connector);
> > +   return;
> > +
> > +error:
> > +   DRM_ERROR("Failed to create connector for port %p: %d\n", port, ret);
> > +}
> > +
> >  static void
> >  drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb,
> > struct drm_device *dev,
> > @@ -1913,8 +1944,12 @@ drm_dp_mst_handle_link_address_port(struct
> > drm_dp_mst_branch *mstb,
> >  {
> > struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;
> > struct drm_dp_mst_port *port;
> > -   bool created = false;
> > -   int old_ddps = 0;
> > +   struct drm_dp_mst_branch *child_mstb = NULL;
> > +   struct drm_connector *connector_to_destroy = NULL;
> > +   int old_ddps = 0, ret;
> > +   u8 new_pdt = DP_PEER_DEVICE_NONE;
> > +   bool created = false, send_link_addr = false,
> > +create_connector = false;
> >  
> > port = drm_dp_get_port(mstb, port_msg->port_number);
> > if (!port) {
> > @@ -1923,6 +1958,7 @@ drm_dp_mst_handle_link_address_port(struct
> > drm_dp_mst_branch *mstb,
> > return;
> > kref_init(&port->topology_kref);
> > kref_init(&port->malloc_kref);
> > +   mutex_init(&port->lock);
> > port->parent = mstb;
> > port->port_num = port_msg->port_number;
> > port->mgr = mgr;
> > @@ -1937,11 +1973,17 @@ drm_dp_mst_handle_link_address_port(struct
> > drm_dp_mst_branch *mstb,
> > drm_dp_mst_get_mstb_malloc(mstb);
> >  
> > created = true;
> > -   } else {
> > -   old_ddps = port->ddps;
> > }
> >  
> > +   mutex_lock(&port->lock);
> > +   drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
> > +
> > +   if (!created)
> > +   old_ddps = port->ddps;
> > +
> > port->input = port_msg->input_port;
> > +   if (!port->input)
> > +   new_pdt = port_msg->peer_device_type;
> > port->mcs = port_msg->mcs;
> > port->ddps = port_msg->ddps;
> > port->ldps = port_msg->legacy_device_plug_status;
> > @@ -1969,44 +2011,58 @@ drm_dp_mst_handle_link_address_port(struct
> > drm_dp_mst_branch *mstb,
> > }
> > }
> >  
> > -   if (!port->input) {
> > -   int ret = d

[PATCH] drm/amdgpu: Add SMUIO values for other I2C controller

2019-09-25 Thread Russell, Kent
These are the offsets for CKSVII2C1, and match up with the values
already added for CKSVII2C

Change-Id: I5ed88bb31253ccaf4ed4ae6f4959040c0da2f6d0
Signed-off-by: Kent Russell 
---
 .../asic_reg/smuio/smuio_11_0_0_offset.h  |  92 +
 .../asic_reg/smuio/smuio_11_0_0_sh_mask.h | 192 --
 2 files changed, 268 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h 
b/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
index d3876052562b..687d6843c258 100644
--- a/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
+++ b/drivers/gpu/drm/amd/include/asic_reg/smuio/smuio_11_0_0_offset.h
@@ -121,6 +121,98 @@
 #define mmCKSVII2C_IC_COMP_VERSION_BASE_IDX                    0
 #define mmCKSVII2C_IC_COMP_TYPE                                0x006d
 #define mmCKSVII2C_IC_COMP_TYPE_BASE_IDX                       0
+#define mmCKSVII2C1_IC_CON                                     0x0080
+#define mmCKSVII2C1_IC_CON_BASE_IDX                            0
+#define mmCKSVII2C1_IC_TAR                                     0x0081
+#define mmCKSVII2C1_IC_TAR_BASE_IDX                            0
+#define mmCKSVII2C1_IC_SAR                                     0x0082
+#define mmCKSVII2C1_IC_SAR_BASE_IDX                            0
+#define mmCKSVII2C1_IC_HS_MADDR                                0x0083
+#define mmCKSVII2C1_IC_HS_MADDR_BASE_IDX                       0
+#define mmCKSVII2C1_IC_DATA_CMD                                0x0084
+#define mmCKSVII2C1_IC_DATA_CMD_BASE_IDX                       0
+#define mmCKSVII2C1_IC_SS_SCL_HCNT                             0x0085
+#define mmCKSVII2C1_IC_SS_SCL_HCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_SS_SCL_LCNT                             0x0086
+#define mmCKSVII2C1_IC_SS_SCL_LCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_FS_SCL_HCNT                             0x0087
+#define mmCKSVII2C1_IC_FS_SCL_HCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_FS_SCL_LCNT                             0x0088
+#define mmCKSVII2C1_IC_FS_SCL_LCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_HS_SCL_HCNT                             0x0089
+#define mmCKSVII2C1_IC_HS_SCL_HCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_HS_SCL_LCNT                             0x008a
+#define mmCKSVII2C1_IC_HS_SCL_LCNT_BASE_IDX                    0
+#define mmCKSVII2C1_IC_INTR_STAT                               0x008b
+#define mmCKSVII2C1_IC_INTR_STAT_BASE_IDX                      0
+#define mmCKSVII2C1_IC_INTR_MASK                               0x008c
+#define mmCKSVII2C1_IC_INTR_MASK_BASE_IDX                      0
+#define mmCKSVII2C1_IC_RAW_INTR_STAT                           0x008d
+#define mmCKSVII2C1_IC_RAW_INTR_STAT_BASE_IDX                  0
+#define mmCKSVII2C1_IC_RX_TL                                   0x008e
+#define mmCKSVII2C1_IC_RX_TL_BASE_IDX                          0
+#define mmCKSVII2C1_IC_TX_TL                                   0x008f
+#define mmCKSVII2C1_IC_TX_TL_BASE_IDX                          0
+#define mmCKSVII2C1_IC_CLR_INTR                                0x0090
+#define mmCKSVII2C1_IC_CLR_INTR_BASE_IDX                       0
+#define mmCKSVII2C1_IC_CLR_RX_UNDER                            0x0091
+#define mmCKSVII2C1_IC_CLR_RX_UNDER_BASE_IDX                   0
+#define mmCK

Re: [PATCH v2 16/27] drm/dp_mst: Refactor pdt setup/teardown, add more locking

2019-09-25 Thread Lyude Paul
On Wed, 2019-09-25 at 15:27 -0400, Sean Paul wrote:
> On Tue, Sep 03, 2019 at 04:45:54PM -0400, Lyude Paul wrote:
> > Since we're going to be implementing suspend/resume reprobing very soon,
> > we need to make sure we are extra careful to ensure that our locking
> > actually protects the topology state where we expect it to. Turns out
> > this isn't the case with drm_dp_port_setup_pdt() and
> > drm_dp_port_teardown_pdt(), both of which change port->mstb without
> > grabbing &mgr->lock.
> > 
> > Additionally, since most callers of these functions are just using it to
> > teardown the port's previous PDT and setup a new one we can simplify
> > things a bit and combine drm_dp_port_setup_pdt() and
> > drm_dp_port_teardown_pdt() into a single function:
> > drm_dp_port_set_pdt(). This function also handles actually ensuring that
> > we grab the correct locks when we need to modify port->mstb.
> > 
> > Cc: Juston Li 
> > Cc: Imre Deak 
> > Cc: Ville Syrjälä 
> > Cc: Harry Wentland 
> > Cc: Daniel Vetter 
> > Signed-off-by: Lyude Paul 
> > ---
> >  drivers/gpu/drm/drm_dp_mst_topology.c | 181 +++---
> >  include/drm/drm_dp_mst_helper.h   |   6 +-
> >  2 files changed, 110 insertions(+), 77 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> > b/drivers/gpu/drm/drm_dp_mst_topology.c
> > index d1610434a0cb..9944ef2ce885 100644
> > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > @@ -1487,24 +1487,6 @@ drm_dp_mst_topology_put_mstb(struct
> > drm_dp_mst_branch *mstb)
> > kref_put(&mstb->topology_kref, drm_dp_destroy_mst_branch_device);
> >  }
> >  
> > -static void drm_dp_port_teardown_pdt(struct drm_dp_mst_port *port, int
> > old_pdt)
> > -{
> > -   struct drm_dp_mst_branch *mstb;
> > -
> > -   switch (old_pdt) {
> > -   case DP_PEER_DEVICE_DP_LEGACY_CONV:
> > -   case DP_PEER_DEVICE_SST_SINK:
> > -   /* remove i2c over sideband */
> > -   drm_dp_mst_unregister_i2c_bus(&port->aux);
> > -   break;
> > -   case DP_PEER_DEVICE_MST_BRANCHING:
> > -   mstb = port->mstb;
> > -   port->mstb = NULL;
> > -   drm_dp_mst_topology_put_mstb(mstb);
> > -   break;
> > -   }
> > -}
> > -
> >  static void drm_dp_destroy_port(struct kref *kref)
> >  {
> > struct drm_dp_mst_port *port =
> > @@ -1714,38 +1696,79 @@ static u8 drm_dp_calculate_rad(struct
> > drm_dp_mst_port *port,
> > return parent_lct + 1;
> >  }
> >  
> > -/*
> > - * return sends link address for new mstb
> > - */
> > -static bool drm_dp_port_setup_pdt(struct drm_dp_mst_port *port)
> > +static int drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt)
> >  {
> > -   int ret;
> > -   u8 rad[6], lct;
> > -   bool send_link = false;
> > +   struct drm_dp_mst_topology_mgr *mgr = port->mgr;
> > +   struct drm_dp_mst_branch *mstb;
> > +   u8 rad[8], lct;
> > +   int ret = 0;
> > +
> > +   if (port->pdt == new_pdt)
> 
> Shouldn't we also ensure that access to port->pdt is also locked?
> 

It's specifically port->mstb that needs to be protected under lock. We don't
use port->pdt for traversing the topology at all, so keeping it under
connection_mutex is sufficient.

> Sean
> 
> > +   return 0;
> > +
> > +   /* Teardown the old pdt, if there is one */
> > +   switch (port->pdt) {
> > +   case DP_PEER_DEVICE_DP_LEGACY_CONV:
> > +   case DP_PEER_DEVICE_SST_SINK:
> > +   /*
> > +* If the new PDT would also have an i2c bus, don't bother
> > +* with reregistering it
> > +*/
> > +   if (new_pdt == DP_PEER_DEVICE_DP_LEGACY_CONV ||
> > +   new_pdt == DP_PEER_DEVICE_SST_SINK) {
> > +   port->pdt = new_pdt;
> > +   return 0;
> > +   }
> > +
> > +   /* remove i2c over sideband */
> > +   drm_dp_mst_unregister_i2c_bus(&port->aux);
> > +   break;
> > +   case DP_PEER_DEVICE_MST_BRANCHING:
> > +   mutex_lock(&mgr->lock);
> > +   drm_dp_mst_topology_put_mstb(port->mstb);
> > +   port->mstb = NULL;
> > +   mutex_unlock(&mgr->lock);
> > +   break;
> > +   }
> > +
> > +   port->pdt = new_pdt;
> > switch (port->pdt) {
> > case DP_PEER_DEVICE_DP_LEGACY_CONV:
> > case DP_PEER_DEVICE_SST_SINK:
> > /* add i2c over sideband */
> > ret = drm_dp_mst_register_i2c_bus(&port->aux);
> > break;
> > +
> > case DP_PEER_DEVICE_MST_BRANCHING:
> > lct = drm_dp_calculate_rad(port, rad);
> > +   mstb = drm_dp_add_mst_branch_device(lct, rad);
> > +   if (!mstb) {
> > +   ret = -ENOMEM;
> > +   DRM_ERROR("Failed to create MSTB for port %p", port);
> > +   goto out;
> > +   }
> >  
> > -   port->mstb = drm_dp_add_mst_branch_device(lct, rad);
> > -   if (port->mstb) {
> > -   port->mstb->mgr = port->mgr;
> > -   

Re: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side

2019-09-25 Thread Kuehling, Felix
Agreed. KFD is part of amdgpu. We shouldn't use the CHIP_ IDs 
differently in KFD. The code duplication is minimal and we've done it 
for all chips so far. E.g. Fiji and all the Polaris versions are treated 
the same in KFD. Similarly Vega10, Vega20 and Arcturus are the same for 
most purposes.

Regards,
   Felix

On 2019-09-25 4:12 p.m., Alex Deucher wrote:
> I think it would be cleaner to add navi12 to all of the relevant
> cases.  We should double check what we did for navi14 as well.
>
> Alex
>
> On Wed, Sep 25, 2019 at 4:09 PM Liu, Shaoyun  wrote:
>> I sent out another change that set the  asic_family as CHIP_NAVI10 since 
>> from KFD side there is no difference for navi10 and  navi12.
>>
>> Regards
>> Shaoyun.liu
>>
>> -----Original Message-----
>> From: Kuehling, Felix 
>> Sent: Wednesday, September 25, 2019 11:23 AM
>> To: Liu, Shaoyun ; amd-gfx@lists.freedesktop.org
>> Subject: Re: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side
>>
>> You'll also need to add "case CHIP_NAVI12:" in a bunch of places. Grep for 
>> "CHIP_NAVI10" and you'll find them all pretty quickly.
>>
>> Regards,
>> Felix
>>
>> On 2019-09-24 6:13 p.m., Liu, Shaoyun wrote:
>>> Add device info for both navi12 PF and VF
>>>
>>> Change-Id: Ifb4035e65c12d153fc30e593fe109f9c7e0541f4
>>> Signed-off-by: shaoyunl 
>>> ---
>>>drivers/gpu/drm/amd/amdkfd/kfd_device.c | 19 +++
>>>1 file changed, 19 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
>>> b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
>>> index f329b82..edfbae5c 100644
>>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
>>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
>>> @@ -387,6 +387,24 @@ static const struct kfd_device_info navi10_device_info 
>>> = {
>>>.num_sdma_queues_per_engine = 8,
>>>};
>>>
>>> +static const struct kfd_device_info navi12_device_info = {
>>> + .asic_family = CHIP_NAVI12,
>>> + .asic_name = "navi12",
>>> + .max_pasid_bits = 16,
>>> + .max_no_of_hqd  = 24,
>>> + .doorbell_size  = 8,
>>> + .ih_ring_entry_size = 8 * sizeof(uint32_t),
>>> + .event_interrupt_class = &event_interrupt_class_v9,
>>> + .num_of_watch_points = 4,
>>> + .mqd_size_aligned = MQD_SIZE_ALIGNED,
>>> + .needs_iommu_device = false,
>>> + .supports_cwsr = true,
>>> + .needs_pci_atomics = false,
>>> + .num_sdma_engines = 2,
>>> + .num_xgmi_sdma_engines = 0,
>>> + .num_sdma_queues_per_engine = 8,
>>> +};
>>> +
>>>static const struct kfd_device_info navi14_device_info = {
>>>.asic_family = CHIP_NAVI14,
>>>.asic_name = "navi14",
>>> @@ -425,6 +443,7 @@ static const struct kfd_device_info 
>>> *kfd_supported_devices[][2] = {
>>>[CHIP_RENOIR] = {&renoir_device_info, NULL},
>>>[CHIP_ARCTURUS] = {&arcturus_device_info, &arcturus_device_info},
>>>[CHIP_NAVI10] = {&navi10_device_info, NULL},
>>> + [CHIP_NAVI12] = {&navi12_device_info, &navi12_device_info},
>>>[CHIP_NAVI14] = {&navi14_device_info, NULL},
>>>};
>>>
>> ___
>> amd-gfx mailing list
>> amd-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH] drm/amdgpu: simplify gds_compute_max_wave_id computation

2019-09-25 Thread Marek Olšák
From: Marek Olšák 

Signed-off-by: Marek Olšák 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 13 +
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index ca01643fa0c8..73cd254449b3 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -5275,29 +5275,26 @@ static void gfx_v10_0_set_rlc_funcs(struct 
amdgpu_device *adev)
case CHIP_NAVI12:
adev->gfx.rlc.funcs = &gfx_v10_0_rlc_funcs;
break;
default:
break;
}
 }
 
 static void gfx_v10_0_set_gds_init(struct amdgpu_device *adev)
 {
-   /* init asic gds info */
-   switch (adev->asic_type) {
-   case CHIP_NAVI10:
-   default:
-   adev->gds.gds_size = 0x1;
-   adev->gds.gds_compute_max_wave_id = 0x4ff;
-   break;
-   }
+   unsigned total_cu = adev->gfx.config.max_cu_per_sh *
+   adev->gfx.config.max_sh_per_se *
+   adev->gfx.config.max_shader_engines;
 
+   adev->gds.gds_size = 0x1;
+   adev->gds.gds_compute_max_wave_id = total_cu * 32 - 1;
adev->gds.gws_size = 64;
adev->gds.oa_size = 16;
 }
 
 static void gfx_v10_0_set_user_wgp_inactive_bitmap_per_sh(struct amdgpu_device 
*adev,
  u32 bitmap)
 {
u32 data;
 
if (!bitmap)
-- 
2.17.1


Re: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side

2019-09-25 Thread Alex Deucher
I think it would be cleaner to add navi12 to all of the relevant
cases.  We should double check what we did for navi14 as well.

Alex

On Wed, Sep 25, 2019 at 4:09 PM Liu, Shaoyun  wrote:
>
> I sent out another change that set the  asic_family as CHIP_NAVI10 since from 
> KFD side there is no difference for navi10 and  navi12.
>
> Regards
> Shaoyun.liu
>
> -----Original Message-----
> From: Kuehling, Felix 
> Sent: Wednesday, September 25, 2019 11:23 AM
> To: Liu, Shaoyun ; amd-gfx@lists.freedesktop.org
> Subject: Re: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side
>
> You'll also need to add "case CHIP_NAVI12:" in a bunch of places. Grep for 
> "CHIP_NAVI10" and you'll find them all pretty quickly.
>
> Regards,
>Felix
>
> On 2019-09-24 6:13 p.m., Liu, Shaoyun wrote:
> > Add device info for both navi12 PF and VF
> >
> > Change-Id: Ifb4035e65c12d153fc30e593fe109f9c7e0541f4
> > Signed-off-by: shaoyunl 
> > ---
> >   drivers/gpu/drm/amd/amdkfd/kfd_device.c | 19 +++
> >   1 file changed, 19 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> > b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> > index f329b82..edfbae5c 100644
> > --- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> > +++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> > @@ -387,6 +387,24 @@ static const struct kfd_device_info navi10_device_info 
> > = {
> >   .num_sdma_queues_per_engine = 8,
> >   };
> >
> > +static const struct kfd_device_info navi12_device_info = {
> > + .asic_family = CHIP_NAVI12,
> > + .asic_name = "navi12",
> > + .max_pasid_bits = 16,
> > + .max_no_of_hqd  = 24,
> > + .doorbell_size  = 8,
> > + .ih_ring_entry_size = 8 * sizeof(uint32_t),
> > + .event_interrupt_class = &event_interrupt_class_v9,
> > + .num_of_watch_points = 4,
> > + .mqd_size_aligned = MQD_SIZE_ALIGNED,
> > + .needs_iommu_device = false,
> > + .supports_cwsr = true,
> > + .needs_pci_atomics = false,
> > + .num_sdma_engines = 2,
> > + .num_xgmi_sdma_engines = 0,
> > + .num_sdma_queues_per_engine = 8,
> > +};
> > +
> >   static const struct kfd_device_info navi14_device_info = {
> >   .asic_family = CHIP_NAVI14,
> >   .asic_name = "navi14",
> > @@ -425,6 +443,7 @@ static const struct kfd_device_info 
> > *kfd_supported_devices[][2] = {
> >   [CHIP_RENOIR] = {&renoir_device_info, NULL},
> >   [CHIP_ARCTURUS] = {&arcturus_device_info, &arcturus_device_info},
> >   [CHIP_NAVI10] = {&navi10_device_info, NULL},
> > + [CHIP_NAVI12] = {&navi12_device_info, &navi12_device_info},
> >   [CHIP_NAVI14] = {&navi14_device_info, NULL},
> >   };
> >

RE: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side

2019-09-25 Thread Liu, Shaoyun
I sent out another change that sets the asic_family as CHIP_NAVI10, since from the
KFD side there is no difference between navi10 and navi12.

Regards
Shaoyun.liu

-----Original Message-----
From: Kuehling, Felix  
Sent: Wednesday, September 25, 2019 11:23 AM
To: Liu, Shaoyun ; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side

You'll also need to add "case CHIP_NAVI12:" in a bunch of places. Grep for 
"CHIP_NAVI10" and you'll find them all pretty quickly.

Regards,
   Felix

On 2019-09-24 6:13 p.m., Liu, Shaoyun wrote:
> Add device info for both navi12 PF and VF
>
> Change-Id: Ifb4035e65c12d153fc30e593fe109f9c7e0541f4
> Signed-off-by: shaoyunl 
> ---
>   drivers/gpu/drm/amd/amdkfd/kfd_device.c | 19 +++
>   1 file changed, 19 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c 
> b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> index f329b82..edfbae5c 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> @@ -387,6 +387,24 @@ static const struct kfd_device_info navi10_device_info = 
> {
>   .num_sdma_queues_per_engine = 8,
>   };
>   
> +static const struct kfd_device_info navi12_device_info = {
> + .asic_family = CHIP_NAVI12,
> + .asic_name = "navi12",
> + .max_pasid_bits = 16,
> + .max_no_of_hqd  = 24,
> + .doorbell_size  = 8,
> + .ih_ring_entry_size = 8 * sizeof(uint32_t),
> + .event_interrupt_class = &event_interrupt_class_v9,
> + .num_of_watch_points = 4,
> + .mqd_size_aligned = MQD_SIZE_ALIGNED,
> + .needs_iommu_device = false,
> + .supports_cwsr = true,
> + .needs_pci_atomics = false,
> + .num_sdma_engines = 2,
> + .num_xgmi_sdma_engines = 0,
> + .num_sdma_queues_per_engine = 8,
> +};
> +
>   static const struct kfd_device_info navi14_device_info = {
>   .asic_family = CHIP_NAVI14,
>   .asic_name = "navi14",
> @@ -425,6 +443,7 @@ static const struct kfd_device_info 
> *kfd_supported_devices[][2] = {
>   [CHIP_RENOIR] = {&renoir_device_info, NULL},
>   [CHIP_ARCTURUS] = {&arcturus_device_info, &arcturus_device_info},
>   [CHIP_NAVI10] = {&navi10_device_info, NULL},
> + [CHIP_NAVI12] = {&navi12_device_info, &navi12_device_info},
>   [CHIP_NAVI14] = {&navi14_device_info, NULL},
>   };
>   

Re: [PATCH v2 24/27] drm/amdgpu/dm: Resume short HPD IRQs before resuming MST topology

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:46:02PM -0400, Lyude Paul wrote:
> Since we're going to be reprobing the entire topology state on resume
> now using sideband transactions, we need to ensure that we actually have
> short HPD irqs enabled before calling drm_dp_mst_topology_mgr_resume().
> So, do that.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 
> ---
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
> b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 73630e2940d4..4d3c8bff77da 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -1185,15 +1185,15 @@ static int dm_resume(void *handle)
>   /* program HPD filter */
>   dc_resume(dm->dc);
>  
> - /* On resume we need to  rewrite the MSTM control bits to enamble MST*/
> - s3_handle_mst(ddev, false);
> -
>   /*
>* early enable HPD Rx IRQ, should be done before set mode as short
>* pulse interrupts are used for MST
>*/
>   amdgpu_dm_irq_resume_early(adev);
>  
> + /* On resume we need to  rewrite the MSTM control bits to enamble MST*/

While we're here,

s/  / / && s/enamble/enable/ && s_*/_ */_

> + s3_handle_mst(ddev, false);
> +
>   /* Do detection*/
>   drm_connector_list_iter_begin(ddev, &iter);
>   drm_for_each_connector_iter(connector, &iter) {
> -- 
> 2.21.0
> 

-- 
Sean Paul, Software Engineer, Google / Chromium OS


Re: [PATCH v2 03/27] drm/dp_mst: Destroy MSTBs asynchronously

2019-09-25 Thread Lyude Paul
On Wed, 2019-09-25 at 14:16 -0400, Sean Paul wrote:
> On Tue, Sep 03, 2019 at 04:45:41PM -0400, Lyude Paul wrote:
> > When reprobing an MST topology during resume, we have to account for the
> > fact that while we were suspended it's possible that mstbs may have been
> > removed from any ports in the topology. Since iterating downwards in the
> > topology requires that we hold &mgr->lock, destroying MSTBs from this
> > context would result in attempting to lock &mgr->lock a second time and
> > deadlocking.
> > 
> > So, fix this by first moving destruction of MSTBs into
> > destroy_connector_work, then rename destroy_connector_work and friends
> > to reflect that they now destroy both ports and mstbs.
> > 
> > Changes since v1:
> > * s/destroy_connector_list/destroy_port_list/
> >   s/connector_destroy_lock/delayed_destroy_lock/
> >   s/connector_destroy_work/delayed_destroy_work/
> >   s/drm_dp_finish_destroy_branch_device/drm_dp_delayed_destroy_mstb/
> >   s/drm_dp_finish_destroy_port/drm_dp_delayed_destroy_port/
> >   - danvet
> > * Use two loops in drm_dp_delayed_destroy_work() - danvet
> > * Better explain why we need to do this - danvet
> > * Use cancel_work_sync() instead of flush_work() - flush_work() doesn't
> >   account for work requeing
> > 
> > Cc: Juston Li 
> > Cc: Imre Deak 
> > Cc: Ville Syrjälä 
> > Cc: Harry Wentland 
> > Cc: Daniel Vetter 
> > Signed-off-by: Lyude Paul 
> 
> Took me a while to grok this, and I'm still not 100% confident my mental
> model
> is correct, so please bear with me while I ask silly questions :)
> 
> Now that the destroy is delayed, and the port remains in the topology, is it
> possible we will underflow the topology kref by calling put_mstb multiple
> times?
> It looks like that would result in a WARN from refcount.c, and wouldn't call
> the
> destroy function multiple times, so that's nice :)
> 
> Similarly, is there any defense against calling get_mstb() between destroy()
> and
> the delayed destroy worker running?
> 
Good question! There's only one instance where we unconditionally grab an MSTB
ref, drm_dp_mst_topology_mgr_set_mst(), and in that location we're guaranteed
to be the only one with access to that mstb until we drop &mgr->lock.
Everywhere else we use drm_dp_mst_topology_try_get_mstb(), which uses
kref_get_unless_zero() to protect against that kind of situation (and forces
the caller to check with __must_check)

> Sean
> 
> > ---
> >  drivers/gpu/drm/drm_dp_mst_topology.c | 162 +-
> >  include/drm/drm_dp_mst_helper.h   |  26 +++--
> >  2 files changed, 127 insertions(+), 61 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> > b/drivers/gpu/drm/drm_dp_mst_topology.c
> > index 3054ec622506..738f260d4b15 100644
> > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > @@ -1113,34 +1113,17 @@ static void
> > drm_dp_destroy_mst_branch_device(struct kref *kref)
> > struct drm_dp_mst_branch *mstb =
> > container_of(kref, struct drm_dp_mst_branch, topology_kref);
> > struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;
> > -   struct drm_dp_mst_port *port, *tmp;
> > -   bool wake_tx = false;
> >  
> > -   mutex_lock(&mgr->lock);
> > -   list_for_each_entry_safe(port, tmp, &mstb->ports, next) {
> > -   list_del(&port->next);
> > -   drm_dp_mst_topology_put_port(port);
> > -   }
> > -   mutex_unlock(&mgr->lock);
> > -
> > -   /* drop any tx slots msg */
> > -   mutex_lock(&mstb->mgr->qlock);
> > -   if (mstb->tx_slots[0]) {
> > -   mstb->tx_slots[0]->state = DRM_DP_SIDEBAND_TX_TIMEOUT;
> > -   mstb->tx_slots[0] = NULL;
> > -   wake_tx = true;
> > -   }
> > -   if (mstb->tx_slots[1]) {
> > -   mstb->tx_slots[1]->state = DRM_DP_SIDEBAND_TX_TIMEOUT;
> > -   mstb->tx_slots[1] = NULL;
> > -   wake_tx = true;
> > -   }
> > -   mutex_unlock(&mstb->mgr->qlock);
> > +   INIT_LIST_HEAD(&mstb->destroy_next);
> >  
> > -   if (wake_tx)
> > -   wake_up_all(&mstb->mgr->tx_waitq);
> > -
> > -   drm_dp_mst_put_mstb_malloc(mstb);
> > +   /*
> > +* This can get called under mgr->mutex, so we need to perform the
> > +* actual destruction of the mstb in another worker
> > +*/
> > +   mutex_lock(&mgr->delayed_destroy_lock);
> > +   list_add(&mstb->destroy_next, &mgr->destroy_branch_device_list);
> > +   mutex_unlock(&mgr->delayed_destroy_lock);
> > +   schedule_work(&mgr->delayed_destroy_work);
> >  }
> >  
> >  /**
> > @@ -1255,10 +1238,10 @@ static void drm_dp_destroy_port(struct kref *kref)
> >  * we might be holding the mode_config.mutex
> >  * from an EDID retrieval */
> >  
> > -   mutex_lock(&mgr->destroy_connector_lock);
> > -   list_add(&port->next, &mgr->destroy_connector_list);
> > -   mutex_unlock(&mgr->destroy_connector_lock);
> > -   schedule_work(&mgr->destroy_connector_

Re: [PATCH v2 22/27] drm/nouveau: Don't grab runtime PM refs for HPD IRQs

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:46:00PM -0400, Lyude Paul wrote:
> In order for suspend/resume reprobing to work, we need to be able to
> perform sideband communications during suspend/resume, along with
> runtime PM suspend/resume. In order to do so, we also need to make sure
> that nouveau doesn't bother grabbing a runtime PM reference to do so,
> since otherwise we'll start deadlocking runtime PM again.
> 
> Note that we weren't able to do this before, because of the DP MST
> helpers processing UP requests from topologies in the same context as
> drm_dp_mst_hpd_irq() which would have caused us to open ourselves up to
> receiving hotplug events and deadlocking with runtime suspend/resume.
> Now that those requests are handled asynchronously, this change should
> be completely safe.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Seems reasonable to me, but I'd feel better if a nouveau person confirmed.

Reviewed-by: Sean Paul 


> ---
>  drivers/gpu/drm/nouveau/nouveau_connector.c | 33 +++--
>  1 file changed, 17 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c 
> b/drivers/gpu/drm/nouveau/nouveau_connector.c
> index 56871d34e3fb..f276918d3f3b 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_connector.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
> @@ -1131,6 +1131,16 @@ nouveau_connector_hotplug(struct nvif_notify *notify)
>   const char *name = connector->name;
>   struct nouveau_encoder *nv_encoder;
>   int ret;
> + bool plugged = (rep->mask != NVIF_NOTIFY_CONN_V0_UNPLUG);
> +
> + if (rep->mask & NVIF_NOTIFY_CONN_V0_IRQ) {
> + NV_DEBUG(drm, "service %s\n", name);
> + drm_dp_cec_irq(&nv_connector->aux);
> + if ((nv_encoder = find_encoder(connector, DCB_OUTPUT_DP)))
> + nv50_mstm_service(nv_encoder->dp.mstm);
> +
> + return NVIF_NOTIFY_KEEP;
> + }
>  
>   ret = pm_runtime_get(drm->dev->dev);
>   if (ret == 0) {
> @@ -1151,25 +1161,16 @@ nouveau_connector_hotplug(struct nvif_notify *notify)
>   return NVIF_NOTIFY_DROP;
>   }
>  
> - if (rep->mask & NVIF_NOTIFY_CONN_V0_IRQ) {
> - NV_DEBUG(drm, "service %s\n", name);
> - drm_dp_cec_irq(&nv_connector->aux);
> - if ((nv_encoder = find_encoder(connector, DCB_OUTPUT_DP)))
> - nv50_mstm_service(nv_encoder->dp.mstm);
> - } else {
> - bool plugged = (rep->mask != NVIF_NOTIFY_CONN_V0_UNPLUG);
> -
> + if (!plugged)
> + drm_dp_cec_unset_edid(&nv_connector->aux);
> + NV_DEBUG(drm, "%splugged %s\n", plugged ? "" : "un", name);
> + if ((nv_encoder = find_encoder(connector, DCB_OUTPUT_DP))) {
>   if (!plugged)
> - drm_dp_cec_unset_edid(&nv_connector->aux);
> - NV_DEBUG(drm, "%splugged %s\n", plugged ? "" : "un", name);
> - if ((nv_encoder = find_encoder(connector, DCB_OUTPUT_DP))) {
> - if (!plugged)
> - nv50_mstm_remove(nv_encoder->dp.mstm);
> - }
> -
> - drm_helper_hpd_irq_event(connector->dev);
> + nv50_mstm_remove(nv_encoder->dp.mstm);
>   }
>  
> + drm_helper_hpd_irq_event(connector->dev);
> +
>   pm_runtime_mark_last_busy(drm->dev->dev);
>   pm_runtime_put_autosuspend(drm->dev->dev);
>   return NVIF_NOTIFY_KEEP;
> -- 
> 2.21.0
> 

-- 
Sean Paul, Software Engineer, Google / Chromium OS
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH v2 21/27] drm/dp_mst: Don't forget to update port->input in drm_dp_mst_handle_conn_stat()

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:59PM -0400, Lyude Paul wrote:
> This probably hasn't caused any problems up until now since it's
> probably nearly impossible to encounter this in the wild, however if we
> were to receive a connection status notification from the MST hub after
> resume while we're in the middle of reprobing the link addresses for a
> topology then there's a much larger chance that a port could have
> changed from being an output port to input port (or vice versa). If we
> forget to update this bit of information, we'll potentially ignore a
> valid PDT change on a downstream port because we think it's an input
> port.
> 
> So, make sure we read the input_port field in connection status
> notifications in drm_dp_mst_handle_conn_stat() to prevent this from
> happening once we've implemented suspend/resume reprobing.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Nice catch! Same comment here re: port->mutex, but we can sort that out on
the other thread.

Reviewed-by: Sean Paul 


> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 51 +++
>  1 file changed, 37 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 259634c5d6dc..e407aba1fbd2 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -2078,18 +2078,23 @@ static void
>  drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
>   struct drm_dp_connection_status_notify *conn_stat)
>  {
> - struct drm_device *dev = mstb->mgr->dev;
> + struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;
> + struct drm_device *dev = mgr->dev;
>   struct drm_dp_mst_port *port;
> - int old_ddps;
> - bool dowork = false;
> + struct drm_connector *connector_to_destroy = NULL;
> + int old_ddps, ret;
> + u8 new_pdt;
> + bool dowork = false, create_connector = false;
>  
>   port = drm_dp_get_port(mstb, conn_stat->port_number);
>   if (!port)
>   return;
>  
> + mutex_lock(&port->lock);
>   drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
>  
>   old_ddps = port->ddps;
> + port->input = conn_stat->input_port;
>   port->mcs = conn_stat->message_capability_status;
>   port->ldps = conn_stat->legacy_device_plug_status;
>   port->ddps = conn_stat->displayport_device_plug_status;
> @@ -2102,23 +2107,41 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch 
> *mstb,
>   }
>   }
>  
> - if (!port->input) {
> - int ret = drm_dp_port_set_pdt(port,
> -   conn_stat->peer_device_type);
> - if (ret == 1) {
> - dowork = true;
> - } else if (ret < 0) {
> - DRM_ERROR("Failed to change PDT for port %p: %d\n",
> -   port, ret);
> - dowork = false;
> - }
> + new_pdt = port->input ? DP_PEER_DEVICE_NONE : 
> conn_stat->peer_device_type;
> +
> + ret = drm_dp_port_set_pdt(port, new_pdt);
> + if (ret == 1) {
> + dowork = true;
> + } else if (ret < 0) {
> + DRM_ERROR("Failed to change PDT for port %p: %d\n",
> +   port, ret);
> + dowork = false;
> + }
> +
> + /*
> +  * We unset port->connector before dropping connection_mutex so that
> +  * there's no chance any of the atomic MST helpers can accidentally
> +  * associate a to-be-destroyed connector with a port.
> +  */
> + if (port->connector && port->input) {
> + connector_to_destroy = port->connector;
> + port->connector = NULL;
> + } else if (!port->connector && !port->input) {
> + create_connector = true;
>   }
>  
>   drm_modeset_unlock(&dev->mode_config.connection_mutex);
> +
> + if (connector_to_destroy)
> + mgr->cbs->destroy_connector(mgr, connector_to_destroy);
> + else if (create_connector)
> + drm_dp_mst_port_add_connector(mstb, port);
> +
> + mutex_unlock(&port->lock);
> +
>   drm_dp_mst_topology_put_port(port);
>   if (dowork)
>   queue_work(system_long_wq, &mstb->mgr->work);
> -
>  }
>  
>  static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct 
> drm_dp_mst_topology_mgr *mgr,
> -- 
> 2.21.0
> 

-- 
Sean Paul, Software Engineer, Google / Chromium OS


Re: [PATCH v2 20/27] drm/dp_mst: Protect drm_dp_mst_port members with connection_mutex

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:58PM -0400, Lyude Paul wrote:
> Yes, you read that right. Currently there is literally no locking in
> place for any of the drm_dp_mst_port struct members that can be modified
> in response to a link address response, or a connection status response.
> Which literally means if we're unlucky enough to have any sort of
> hotplugging event happen before we're finished with reprobing link
> addresses, we'll race and the contents of said struct members becomes
> undefined. Fun!
> 
> So, finally add some simple locking protections to our MST helpers by
> protecting any drm_dp_mst_port members which can be changed by link
> address responses or connection status notifications under
> drm_device->mode_config.connection_mutex.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 
> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 144 +++---
>  include/drm/drm_dp_mst_helper.h   |  39 +--
>  2 files changed, 133 insertions(+), 50 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 5101eeab4485..259634c5d6dc 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -1354,6 +1354,7 @@ static void drm_dp_free_mst_port(struct kref *kref)
>   container_of(kref, struct drm_dp_mst_port, malloc_kref);
>  
>   drm_dp_mst_put_mstb_malloc(port->parent);
> + mutex_destroy(&port->lock);
>   kfree(port);
>  }
>  
> @@ -1906,6 +1907,36 @@ void drm_dp_mst_connector_early_unregister(struct 
> drm_connector *connector,
>  }
>  EXPORT_SYMBOL(drm_dp_mst_connector_early_unregister);
>  
> +static void
> +drm_dp_mst_port_add_connector(struct drm_dp_mst_branch *mstb,
> +   struct drm_dp_mst_port *port)
> +{
> + struct drm_dp_mst_topology_mgr *mgr = port->mgr;
> + char proppath[255];
> + int ret;
> +
> + build_mst_prop_path(mstb, port->port_num, proppath, sizeof(proppath));
> + port->connector = mgr->cbs->add_connector(mgr, port, proppath);
> + if (!port->connector) {
> + ret = -ENOMEM;
> + goto error;
> + }
> +
> + if ((port->pdt == DP_PEER_DEVICE_DP_LEGACY_CONV ||
> +  port->pdt == DP_PEER_DEVICE_SST_SINK) &&
> + port->port_num >= DP_MST_LOGICAL_PORT_0) {
> + port->cached_edid = drm_get_edid(port->connector,
> +  &port->aux.ddc);
> + drm_connector_set_tile_property(port->connector);
> + }
> +
> + mgr->cbs->register_connector(port->connector);
> + return;
> +
> +error:
> + DRM_ERROR("Failed to create connector for port %p: %d\n", port, ret);
> +}
> +
>  static void
>  drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb,
>   struct drm_device *dev,
> @@ -1913,8 +1944,12 @@ drm_dp_mst_handle_link_address_port(struct 
> drm_dp_mst_branch *mstb,
>  {
>   struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;
>   struct drm_dp_mst_port *port;
> - bool created = false;
> - int old_ddps = 0;
> + struct drm_dp_mst_branch *child_mstb = NULL;
> + struct drm_connector *connector_to_destroy = NULL;
> + int old_ddps = 0, ret;
> + u8 new_pdt = DP_PEER_DEVICE_NONE;
> + bool created = false, send_link_addr = false,
> +  create_connector = false;
>  
>   port = drm_dp_get_port(mstb, port_msg->port_number);
>   if (!port) {
> @@ -1923,6 +1958,7 @@ drm_dp_mst_handle_link_address_port(struct 
> drm_dp_mst_branch *mstb,
>   return;
>   kref_init(&port->topology_kref);
>   kref_init(&port->malloc_kref);
> + mutex_init(&port->lock);
>   port->parent = mstb;
>   port->port_num = port_msg->port_number;
>   port->mgr = mgr;
> @@ -1937,11 +1973,17 @@ drm_dp_mst_handle_link_address_port(struct 
> drm_dp_mst_branch *mstb,
>   drm_dp_mst_get_mstb_malloc(mstb);
>  
>   created = true;
> - } else {
> - old_ddps = port->ddps;
>   }
>  
> + mutex_lock(&port->lock);
> + drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
> +
> + if (!created)
> + old_ddps = port->ddps;
> +
>   port->input = port_msg->input_port;
> + if (!port->input)
> + new_pdt = port_msg->peer_device_type;
>   port->mcs = port_msg->mcs;
>   port->ddps = port_msg->ddps;
>   port->ldps = port_msg->legacy_device_plug_status;
> @@ -1969,44 +2011,58 @@ drm_dp_mst_handle_link_address_port(struct 
> drm_dp_mst_branch *mstb,
>   }
>   }
>  
> - if (!port->input) {
> - int ret = drm_dp_port_set_pdt(port,
> -   port_msg->peer_device_type);
> - if (ret == 1) {
> - drm_dp_send_link_addre

Re: [PATCH v2 19/27] drm/dp_mst: Handle UP requests asynchronously

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:57PM -0400, Lyude Paul wrote:
> Once upon a time, hotplugging devices on MST branches actually worked in
> DRM. Now, it only works in amdgpu (likely because of how its hotplug
> handlers are implemented). On both i915 and nouveau, hotplug
> notifications from MST branches are noticed - but trying to respond to
> them causes messaging timeouts and causes the whole topology state to go
> out of sync with reality, usually resulting in the user needing to
> replug the entire topology in hopes that it actually fixes things.
> 
> The reason for this is because the way we currently handle UP requests
> in MST is completely bogus. drm_dp_mst_handle_up_req() is called from
> drm_dp_mst_hpd_irq(), which is usually called from the driver's hotplug
> handler. Because we handle sending the hotplug event from this function,
> we actually cause the driver's hotplug handler (and in turn, all
> sideband transactions) to block on
> drm_device->mode_config.connection_mutex. This makes it impossible to
> send any sideband messages from the driver's connector probing
> functions, resulting in the aforementioned sideband message timeout.
> 
> There's even more problems with this beyond breaking hotplugging on MST
> branch devices. It also makes it almost impossible to protect
> drm_dp_mst_port struct members under a lock because we then have to
> worry about dealing with all of the lock dependency issues that ensue.
> 
> So, let's finally actually fix this issue by handling the processing of
> up requests asynchronously. This way we can send sideband messages from
> most contexts without having to deal with getting blocked if we hold
> connection_mutex. This also fixes MST branch device hotplugging on i915,
> finally!
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Looks really good!

Reviewed-by: Sean Paul 
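
As an aside for readers, the queue-then-drain shape this patch adopts can be
sketched in miniature like so (illustrative single-threaded user-space code
with invented names; the real implementation guards a list_head with a mutex
and drains it from a workqueue item):

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative pending-request node, loosely mirroring the patch's
 * struct drm_dp_pending_up_req. */
struct pending_req {
	int req_type;
	struct pending_req *next;
};

struct req_queue {
	struct pending_req *head; /* guarded by a lock in real code */
	int processed;            /* count of handled requests */
};

/* Producer side: append the request; in the kernel this is where
 * schedule_work() would kick the worker. */
static void queue_req(struct req_queue *q, int req_type)
{
	struct pending_req *req = malloc(sizeof(*req));

	if (!req)
		return;
	req->req_type = req_type;
	req->next = q->head;
	q->head = req;
}

/* Worker side: pop one request at a time, so the lock can be dropped
 * while each request is processed, as the patch's worker does. */
static void drain_reqs(struct req_queue *q)
{
	while (q->head) {
		struct pending_req *req = q->head;

		q->head = req->next;
		/* process req outside the lock here */
		q->processed++;
		free(req);
	}
}
```

Because processing happens in the worker rather than in the hotplug handler,
the handler never blocks on connection_mutex, which is the deadlock the
commit message describes.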


> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 146 +++---
>  include/drm/drm_dp_mst_helper.h   |  16 +++
>  2 files changed, 122 insertions(+), 40 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index cfaf9eb7ace9..5101eeab4485 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -46,6 +46,12 @@
>   * protocol. The helpers contain a topology manager and bandwidth manager.
>   * The helpers encapsulate the sending and received of sideband msgs.
>   */
> +struct drm_dp_pending_up_req {
> + struct drm_dp_sideband_msg_hdr hdr;
> + struct drm_dp_sideband_msg_req_body msg;
> + struct list_head next;
> +};
> +
>  static bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr,
> char *buf);
>  
> @@ -3109,6 +3115,7 @@ void drm_dp_mst_topology_mgr_suspend(struct 
> drm_dp_mst_topology_mgr *mgr)
>   drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,
>  DP_MST_EN | DP_UPSTREAM_IS_SRC);
>   mutex_unlock(&mgr->lock);
> + flush_work(&mgr->up_req_work);
>   flush_work(&mgr->work);
>   flush_work(&mgr->delayed_destroy_work);
>  }
> @@ -3281,12 +3288,70 @@ static int drm_dp_mst_handle_down_rep(struct 
> drm_dp_mst_topology_mgr *mgr)
>   return 0;
>  }
>  
> +static inline void
> +drm_dp_mst_process_up_req(struct drm_dp_mst_topology_mgr *mgr,
> +   struct drm_dp_pending_up_req *up_req)
> +{
> + struct drm_dp_mst_branch *mstb = NULL;
> + struct drm_dp_sideband_msg_req_body *msg = &up_req->msg;
> + struct drm_dp_sideband_msg_hdr *hdr = &up_req->hdr;
> +
> + if (hdr->broadcast) {
> + const u8 *guid = NULL;
> +
> + if (msg->req_type == DP_CONNECTION_STATUS_NOTIFY)
> + guid = msg->u.conn_stat.guid;
> + else if (msg->req_type == DP_RESOURCE_STATUS_NOTIFY)
> + guid = msg->u.resource_stat.guid;
> +
> + mstb = drm_dp_get_mst_branch_device_by_guid(mgr, guid);
> + } else {
> + mstb = drm_dp_get_mst_branch_device(mgr, hdr->lct, hdr->rad);
> + }
> +
> + if (!mstb) {
> + DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",
> +   hdr->lct);
> + return;
> + }
> +
> + /* TODO: Add missing handler for DP_RESOURCE_STATUS_NOTIFY events */
> + if (msg->req_type == DP_CONNECTION_STATUS_NOTIFY) {
> + drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat);
> + drm_kms_helper_hotplug_event(mgr->dev);
> + }
> +
> + drm_dp_mst_topology_put_mstb(mstb);
> +}
> +
> +static void drm_dp_mst_up_req_work(struct work_struct *work)
> +{
> + struct drm_dp_mst_topology_mgr *mgr =
> + container_of(work, struct drm_dp_mst_topology_mgr,
> +  up_req_work);
> + struct drm_dp_pending_up_req *up_req;
> +
> + while (true) {
> + mutex_lock(&mgr->up_r

Re: [PATCH v2 18/27] drm/dp_mst: Remove lies in {up,down}_rep_recv documentation

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:56PM -0400, Lyude Paul wrote:
> These are most certainly accessed from far more than the mgr work. In
> fact, up_req_recv is -only- ever accessed from outside the mgr work.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Reviewed-by: Sean Paul 

> ---
>  include/drm/drm_dp_mst_helper.h | 8 ++--
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h
> index f253ee43e9d9..8ba2a01324bb 100644
> --- a/include/drm/drm_dp_mst_helper.h
> +++ b/include/drm/drm_dp_mst_helper.h
> @@ -489,15 +489,11 @@ struct drm_dp_mst_topology_mgr {
>   int conn_base_id;
>  
>   /**
> -  * @down_rep_recv: Message receiver state for down replies. This and
> -  * @up_req_recv are only ever access from the work item, which is
> -  * serialised.
> +  * @down_rep_recv: Message receiver state for down replies.
>*/
>   struct drm_dp_sideband_msg_rx down_rep_recv;
>   /**
> -  * @up_req_recv: Message receiver state for up requests. This and
> -  * @down_rep_recv are only ever access from the work item, which is
> -  * serialised.
> +  * @up_req_recv: Message receiver state for up requests.
>*/
>   struct drm_dp_sideband_msg_rx up_req_recv;
>  
> -- 
> 2.21.0
> 

-- 
Sean Paul, Software Engineer, Google / Chromium OS

Re: [PATCH v2 17/27] drm/dp_mst: Rename drm_dp_add_port and drm_dp_update_port

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:55PM -0400, Lyude Paul wrote:
> The names for these functions are rather confusing. drm_dp_add_port()
> sounds like a function that would simply create a port and add it to a
> topology, and do nothing more. Similarly, drm_dp_update_port() would be
> assumed to be the function that should be used to update port
> information after initial creation.
> 
> While those assumptions are currently correct in how these functions are
> used, a quick glance at drm_dp_add_port() reveals that drm_dp_add_port()
> can also update the information on a port, and seems explicitly designed
> to do so. This can be explained pretty simply by the fact that there's
> more situations that would involve updating the port information based
> on a link address response as opposed to a connection status
> notification than the driver's initial topology probe. Case in point:
> reprobing link addresses after suspend/resume.
> 
> Since we're about to start using drm_dp_add_port() differently for
> suspend/resume reprobing, let's rename both functions to clarify what
> they actually do.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Reviewed-by: Sean Paul 

> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 17 ++---
>  1 file changed, 10 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 9944ef2ce885..cfaf9eb7ace9 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -1900,9 +1900,10 @@ void drm_dp_mst_connector_early_unregister(struct 
> drm_connector *connector,
>  }
>  EXPORT_SYMBOL(drm_dp_mst_connector_early_unregister);
>  
> -static void drm_dp_add_port(struct drm_dp_mst_branch *mstb,
> - struct drm_device *dev,
> - struct drm_dp_link_addr_reply_port *port_msg)
> +static void
> +drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb,
> + struct drm_device *dev,
> + struct drm_dp_link_addr_reply_port 
> *port_msg)
>  {
>   struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;
>   struct drm_dp_mst_port *port;
> @@ -2011,8 +2012,9 @@ static void drm_dp_add_port(struct drm_dp_mst_branch 
> *mstb,
>   drm_dp_mst_topology_put_port(port);
>  }
>  
> -static void drm_dp_update_port(struct drm_dp_mst_branch *mstb,
> -struct drm_dp_connection_status_notify 
> *conn_stat)
> +static void
> +drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
> + struct drm_dp_connection_status_notify *conn_stat)
>  {
>   struct drm_dp_mst_port *port;
>   int old_ddps;
> @@ -2464,7 +2466,8 @@ static void drm_dp_send_link_address(struct 
> drm_dp_mst_topology_mgr *mgr,
>   drm_dp_check_mstb_guid(mstb, reply->guid);
>  
>   for (i = 0; i < reply->nports; i++)
> - drm_dp_add_port(mstb, mgr->dev, &reply->ports[i]);
> + drm_dp_mst_handle_link_address_port(mstb, mgr->dev,
> + &reply->ports[i]);
>  
>   drm_kms_helper_hotplug_event(mgr->dev);
>  
> @@ -3324,7 +3327,7 @@ static int drm_dp_mst_handle_up_req(struct 
> drm_dp_mst_topology_mgr *mgr)
>   }
>  
>   if (msg.req_type == DP_CONNECTION_STATUS_NOTIFY) {
> - drm_dp_update_port(mstb, &msg.u.conn_stat);
> + drm_dp_mst_handle_conn_stat(mstb, &msg.u.conn_stat);
>  
>   DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d 
> pdt: %d\n",
> msg.u.conn_stat.port_number,
> -- 
> 2.21.0
> 

-- 
Sean Paul, Software Engineer, Google / Chromium OS

Re: [PATCH v2 16/27] drm/dp_mst: Refactor pdt setup/teardown, add more locking

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:54PM -0400, Lyude Paul wrote:
> Since we're going to be implementing suspend/resume reprobing very soon,
> we need to make sure we are extra careful to ensure that our locking
> actually protects the topology state where we expect it to. Turns out
> this isn't the case with drm_dp_port_setup_pdt() and
> drm_dp_port_teardown_pdt(), both of which change port->mstb without
> grabbing &mgr->lock.
> 
> Additionally, since most callers of these functions are just using it to
> teardown the port's previous PDT and setup a new one we can simplify
> things a bit and combine drm_dp_port_setup_pdt() and
> drm_dp_port_teardown_pdt() into a single function:
> drm_dp_port_set_pdt(). This function also handles actually ensuring that
> we grab the correct locks when we need to modify port->mstb.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 
> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 181 +++---
>  include/drm/drm_dp_mst_helper.h   |   6 +-
>  2 files changed, 110 insertions(+), 77 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index d1610434a0cb..9944ef2ce885 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -1487,24 +1487,6 @@ drm_dp_mst_topology_put_mstb(struct drm_dp_mst_branch 
> *mstb)
>   kref_put(&mstb->topology_kref, drm_dp_destroy_mst_branch_device);
>  }
>  
> -static void drm_dp_port_teardown_pdt(struct drm_dp_mst_port *port, int 
> old_pdt)
> -{
> - struct drm_dp_mst_branch *mstb;
> -
> - switch (old_pdt) {
> - case DP_PEER_DEVICE_DP_LEGACY_CONV:
> - case DP_PEER_DEVICE_SST_SINK:
> - /* remove i2c over sideband */
> - drm_dp_mst_unregister_i2c_bus(&port->aux);
> - break;
> - case DP_PEER_DEVICE_MST_BRANCHING:
> - mstb = port->mstb;
> - port->mstb = NULL;
> - drm_dp_mst_topology_put_mstb(mstb);
> - break;
> - }
> -}
> -
>  static void drm_dp_destroy_port(struct kref *kref)
>  {
>   struct drm_dp_mst_port *port =
> @@ -1714,38 +1696,79 @@ static u8 drm_dp_calculate_rad(struct drm_dp_mst_port 
> *port,
>   return parent_lct + 1;
>  }
>  
> -/*
> - * return sends link address for new mstb
> - */
> -static bool drm_dp_port_setup_pdt(struct drm_dp_mst_port *port)
> +static int drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt)
>  {
> - int ret;
> - u8 rad[6], lct;
> - bool send_link = false;
> + struct drm_dp_mst_topology_mgr *mgr = port->mgr;
> + struct drm_dp_mst_branch *mstb;
> + u8 rad[8], lct;
> + int ret = 0;
> +
> + if (port->pdt == new_pdt)

Shouldn't we also ensure that access to port->pdt is also locked?

Sean
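
A minimal sketch of the kind of guarded field access being suggested here
(invented names, plain pthreads rather than the kernel's locking primitives):

```c
#include <pthread.h>

/* Illustrative port-like object whose 'pdt' field is guarded by its
 * own lock, per the review comment above. Names are made up. */
struct fake_port {
	pthread_mutex_t lock;
	int pdt;
};

/* Returns 1 if the pdt changed, 0 if it was already new_pdt. The
 * compare and the update happen under the same lock, so a concurrent
 * caller can't observe a half-done transition. */
static int fake_port_set_pdt(struct fake_port *port, int new_pdt)
{
	int changed = 0;

	pthread_mutex_lock(&port->lock);
	if (port->pdt != new_pdt) {
		/* teardown of the old pdt / setup of the new one
		 * would happen here, still under the lock */
		port->pdt = new_pdt;
		changed = 1;
	}
	pthread_mutex_unlock(&port->lock);
	return changed;
}
```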

> + return 0;
> +
> + /* Teardown the old pdt, if there is one */
> + switch (port->pdt) {
> + case DP_PEER_DEVICE_DP_LEGACY_CONV:
> + case DP_PEER_DEVICE_SST_SINK:
> + /*
> +  * If the new PDT would also have an i2c bus, don't bother
> +  * with reregistering it
> +  */
> + if (new_pdt == DP_PEER_DEVICE_DP_LEGACY_CONV ||
> + new_pdt == DP_PEER_DEVICE_SST_SINK) {
> + port->pdt = new_pdt;
> + return 0;
> + }
> +
> + /* remove i2c over sideband */
> + drm_dp_mst_unregister_i2c_bus(&port->aux);
> + break;
> + case DP_PEER_DEVICE_MST_BRANCHING:
> + mutex_lock(&mgr->lock);
> + drm_dp_mst_topology_put_mstb(port->mstb);
> + port->mstb = NULL;
> + mutex_unlock(&mgr->lock);
> + break;
> + }
> +
> + port->pdt = new_pdt;
>   switch (port->pdt) {
>   case DP_PEER_DEVICE_DP_LEGACY_CONV:
>   case DP_PEER_DEVICE_SST_SINK:
>   /* add i2c over sideband */
>   ret = drm_dp_mst_register_i2c_bus(&port->aux);
>   break;
> +
>   case DP_PEER_DEVICE_MST_BRANCHING:
>   lct = drm_dp_calculate_rad(port, rad);
> + mstb = drm_dp_add_mst_branch_device(lct, rad);
> + if (!mstb) {
> + ret = -ENOMEM;
> + DRM_ERROR("Failed to create MSTB for port %p", port);
> + goto out;
> + }
>  
> - port->mstb = drm_dp_add_mst_branch_device(lct, rad);
> - if (port->mstb) {
> - port->mstb->mgr = port->mgr;
> - port->mstb->port_parent = port;
> - /*
> -  * Make sure this port's memory allocation stays
> -  * around until its child MSTB releases it
> -  */
> - drm_dp_mst_get_port_malloc(port);
> + mutex_lock(&mgr->lock);
> + port

Re: [PATCH v2 14/27] drm/dp_mst: Destroy topology_mgr mutexes

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:52PM -0400, Lyude Paul wrote:
> Turns out we've been forgetting for a while now to actually destroy any
> of the mutexes that we create in drm_dp_mst_topology_mgr. So, let's do
> that.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Cleanup is overrated :)

Reviewed-by: Sean Paul 


> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 5 +
>  1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 74161f442584..2f88cc173500 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -4339,6 +4339,11 @@ void drm_dp_mst_topology_mgr_destroy(struct 
> drm_dp_mst_topology_mgr *mgr)
>   mgr->aux = NULL;
>   drm_atomic_private_obj_fini(&mgr->base);
>   mgr->funcs = NULL;
> +
> + mutex_destroy(&mgr->delayed_destroy_lock);
> + mutex_destroy(&mgr->payload_lock);
> + mutex_destroy(&mgr->qlock);
> + mutex_destroy(&mgr->lock);
>  }
>  EXPORT_SYMBOL(drm_dp_mst_topology_mgr_destroy);
>  
> -- 
> 2.21.0
> 

-- 
Sean Paul, Software Engineer, Google / Chromium OS

Re: [PATCH v2 08/27] drm/dp_mst: Remove PDT teardown in drm_dp_destroy_port() and refactor

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:46PM -0400, Lyude Paul wrote:
> This will allow us to add some locking for port->* members, in
> particular the PDT and ->connector, which can't be done from
> drm_dp_destroy_port() since we don't know what locks the caller might be
> holding.

Might be nice to mention that this is already done in the delayed destroy worker
so readers don't need to go looking for it. Perhaps update this when you apply
the patch.

> 
> Changes since v2:
> * Clarify commit message
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Reviewed-by: Sean Paul 

> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 40 +++
>  1 file changed, 16 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index f5f1d8b50fb6..af3189df28aa 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -1511,31 +1511,22 @@ static void drm_dp_destroy_port(struct kref *kref)
>   container_of(kref, struct drm_dp_mst_port, topology_kref);
>   struct drm_dp_mst_topology_mgr *mgr = port->mgr;
>  
> - if (!port->input) {
> - kfree(port->cached_edid);
> + /* There's nothing that needs locking to destroy an input port yet */
> + if (port->input) {
> + drm_dp_mst_put_port_malloc(port);
> + return;
> + }
>  
> - /*
> -  * The only time we don't have a connector
> -  * on an output port is if the connector init
> -  * fails.
> -  */
> - if (port->connector) {
> - /* we can't destroy the connector here, as
> -  * we might be holding the mode_config.mutex
> -  * from an EDID retrieval */
> + kfree(port->cached_edid);
>  
> - mutex_lock(&mgr->delayed_destroy_lock);
> - list_add(&port->next, &mgr->destroy_port_list);
> - mutex_unlock(&mgr->delayed_destroy_lock);
> - schedule_work(&mgr->delayed_destroy_work);
> - return;
> - }
> - /* no need to clean up vcpi
> -  * as if we have no connector we never setup a vcpi */
> - drm_dp_port_teardown_pdt(port, port->pdt);
> - port->pdt = DP_PEER_DEVICE_NONE;
> - }
> - drm_dp_mst_put_port_malloc(port);
> + /*
> +  * we can't destroy the connector here, as we might be holding the
> +  * mode_config.mutex from an EDID retrieval
> +  */
> + mutex_lock(&mgr->delayed_destroy_lock);
> + list_add(&port->next, &mgr->destroy_port_list);
> + mutex_unlock(&mgr->delayed_destroy_lock);
> + schedule_work(&mgr->delayed_destroy_work);
>  }
>  
>  /**
> @@ -3998,7 +3989,8 @@ static void drm_dp_tx_work(struct work_struct *work)
>  static inline void
>  drm_dp_delayed_destroy_port(struct drm_dp_mst_port *port)
>  {
> - port->mgr->cbs->destroy_connector(port->mgr, port->connector);
> + if (port->connector)
> + port->mgr->cbs->destroy_connector(port->mgr, port->connector);
>  
>   drm_dp_port_teardown_pdt(port, port->pdt);
>   port->pdt = DP_PEER_DEVICE_NONE;
> -- 
> 2.21.0
> 

-- 
Sean Paul, Software Engineer, Google / Chromium OS


Re: [PATCH 3/3] drm/amdkfd: Remove the control stack workaround for GFX10

2019-09-25 Thread Zhao, Yong
Yes. I confirmed with the CP guys and they said the behavior on GFX10 is the 
same as GFX8 now. I remember that the workaround on GFX9 was to help 
with a HW bug, but I'm not too sure.

Regards,

Yong

On 2019-09-25 2:25 p.m., Kuehling, Felix wrote:
> On 2019-09-25 2:15 p.m., Zhao, Yong wrote:
>> The GFX10 does not have this hardware bug any more, so remove it.
> I wouldn't call this a bug and a workaround. More like a change in the
> HW or FW behaviour and a corresponding driver change. I.e. in GFXv8 the
> control stack was in the user mode CWSR allocation. In GFXv9 it moved
> into a kernel mode buffer next to the MQD. So in GFXv10 the control
> stack moved back into the user mode CWSR buffer?
>
> Regards,
>     Felix
>
>> Change-Id: I446c9685549a09ac8846a42ee22d86cfb93fd98c
>> Signed-off-by: Yong Zhao 
>> ---
>>.../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  | 37 ++-
>>1 file changed, 4 insertions(+), 33 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c 
>> b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
>> index 9cd3eb2d90bd..4a236b2c2354 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
>> @@ -69,35 +69,13 @@ static void update_cu_mask(struct mqd_manager *mm, void 
>> *mqd,
>>static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
>>  struct queue_properties *q)
>>{
>> -int retval;
>> -struct kfd_mem_obj *mqd_mem_obj = NULL;
>> +struct kfd_mem_obj *mqd_mem_obj;
>>
>> -/* From V9,  for CWSR, the control stack is located on the next page
>> - * boundary after the mqd, we will use the gtt allocation function
>> - * instead of sub-allocation function.
>> - */
>> -if (kfd->cwsr_enabled && (q->type == KFD_QUEUE_TYPE_COMPUTE)) {
>> -mqd_mem_obj = kzalloc(sizeof(struct kfd_mem_obj), GFP_NOIO);
>> -if (!mqd_mem_obj)
>> -return NULL;
>> -retval = amdgpu_amdkfd_alloc_gtt_mem(kfd->kgd,
>> -ALIGN(q->ctl_stack_size, PAGE_SIZE) +
>> -ALIGN(sizeof(struct v10_compute_mqd), 
>> PAGE_SIZE),
>> -&(mqd_mem_obj->gtt_mem),
>> -&(mqd_mem_obj->gpu_addr),
>> -(void *)&(mqd_mem_obj->cpu_ptr), true);
>> -} else {
>> -retval = kfd_gtt_sa_allocate(kfd, sizeof(struct 
>> v10_compute_mqd),
>> -&mqd_mem_obj);
>> -}
>> -
>> -if (retval) {
>> -kfree(mqd_mem_obj);
>> +if (kfd_gtt_sa_allocate(kfd, sizeof(struct v10_compute_mqd),
>> +&mqd_mem_obj))
>>  return NULL;
>> -}
>>
>>  return mqd_mem_obj;
>> -
>>}
>>
>>static void init_mqd(struct mqd_manager *mm, void **mqd,
>> @@ -250,14 +228,7 @@ static int destroy_mqd(struct mqd_manager *mm, void 
>> *mqd,
>>static void free_mqd(struct mqd_manager *mm, void *mqd,
>>  struct kfd_mem_obj *mqd_mem_obj)
>>{
>> -struct kfd_dev *kfd = mm->dev;
>> -
>> -if (mqd_mem_obj->gtt_mem) {
>> -amdgpu_amdkfd_free_gtt_mem(kfd->kgd, mqd_mem_obj->gtt_mem);
>> -kfree(mqd_mem_obj);
>> -} else {
>> -kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
>> -}
>> +kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
>>}
>>
>>static bool is_occupied(struct mqd_manager *mm, void *mqd,
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH 3/3] drm/amdkfd: Remove the control stack workaround for GFX10

2019-09-25 Thread Kuehling, Felix
On 2019-09-25 2:15 p.m., Zhao, Yong wrote:
> The GFX10 does not have this hardware bug any more, so remove it.

I wouldn't call this a bug and a workaround. More like a change in the 
HW or FW behaviour and a corresponding driver change. I.e. in GFXv8 the 
control stack was in the user mode CWSR allocation. In GFXv9 it moved 
into a kernel mode buffer next to the MQD. So in GFXv10 the control 
stack moved back into the user mode CWSR buffer?

Regards,
   Felix

>
> Change-Id: I446c9685549a09ac8846a42ee22d86cfb93fd98c
> Signed-off-by: Yong Zhao 
> ---
>   .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  | 37 ++-
>   1 file changed, 4 insertions(+), 33 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c 
> b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
> index 9cd3eb2d90bd..4a236b2c2354 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
> @@ -69,35 +69,13 @@ static void update_cu_mask(struct mqd_manager *mm, void 
> *mqd,
>   static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
>   struct queue_properties *q)
>   {
> - int retval;
> - struct kfd_mem_obj *mqd_mem_obj = NULL;
> + struct kfd_mem_obj *mqd_mem_obj;
>   
> - /* From V9,  for CWSR, the control stack is located on the next page
> -  * boundary after the mqd, we will use the gtt allocation function
> -  * instead of sub-allocation function.
> -  */
> - if (kfd->cwsr_enabled && (q->type == KFD_QUEUE_TYPE_COMPUTE)) {
> - mqd_mem_obj = kzalloc(sizeof(struct kfd_mem_obj), GFP_NOIO);
> - if (!mqd_mem_obj)
> - return NULL;
> - retval = amdgpu_amdkfd_alloc_gtt_mem(kfd->kgd,
> - ALIGN(q->ctl_stack_size, PAGE_SIZE) +
> - ALIGN(sizeof(struct v10_compute_mqd), 
> PAGE_SIZE),
> - &(mqd_mem_obj->gtt_mem),
> - &(mqd_mem_obj->gpu_addr),
> - (void *)&(mqd_mem_obj->cpu_ptr), true);
> - } else {
> - retval = kfd_gtt_sa_allocate(kfd, sizeof(struct 
> v10_compute_mqd),
> - &mqd_mem_obj);
> - }
> -
> - if (retval) {
> - kfree(mqd_mem_obj);
> + if (kfd_gtt_sa_allocate(kfd, sizeof(struct v10_compute_mqd),
> + &mqd_mem_obj))
>   return NULL;
> - }
>   
>   return mqd_mem_obj;
> -
>   }
>   
>   static void init_mqd(struct mqd_manager *mm, void **mqd,
> @@ -250,14 +228,7 @@ static int destroy_mqd(struct mqd_manager *mm, void *mqd,
>   static void free_mqd(struct mqd_manager *mm, void *mqd,
>   struct kfd_mem_obj *mqd_mem_obj)
>   {
> - struct kfd_dev *kfd = mm->dev;
> -
> - if (mqd_mem_obj->gtt_mem) {
> - amdgpu_amdkfd_free_gtt_mem(kfd->kgd, mqd_mem_obj->gtt_mem);
> - kfree(mqd_mem_obj);
> - } else {
> - kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
> - }
> + kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
>   }
>   
>   static bool is_occupied(struct mqd_manager *mm, void *mqd,

Re: [PATCH v2 04/27] drm/dp_mst: Move test_calc_pbn_mode() into an actual selftest

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:42PM -0400, Lyude Paul wrote:
> Yes, apparently we've been testing this for every single driver load for
> quite a long time now. At least that means our PBN calculation is solid!
> 
> Anyway, introduce self tests for MST and move this into there.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Reviewed-by: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Reviewed-by: Sean Paul 

> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 27 ---
>  drivers/gpu/drm/selftests/Makefile|  2 +-
>  .../gpu/drm/selftests/drm_modeset_selftests.h |  1 +
>  .../drm/selftests/test-drm_dp_mst_helper.c| 34 +++
>  .../drm/selftests/test-drm_modeset_common.h   |  1 +
>  5 files changed, 37 insertions(+), 28 deletions(-)
>  create mode 100644 drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 738f260d4b15..6f7f449ca12b 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -47,7 +47,6 @@
>   */
>  static bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr,
> char *buf);
> -static int test_calc_pbn_mode(void);
>  
>  static void drm_dp_mst_topology_put_port(struct drm_dp_mst_port *port);
>  
> @@ -3561,30 +3560,6 @@ int drm_dp_calc_pbn_mode(int clock, int bpp)
>  }
>  EXPORT_SYMBOL(drm_dp_calc_pbn_mode);
>  
> -static int test_calc_pbn_mode(void)
> -{
> - int ret;
> - ret = drm_dp_calc_pbn_mode(154000, 30);
> - if (ret != 689) {
> - DRM_ERROR("PBN calculation test failed - clock %d, bpp %d, 
> expected PBN %d, actual PBN %d.\n",
> - 154000, 30, 689, ret);
> - return -EINVAL;
> - }
> - ret = drm_dp_calc_pbn_mode(234000, 30);
> - if (ret != 1047) {
> - DRM_ERROR("PBN calculation test failed - clock %d, bpp %d, 
> expected PBN %d, actual PBN %d.\n",
> - 234000, 30, 1047, ret);
> - return -EINVAL;
> - }
> - ret = drm_dp_calc_pbn_mode(297000, 24);
> - if (ret != 1063) {
> - DRM_ERROR("PBN calculation test failed - clock %d, bpp %d, 
> expected PBN %d, actual PBN %d.\n",
> - 297000, 24, 1063, ret);
> - return -EINVAL;
> - }
> - return 0;
> -}
> -
>  /* we want to kick the TX after we've ack the up/down IRQs. */
>  static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr)
>  {
> @@ -4033,8 +4008,6 @@ int drm_dp_mst_topology_mgr_init(struct 
> drm_dp_mst_topology_mgr *mgr,
>   if (!mgr->proposed_vcpis)
>   return -ENOMEM;
>   set_bit(0, &mgr->payload_mask);
> - if (test_calc_pbn_mode() < 0)
> - DRM_ERROR("MST PBN self-test failed\n");
>  
>   mst_state = kzalloc(sizeof(*mst_state), GFP_KERNEL);
>   if (mst_state == NULL)
> diff --git a/drivers/gpu/drm/selftests/Makefile 
> b/drivers/gpu/drm/selftests/Makefile
> index aae88f8a016c..d2137342b371 100644
> --- a/drivers/gpu/drm/selftests/Makefile
> +++ b/drivers/gpu/drm/selftests/Makefile
> @@ -1,6 +1,6 @@
>  # SPDX-License-Identifier: GPL-2.0-only
>  test-drm_modeset-y := test-drm_modeset_common.o test-drm_plane_helper.o \
>test-drm_format.o test-drm_framebuffer.o \
> -   test-drm_damage_helper.o
> +   test-drm_damage_helper.o test-drm_dp_mst_helper.o
>  
>  obj-$(CONFIG_DRM_DEBUG_SELFTEST) += test-drm_mm.o test-drm_modeset.o 
> test-drm_cmdline_parser.o
> diff --git a/drivers/gpu/drm/selftests/drm_modeset_selftests.h 
> b/drivers/gpu/drm/selftests/drm_modeset_selftests.h
> index 464753746013..dec3ee3ec96f 100644
> --- a/drivers/gpu/drm/selftests/drm_modeset_selftests.h
> +++ b/drivers/gpu/drm/selftests/drm_modeset_selftests.h
> @@ -32,3 +32,4 @@ selftest(damage_iter_damage_one_intersect, 
> igt_damage_iter_damage_one_intersect)
>  selftest(damage_iter_damage_one_outside, igt_damage_iter_damage_one_outside)
>  selftest(damage_iter_damage_src_moved, igt_damage_iter_damage_src_moved)
>  selftest(damage_iter_damage_not_visible, igt_damage_iter_damage_not_visible)
> +selftest(dp_mst_calc_pbn_mode, igt_dp_mst_calc_pbn_mode)
> diff --git a/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c 
> b/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
> new file mode 100644
> index ..9baa5171988d
> --- /dev/null
> +++ b/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
> @@ -0,0 +1,34 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Test cases for for the DRM DP MST helpers
> + */
> +
> +#include 
> +#include 
> +
> +#include "test-drm_modeset_common.h"
> +
> +int igt_dp_mst_calc_pbn_mode(void *ignored)
> +{
> + int pbn, i;
> + const struct {
> + int rate;
> + int bpp;
> + int expected;
> + } test_params[] = {
> + { 

Re: [PATCH v2 03/27] drm/dp_mst: Destroy MSTBs asynchronously

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:41PM -0400, Lyude Paul wrote:
> When reprobing an MST topology during resume, we have to account for the
> fact that while we were suspended it's possible that mstbs may have been
> removed from any ports in the topology. Since iterating downwards in the
> topology requires that we hold &mgr->lock, destroying MSTBs from this
> context would result in attempting to lock &mgr->lock a second time and
> deadlocking.
> 
> So, fix this by first moving destruction of MSTBs into
> destroy_connector_work, then rename destroy_connector_work and friends
> to reflect that they now destroy both ports and mstbs.
> 
> Changes since v1:
> * s/destroy_connector_list/destroy_port_list/
>   s/connector_destroy_lock/delayed_destroy_lock/
>   s/connector_destroy_work/delayed_destroy_work/
>   s/drm_dp_finish_destroy_branch_device/drm_dp_delayed_destroy_mstb/
>   s/drm_dp_finish_destroy_port/drm_dp_delayed_destroy_port/
>   - danvet
> * Use two loops in drm_dp_delayed_destroy_work() - danvet
> * Better explain why we need to do this - danvet
> * Use cancel_work_sync() instead of flush_work() - flush_work() doesn't
>   account for work requeing
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Cc: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Took me a while to grok this, and I'm still not 100% confident my mental model
is correct, so please bear with me while I ask silly questions :)

Now that the destroy is delayed, and the port remains in the topology, is it
possible we will underflow the topology kref by calling put_mstb multiple times?
It looks like that would result in a WARN from refcount.c, and wouldn't call the
destroy function multiple times, so that's nice :)

Similarly, is there any defense against calling get_mstb() between destroy() and
the delayed destroy worker running?

Sean

> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 162 +-
>  include/drm/drm_dp_mst_helper.h   |  26 +++--
>  2 files changed, 127 insertions(+), 61 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 3054ec622506..738f260d4b15 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -1113,34 +1113,17 @@ static void drm_dp_destroy_mst_branch_device(struct 
> kref *kref)
>   struct drm_dp_mst_branch *mstb =
>   container_of(kref, struct drm_dp_mst_branch, topology_kref);
>   struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;
> - struct drm_dp_mst_port *port, *tmp;
> - bool wake_tx = false;
>  
> - mutex_lock(&mgr->lock);
> - list_for_each_entry_safe(port, tmp, &mstb->ports, next) {
> - list_del(&port->next);
> - drm_dp_mst_topology_put_port(port);
> - }
> - mutex_unlock(&mgr->lock);
> -
> - /* drop any tx slots msg */
> - mutex_lock(&mstb->mgr->qlock);
> - if (mstb->tx_slots[0]) {
> - mstb->tx_slots[0]->state = DRM_DP_SIDEBAND_TX_TIMEOUT;
> - mstb->tx_slots[0] = NULL;
> - wake_tx = true;
> - }
> - if (mstb->tx_slots[1]) {
> - mstb->tx_slots[1]->state = DRM_DP_SIDEBAND_TX_TIMEOUT;
> - mstb->tx_slots[1] = NULL;
> - wake_tx = true;
> - }
> - mutex_unlock(&mstb->mgr->qlock);
> + INIT_LIST_HEAD(&mstb->destroy_next);
>  
> - if (wake_tx)
> - wake_up_all(&mstb->mgr->tx_waitq);
> -
> - drm_dp_mst_put_mstb_malloc(mstb);
> + /*
> +  * This can get called under mgr->mutex, so we need to perform the
> +  * actual destruction of the mstb in another worker
> +  */
> + mutex_lock(&mgr->delayed_destroy_lock);
> + list_add(&mstb->destroy_next, &mgr->destroy_branch_device_list);
> + mutex_unlock(&mgr->delayed_destroy_lock);
> + schedule_work(&mgr->delayed_destroy_work);
>  }
>  
>  /**
> @@ -1255,10 +1238,10 @@ static void drm_dp_destroy_port(struct kref *kref)
>* we might be holding the mode_config.mutex
>* from an EDID retrieval */
>  
> - mutex_lock(&mgr->destroy_connector_lock);
> - list_add(&port->next, &mgr->destroy_connector_list);
> - mutex_unlock(&mgr->destroy_connector_lock);
> - schedule_work(&mgr->destroy_connector_work);
> + mutex_lock(&mgr->delayed_destroy_lock);
> + list_add(&port->next, &mgr->destroy_port_list);
> + mutex_unlock(&mgr->delayed_destroy_lock);
> + schedule_work(&mgr->delayed_destroy_work);
>   return;
>   }
>   /* no need to clean up vcpi
> @@ -2792,7 +2775,7 @@ void drm_dp_mst_topology_mgr_suspend(struct 
> drm_dp_mst_topology_mgr *mgr)
>  DP_MST_EN | DP_UPSTREAM_IS_SRC);
>   mutex_unlock(&mgr->lock);
>   flush_work(&mgr->work);
> -  

[PATCH 3/3] drm/amdkfd: Remove the control stack workaround for GFX10

2019-09-25 Thread Zhao, Yong
The GFX10 does not have this hardware bug any more, so remove it.

Change-Id: I446c9685549a09ac8846a42ee22d86cfb93fd98c
Signed-off-by: Yong Zhao 
---
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c  | 37 ++-
 1 file changed, 4 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
index 9cd3eb2d90bd..4a236b2c2354 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
@@ -69,35 +69,13 @@ static void update_cu_mask(struct mqd_manager *mm, void 
*mqd,
 static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
struct queue_properties *q)
 {
-   int retval;
-   struct kfd_mem_obj *mqd_mem_obj = NULL;
+   struct kfd_mem_obj *mqd_mem_obj;
 
-   /* From V9,  for CWSR, the control stack is located on the next page
-* boundary after the mqd, we will use the gtt allocation function
-* instead of sub-allocation function.
-*/
-   if (kfd->cwsr_enabled && (q->type == KFD_QUEUE_TYPE_COMPUTE)) {
-   mqd_mem_obj = kzalloc(sizeof(struct kfd_mem_obj), GFP_NOIO);
-   if (!mqd_mem_obj)
-   return NULL;
-   retval = amdgpu_amdkfd_alloc_gtt_mem(kfd->kgd,
-   ALIGN(q->ctl_stack_size, PAGE_SIZE) +
-   ALIGN(sizeof(struct v10_compute_mqd), 
PAGE_SIZE),
-   &(mqd_mem_obj->gtt_mem),
-   &(mqd_mem_obj->gpu_addr),
-   (void *)&(mqd_mem_obj->cpu_ptr), true);
-   } else {
-   retval = kfd_gtt_sa_allocate(kfd, sizeof(struct 
v10_compute_mqd),
-   &mqd_mem_obj);
-   }
-
-   if (retval) {
-   kfree(mqd_mem_obj);
+   if (kfd_gtt_sa_allocate(kfd, sizeof(struct v10_compute_mqd),
+   &mqd_mem_obj))
return NULL;
-   }
 
return mqd_mem_obj;
-
 }
 
 static void init_mqd(struct mqd_manager *mm, void **mqd,
@@ -250,14 +228,7 @@ static int destroy_mqd(struct mqd_manager *mm, void *mqd,
 static void free_mqd(struct mqd_manager *mm, void *mqd,
struct kfd_mem_obj *mqd_mem_obj)
 {
-   struct kfd_dev *kfd = mm->dev;
-
-   if (mqd_mem_obj->gtt_mem) {
-   amdgpu_amdkfd_free_gtt_mem(kfd->kgd, mqd_mem_obj->gtt_mem);
-   kfree(mqd_mem_obj);
-   } else {
-   kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
-   }
+   kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
 }
 
 static bool is_occupied(struct mqd_manager *mm, void *mqd,
-- 
2.17.1


[PATCH 2/3] drm/amdkfd: Use setup_vm_pt_regs function from base driver in KFD

2019-09-25 Thread Zhao, Yong
This was done for GFX9 previously; now do it for GFX10.

Change-Id: I4442e60534c59bc9526a673559f018ba8058deac
Signed-off-by: Yong Zhao 
---
 .../drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c| 23 +++
 1 file changed, 3 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c
index fe5b702c75ce..64568ed32793 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c
@@ -42,6 +42,7 @@
 #include "v10_structs.h"
 #include "nv.h"
 #include "nvd.h"
+#include "gfxhub_v2_0.h"
 
 enum hqd_dequeue_request_type {
NO_ACTION = 0,
@@ -251,11 +252,6 @@ static int kgd_set_pasid_vmid_mapping(struct kgd_dev *kgd, 
unsigned int pasid,
ATC_VMID0_PASID_MAPPING__VALID_MASK;
 
pr_debug("pasid 0x%x vmid %d, reg value %x\n", pasid, vmid, 
pasid_mapping);
-   /*
-* need to do this twice, once for gfx and once for mmhub
-* for ATC add 16 to VMID for mmhub, for IH different registers.
-* ATC_VMID0..15 registers are separate from ATC_VMID16..31.
-*/
 
pr_debug("ATHUB, reg %x\n", SOC15_REG_OFFSET(ATHUB, 0, 
mmATC_VMID0_PASID_MAPPING) + vmid);
WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATC_VMID0_PASID_MAPPING) + vmid,
@@ -910,7 +906,6 @@ static void set_vm_context_page_table_base(struct kgd_dev 
*kgd, uint32_t vmid,
uint64_t page_table_base)
 {
struct amdgpu_device *adev = get_amdgpu_device(kgd);
-   uint64_t base = page_table_base | AMDGPU_PTE_VALID;
 
if (!amdgpu_amdkfd_is_kfd_vmid(adev, vmid)) {
pr_err("trying to set page table base for wrong VMID %u\n",
@@ -918,18 +913,6 @@ static void set_vm_context_page_table_base(struct kgd_dev 
*kgd, uint32_t vmid,
return;
}
 
-   /* TODO: take advantage of per-process address space size. For
-* now, all processes share the same address space size, like
-* on GFX8 and older.
-*/
-   WREG32(SOC15_REG_OFFSET(GC, 0, 
mmGCVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32) + (vmid*2), 0);
-   WREG32(SOC15_REG_OFFSET(GC, 0, 
mmGCVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32) + (vmid*2), 0);
-
-   WREG32(SOC15_REG_OFFSET(GC, 0, 
mmGCVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32) + (vmid*2),
-   lower_32_bits(adev->vm_manager.max_pfn - 1));
-   WREG32(SOC15_REG_OFFSET(GC, 0, 
mmGCVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32) + (vmid*2),
-   upper_32_bits(adev->vm_manager.max_pfn - 1));
-
-   WREG32(SOC15_REG_OFFSET(GC, 0, 
mmGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32) + (vmid*2), lower_32_bits(base));
-   WREG32(SOC15_REG_OFFSET(GC, 0, 
mmGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32) + (vmid*2), upper_32_bits(base));
+   /* SDMA is on gfxhub as well on Navi1* series */
+   gfxhub_v2_0_setup_vm_pt_regs(adev, vmid, page_table_base);
 }
-- 
2.17.1


[PATCH 1/3] drm/amdgpu: Export setup_vm_pt_regs() logic for gfxhub 2.0

2019-09-25 Thread Zhao, Yong
The KFD code will call this function later.

Change-Id: I88a53368cdee719b2c75393e5cdbd8290584548e
Signed-off-by: Yong Zhao 
---
 drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c | 20 
 drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.h |  2 ++
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
index a9238735d361..b601c6740ef5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
@@ -46,21 +46,25 @@ u64 gfxhub_v2_0_get_mc_fb_offset(struct amdgpu_device *adev)
return (u64)RREG32_SOC15(GC, 0, mmGCMC_VM_FB_OFFSET) << 24;
 }
 
-static void gfxhub_v2_0_init_gart_pt_regs(struct amdgpu_device *adev)
+void gfxhub_v2_0_setup_vm_pt_regs(struct amdgpu_device *adev, uint32_t vmid,
+   uint64_t page_table_base)
 {
-   uint64_t value = amdgpu_gmc_pd_addr(adev->gart.bo);
+   /* two registers distance between mmGCVM_CONTEXT0_* to 
mmGCVM_CONTEXT1_* */
+   int offset = mmGCVM_CONTEXT1_PAGE_TABLE_BASE_ADDR_LO32
+   - mmGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32;
 
+   WREG32_SOC15_OFFSET(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32,
+   offset * vmid, lower_32_bits(page_table_base));
 
-   WREG32_SOC15(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32,
-lower_32_bits(value));
-
-   WREG32_SOC15(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32,
-upper_32_bits(value));
+   WREG32_SOC15_OFFSET(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32,
+   offset * vmid, upper_32_bits(page_table_base));
 }
 
 static void gfxhub_v2_0_init_gart_aperture_regs(struct amdgpu_device *adev)
 {
-   gfxhub_v2_0_init_gart_pt_regs(adev);
+   uint64_t pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);
+
+   gfxhub_v2_0_setup_vm_pt_regs(adev, 0, pt_base);
 
WREG32_SOC15(GC, 0, mmGCVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32,
 (u32)(adev->gmc.gart_start >> 12));
diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.h 
b/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.h
index 06807940748b..392b8cd94fc0 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.h
+++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.h
@@ -31,5 +31,7 @@ void gfxhub_v2_0_set_fault_enable_default(struct 
amdgpu_device *adev,
  bool value);
 void gfxhub_v2_0_init(struct amdgpu_device *adev);
 u64 gfxhub_v2_0_get_mc_fb_offset(struct amdgpu_device *adev);
+void gfxhub_v2_0_setup_vm_pt_regs(struct amdgpu_device *adev, uint32_t vmid,
+   uint64_t page_table_base);
 
 #endif
-- 
2.17.1


Re: [PATCH v2 02/27] drm/dp_mst: Get rid of list clear in destroy_connector_work

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:40PM -0400, Lyude Paul wrote:
> This seems to be some leftover detritus from before the port/mstb kref
> cleanup and doesn't do anything anymore, so get rid of it.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Reviewed-by: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Reviewed-by: Sean Paul 

> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 36db66a0ddb1..3054ec622506 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -3760,8 +3760,6 @@ static void drm_dp_destroy_connector_work(struct 
> work_struct *work)
>   list_del(&port->next);
>   mutex_unlock(&mgr->destroy_connector_lock);
>  
> - INIT_LIST_HEAD(&port->next);
> -
>   mgr->cbs->destroy_connector(mgr, port->connector);
>  
>   drm_dp_port_teardown_pdt(port, port->pdt);
> -- 
> 2.21.0
> 

-- 
Sean Paul, Software Engineer, Google / Chromium OS

Re: [PATCH v2 01/27] drm/dp_mst: Move link address dumping into a function

2019-09-25 Thread Sean Paul
On Tue, Sep 03, 2019 at 04:45:39PM -0400, Lyude Paul wrote:
> Makes things easier to read.
> 
> Cc: Juston Li 
> Cc: Imre Deak 
> Cc: Ville Syrjälä 
> Cc: Harry Wentland 
> Reviewed-by: Daniel Vetter 
> Signed-off-by: Lyude Paul 

Reviewed-by: Sean Paul 

> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 35 ++-
>  1 file changed, 23 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 82add736e17d..36db66a0ddb1 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -2103,6 +2103,28 @@ static void drm_dp_queue_down_tx(struct 
> drm_dp_mst_topology_mgr *mgr,
>   mutex_unlock(&mgr->qlock);
>  }
>  
> +static void
> +drm_dp_dump_link_address(struct drm_dp_link_address_ack_reply *reply)
> +{
> + struct drm_dp_link_addr_reply_port *port_reply;
> + int i;
> +
> + for (i = 0; i < reply->nports; i++) {
> + port_reply = &reply->ports[i];
> + DRM_DEBUG_KMS("port %d: input %d, pdt: %d, pn: %d, dpcd_rev: 
> %02x, mcs: %d, ddps: %d, ldps %d, sdp %d/%d\n",
> +   i,
> +   port_reply->input_port,
> +   port_reply->peer_device_type,
> +   port_reply->port_number,
> +   port_reply->dpcd_revision,
> +   port_reply->mcs,
> +   port_reply->ddps,
> +   port_reply->legacy_device_plug_status,
> +   port_reply->num_sdp_streams,
> +   port_reply->num_sdp_stream_sinks);
> + }
> +}
> +
>  static void drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
>struct drm_dp_mst_branch *mstb)
>  {
> @@ -2128,18 +2150,7 @@ static void drm_dp_send_link_address(struct 
> drm_dp_mst_topology_mgr *mgr,
>   DRM_DEBUG_KMS("link address nak received\n");
>   } else {
>   DRM_DEBUG_KMS("link address reply: %d\n", 
> txmsg->reply.u.link_addr.nports);
> - for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) {
> - DRM_DEBUG_KMS("port %d: input %d, pdt: %d, pn: 
> %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps %d, sdp %d/%d\n", i,
> -
> txmsg->reply.u.link_addr.ports[i].input_port,
> -
> txmsg->reply.u.link_addr.ports[i].peer_device_type,
> -
> txmsg->reply.u.link_addr.ports[i].port_number,
> -
> txmsg->reply.u.link_addr.ports[i].dpcd_revision,
> -txmsg->reply.u.link_addr.ports[i].mcs,
> -txmsg->reply.u.link_addr.ports[i].ddps,
> -
> txmsg->reply.u.link_addr.ports[i].legacy_device_plug_status,
> -
> txmsg->reply.u.link_addr.ports[i].num_sdp_streams,
> -
> txmsg->reply.u.link_addr.ports[i].num_sdp_stream_sinks);
> - }
> + drm_dp_dump_link_address(&txmsg->reply.u.link_addr);
>  
>   drm_dp_check_mstb_guid(mstb, 
> txmsg->reply.u.link_addr.guid);
>  
> -- 
> 2.21.0
> 

-- 
Sean Paul, Software Engineer, Google / Chromium OS

Re: [PATCH v3 10/11] drm/amdgpu: job is secure iff CS is secure (v4)

2019-09-25 Thread Tuikov, Luben
On 2019-09-25 10:54, Huang, Ray wrote:
>> -Original Message-
>> From: Koenig, Christian 
>> Sent: Wednesday, September 25, 2019 10:47 PM
>> To: Huang, Ray ; amd-gfx@lists.freedesktop.org; dri-
>> de...@lists.freedesktop.org; Deucher, Alexander
>> 
>> Cc: Tuikov, Luben ; Liu, Aaron
>> 
>> Subject: Re: [PATCH v3 10/11] drm/amdgpu: job is secure iff CS is secure (v4)
>>
>> Am 25.09.19 um 16:38 schrieb Huang, Ray:
>>> Mark a job as secure, if and only if the command submission flag has
>>> the secure flag set.
>>>
>>> v2: fix the null job pointer while in vmid 0 submission.
>>> v3: Context --> Command submission.
>>> v4: filling cs parser with cs->in.flags
>>>
>>> Signed-off-by: Huang Rui 
>>> Co-developed-by: Luben Tuikov 
>>> Signed-off-by: Luben Tuikov 
>>> Reviewed-by: Alex Deucher 
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h |  3 +++
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 11 ++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  |  4 ++--
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  2 ++
>>>   4 files changed, 17 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> index 697e8e5..fd60695 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> @@ -485,6 +485,9 @@ struct amdgpu_cs_parser {
>>> uint64_tbytes_moved;
>>> uint64_tbytes_moved_vis;
>>>
>>> +   /* secure cs */
>>> +   boolis_secure;
>>> +
>>> /* user fence */
>>> struct amdgpu_bo_list_entry uf_entry;
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>> index 51f3db0..9038dc1 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>> @@ -133,6 +133,14 @@ static int amdgpu_cs_parser_init(struct
>> amdgpu_cs_parser *p, union drm_amdgpu_cs
>>> goto free_chunk;
>>> }
>>>
>>> +   /**

NAK to the double-star flag-pole top here--this is NOT a kernel-doc
function comment. Since you already have an empty line immediately BEFORE
this comment, you do not need the flag-pole top "/**" adding yet another
semi-empty line. Just start your comment normally:

/* The command submission ...
 *
...
 */
p->is_secure = ...

Regards,
Luben

>>> +* The command submission (cs) is a union, so an assignment to
>>> +* 'out' is destructive to the cs (at least the first 8
>>> +* bytes). For this reason, inquire about the flags before the
>>> +* assignment to 'out'.
>>> +*/
>>> +   p->is_secure = cs->in.flags & AMDGPU_CS_FLAGS_SECURE;
>>> +
>>> /* get chunks */
>>> chunk_array_user = u64_to_user_ptr(cs->in.chunks);
>>> if (copy_from_user(chunk_array, chunk_array_user, @@ -1252,8
>>> +1260,9 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>>> p->ctx->preamble_presented = true;
>>> }
>>>
>>> -   cs->out.handle = seq;
>>> +   job->secure = p->is_secure;
>>> job->uf_sequence = seq;
>>> +   cs->out.handle = seq;
>>
>> At least it is no longer accessing cs->in, but that still looks like the 
>> wrong place
>> to initialize the job.
>>
>> Why can't we fill that in directly after amdgpu_job_alloc() ?
> 
> There is no input member of amdgpu_job_alloc() that is secure-related, unless 
> we add one:
>  
> amdgpu_job_alloc(adev, num_ibs, job, vm, secure)
> 
> It looks too much, isn't it?
> 
> Thanks,
> Ray
> 
>>
>> Regards,
>> Christian.
>>
>>>
>>> amdgpu_job_free_resources(job);
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
>>> index e1dc229..cb9b650 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
>>> @@ -210,7 +210,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring,
>> unsigned num_ibs,
>>> if (job && ring->funcs->emit_cntxcntl) {
>>> status |= job->preamble_status;
>>> status |= job->preemption_status;
>>> -   amdgpu_ring_emit_cntxcntl(ring, status, false);
>>> +   amdgpu_ring_emit_cntxcntl(ring, status, job->secure);
>>> }
>>>
>>> for (i = 0; i < num_ibs; ++i) {
>>> @@ -229,7 +229,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring,
>> unsigned num_ibs,
>>> }
>>>
>>> if (ring->funcs->emit_tmz)
>>> -   amdgpu_ring_emit_tmz(ring, false, false);
>>> +   amdgpu_ring_emit_tmz(ring, false, job ? job->secure : false);
>>>
>>>   #ifdef CONFIG_X86_64
>>> if (!(adev->flags & AMD_IS_APU))
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> index dc7ee93..aa0e375 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> @@ -63,6 +63,8 @@ struct amdgpu_job {
>>> uint64_tuf_addr;
>>> uint64_tuf_sequence;
>>>
>>> +   /* the job is due to a secure command submission */
>>> +   boolsecure;

RE: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side

2019-09-25 Thread Huang, Ray
> -Original Message-
> From: amd-gfx  On Behalf Of Liu,
> Shaoyun
> Sent: Wednesday, September 25, 2019 6:14 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Liu, Shaoyun 
> Subject: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side
> 
> Add device info for both navi12 PF and VF
> 
> Change-Id: Ifb4035e65c12d153fc30e593fe109f9c7e0541f4
> Signed-off-by: shaoyunl 

Reviewed-by: Huang Rui 

> ---
>  drivers/gpu/drm/amd/amdkfd/kfd_device.c | 19 +++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> index f329b82..edfbae5c 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> @@ -387,6 +387,24 @@ static const struct kfd_device_info
> navi10_device_info = {
>   .num_sdma_queues_per_engine = 8,
>  };
> 
> +static const struct kfd_device_info navi12_device_info = {
> + .asic_family = CHIP_NAVI12,
> + .asic_name = "navi12",
> + .max_pasid_bits = 16,
> + .max_no_of_hqd  = 24,
> + .doorbell_size  = 8,
> + .ih_ring_entry_size = 8 * sizeof(uint32_t),
> + .event_interrupt_class = &event_interrupt_class_v9,
> + .num_of_watch_points = 4,
> + .mqd_size_aligned = MQD_SIZE_ALIGNED,
> + .needs_iommu_device = false,
> + .supports_cwsr = true,
> + .needs_pci_atomics = false,
> + .num_sdma_engines = 2,
> + .num_xgmi_sdma_engines = 0,
> + .num_sdma_queues_per_engine = 8,
> +};
> +
>  static const struct kfd_device_info navi14_device_info = {
>   .asic_family = CHIP_NAVI14,
>   .asic_name = "navi14",
> @@ -425,6 +443,7 @@ static const struct kfd_device_info
> *kfd_supported_devices[][2] = {
>   [CHIP_RENOIR] = {&renoir_device_info, NULL},
>   [CHIP_ARCTURUS] = {&arcturus_device_info,
> &arcturus_device_info},
>   [CHIP_NAVI10] = {&navi10_device_info, NULL},
> + [CHIP_NAVI12] = {&navi12_device_info, &navi12_device_info},
>   [CHIP_NAVI14] = {&navi14_device_info, NULL},  };
> 
> --
> 2.7.4
> 
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH] drm/amd/display: prevent memory leak

2019-09-25 Thread Alex Deucher
On Wed, Sep 25, 2019 at 9:54 AM Harry Wentland  wrote:
>
>
>
> On 2019-09-25 12:23 a.m., Navid Emamdoost wrote:
> > In dcn*_create_resource_pool the allocated memory should be released if
> > construct pool fails.
> >
> > Signed-off-by: Navid Emamdoost 
>
> Reviewed-by: Harry Wentland 
>

Applied.  thanks!

Alex

> Harry
>
> > ---
> >  drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c | 1 +
> >  drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c | 1 +
> >  drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c | 1 +
> >  drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c | 1 +
> >  drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c   | 1 +
> >  5 files changed, 5 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c 
> > b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
> > index afc61055eca1..1787b9bf800a 100644
> > --- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
> > +++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
> > @@ -1091,6 +1091,7 @@ struct resource_pool *dce100_create_resource_pool(
> >   if (construct(num_virtual_links, dc, pool))
> >   return &pool->base;
> >
> > + kfree(pool);
> >   BREAK_TO_DEBUGGER();
> >   return NULL;
> >  }
> > diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c 
> > b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
> > index c66fe170e1e8..318e9c2e2ca8 100644
> > --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
> > +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
> > @@ -1462,6 +1462,7 @@ struct resource_pool *dce110_create_resource_pool(
> >   if (construct(num_virtual_links, dc, pool, asic_id))
> >   return &pool->base;
> >
> > + kfree(pool);
> >   BREAK_TO_DEBUGGER();
> >   return NULL;
> >  }
> > diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c 
> > b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
> > index 3ac4c7e73050..3199d493d13b 100644
> > --- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
> > +++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
> > @@ -1338,6 +1338,7 @@ struct resource_pool *dce112_create_resource_pool(
> >   if (construct(num_virtual_links, dc, pool))
> >   return &pool->base;
> >
> > + kfree(pool);
> >   BREAK_TO_DEBUGGER();
> >   return NULL;
> >  }
> > diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c 
> > b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
> > index 7d08154e9662..bb497f43f6eb 100644
> > --- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
> > +++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
> > @@ -1203,6 +1203,7 @@ struct resource_pool *dce120_create_resource_pool(
> >   if (construct(num_virtual_links, dc, pool))
> >   return &pool->base;
> >
> > + kfree(pool);
> >   BREAK_TO_DEBUGGER();
> >   return NULL;
> >  }
> > diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c 
> > b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
> > index 5a89e462e7cc..59305e411a66 100644
> > --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
> > +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
> > @@ -1570,6 +1570,7 @@ struct resource_pool *dcn10_create_resource_pool(
> >   if (construct(init_data->num_virtual_links, dc, pool))
> >   return &pool->base;
> >
> > + kfree(pool);
> >   BREAK_TO_DEBUGGER();
> >   return NULL;
> >  }
> >
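The error-path pattern the patch adds -- release the allocation when the
construct step fails -- can be reduced to a self-contained sketch. Here
demo_pool and construct() are hypothetical stand-ins for the DC
resource-pool types, not the real DC API:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical stand-in for dce100_resource_pool and friends. */
struct demo_pool {
	int base;
};

/* Mirrors the kernel convention here: truthy return means success. */
static bool construct(struct demo_pool *pool, bool should_fail)
{
	if (should_fail)
		return false;	/* construction failed, pool is unusable */
	pool->base = 1;
	return true;
}

/* On the failure path the caller still owns the allocation and must
 * free it, otherwise the memory leaks -- exactly the bug being fixed
 * by the added kfree(pool) calls. */
struct demo_pool *create_pool(bool should_fail)
{
	struct demo_pool *pool = calloc(1, sizeof(*pool));

	if (!pool)
		return NULL;
	if (construct(pool, should_fail))
		return pool;

	free(pool);	/* the added kfree(pool) in the patch */
	return NULL;
}
```

The same three-line shape (allocate, try to construct, free on failure)
is what each of the five dce/dcn hunks above adds.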
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH v2 11/11] drm/amdgpu: set TMZ bits in PTEs for secure BO (v4)

2019-09-25 Thread Alex Deucher
On Wed, Sep 25, 2019 at 9:59 AM Koenig, Christian
 wrote:
>
> Am 25.09.19 um 15:45 schrieb Huang, Ray:
> > From: Alex Deucher 
> >
> > If a buffer object is secure, i.e. created with
> > AMDGPU_GEM_CREATE_ENCRYPTED, then the TMZ bit of
> > the PTEs that belong the buffer object should be
> > set.
> >
> > v1: design and draft the skeletion of TMZ bits setting on PTEs (Alex)
> > v2: return failure once create secure BO on non-TMZ platform  (Ray)
> > v3: amdgpu_bo_encrypted() only checks the BO (Luben)
> > v4: move TMZ flag setting into amdgpu_vm_bo_update  (Christian)
> >
> > Signed-off-by: Alex Deucher 
> > Reviewed-by: Huang Rui 
> > Signed-off-by: Huang Rui 
> > Signed-off-by: Luben Tuikov 
> > Reviewed-by: Alex Deucher 
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 12 +++-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 11 +++
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c |  5 +
> >   3 files changed, 27 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > index 22eab74..5332104 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> > @@ -222,7 +222,8 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, 
> > void *data,
> > AMDGPU_GEM_CREATE_CPU_GTT_USWC |
> > AMDGPU_GEM_CREATE_VRAM_CLEARED |
> > AMDGPU_GEM_CREATE_VM_ALWAYS_VALID |
> > -   AMDGPU_GEM_CREATE_EXPLICIT_SYNC))
> > +   AMDGPU_GEM_CREATE_EXPLICIT_SYNC |
> > +   AMDGPU_GEM_CREATE_ENCRYPTED))
> >
> >   return -EINVAL;
> >
> > @@ -230,6 +231,11 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, 
> > void *data,
> >   if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
> >   return -EINVAL;
> >
> > + if (!adev->tmz.enabled && (flags & AMDGPU_GEM_CREATE_ENCRYPTED)) {
> > + DRM_ERROR("Cannot allocate secure buffer while tmz is 
> > disabled\n");
> > + return -EINVAL;
> > + }
> > +
> >   /* create a gem object to contain this object in */
> >   if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS |
> >   AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) {
> > @@ -251,6 +257,10 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, 
> > void *data,
> >   resv = vm->root.base.bo->tbo.resv;
> >   }
> >
> > + if (flags & AMDGPU_GEM_CREATE_ENCRYPTED) {
> > + /* XXX: pad out alignment to meet TMZ requirements */
> > + }
> > +
> >   r = amdgpu_gem_object_create(adev, size, args->in.alignment,
> >(u32)(0x & args->in.domains),
> >flags, ttm_bo_type_device, resv, &gobj);
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> > index 5a3c177..75c7392 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> > @@ -224,6 +224,17 @@ static inline bool amdgpu_bo_explicit_sync(struct 
> > amdgpu_bo *bo)
> >   return bo->flags & AMDGPU_GEM_CREATE_EXPLICIT_SYNC;
> >   }
> >
> > +/**
> > + * amdgpu_bo_encrypted - test if the BO is encrypted
> > + * @bo: pointer to a buffer object
> > + *
> > + * Return true if the buffer object is encrypted, false otherwise.
> > + */
> > +static inline bool amdgpu_bo_encrypted(struct amdgpu_bo *bo)
> > +{
> > + return bo->flags & AMDGPU_GEM_CREATE_ENCRYPTED;
> > +}
> > +
> >   bool amdgpu_bo_is_amdgpu_bo(struct ttm_buffer_object *bo);
> >   void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain);
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > index b285ab2..8e13b1fd3 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > @@ -1688,6 +1688,11 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
> >
> >   if (bo) {
> >   flags = amdgpu_ttm_tt_pte_flags(adev, bo->tbo.ttm, mem);
> > +
> > + if (amdgpu_bo_encrypted(bo)) {
> > + flags |= AMDGPU_PTE_TMZ;
> > + }
> > +
>
> You can drop the {} here, apart from that the patch is Reviewed-by:
> Christian König .
>
>  From the design it would be indeed nicer to have that in
> amdgpu_ttm_tt_pte_flags(), but we would need to make sure that this is
> not called any more from binding a tt.

Don't we need that for things like evictions of tmz buffers?  That
needs special handling in general, which I guess we can address at
that time.

Alex

>
> Regards,
> Christian.
>
> >   bo_adev = amdgpu_ttm_adev(bo->tbo.bdev);
> >   } else {
> >   flags = 0x0;
>
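The flag propagation under discussion, reduced to a stand-alone sketch.
The DEMO_* bit positions are made up for illustration and are not the
real AMDGPU_GEM_CREATE_ENCRYPTED / AMDGPU_PTE_TMZ values:

```c
#include <stdint.h>

#define DEMO_GEM_CREATE_ENCRYPTED (1u << 6)	/* hypothetical bit */
#define DEMO_PTE_TMZ              (1ull << 3)	/* hypothetical bit */

/* Hypothetical stand-in for struct amdgpu_bo. */
struct demo_bo {
	uint32_t flags;
};

static int demo_bo_encrypted(const struct demo_bo *bo)
{
	return (bo->flags & DEMO_GEM_CREATE_ENCRYPTED) != 0;
}

/* Mirror of the amdgpu_vm_bo_update() hunk: OR the TMZ bit into the
 * PTE flags whenever the backing BO was created encrypted. */
static uint64_t demo_pte_flags(const struct demo_bo *bo, uint64_t flags)
{
	if (demo_bo_encrypted(bo))
		flags |= DEMO_PTE_TMZ;
	return flags;
}
```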
> ___
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listi

Re: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side

2019-09-25 Thread Kuehling, Felix
You'll also need to add "case CHIP_NAVI12:" in a bunch of places. Grep 
for "CHIP_NAVI10" and you'll find them all pretty quickly.

Regards,
   Felix

On 2019-09-24 6:13 p.m., Liu, Shaoyun wrote:
> Add device info for both navi12 PF and VF
>
> Change-Id: Ifb4035e65c12d153fc30e593fe109f9c7e0541f4
> Signed-off-by: shaoyunl 
> ---
>   drivers/gpu/drm/amd/amdkfd/kfd_device.c | 19 +++
>   1 file changed, 19 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c 
> b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> index f329b82..edfbae5c 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
> @@ -387,6 +387,24 @@ static const struct kfd_device_info navi10_device_info = 
> {
>   .num_sdma_queues_per_engine = 8,
>   };
>   
> +static const struct kfd_device_info navi12_device_info = {
> + .asic_family = CHIP_NAVI12,
> + .asic_name = "navi12",
> + .max_pasid_bits = 16,
> + .max_no_of_hqd  = 24,
> + .doorbell_size  = 8,
> + .ih_ring_entry_size = 8 * sizeof(uint32_t),
> + .event_interrupt_class = &event_interrupt_class_v9,
> + .num_of_watch_points = 4,
> + .mqd_size_aligned = MQD_SIZE_ALIGNED,
> + .needs_iommu_device = false,
> + .supports_cwsr = true,
> + .needs_pci_atomics = false,
> + .num_sdma_engines = 2,
> + .num_xgmi_sdma_engines = 0,
> + .num_sdma_queues_per_engine = 8,
> +};
> +
>   static const struct kfd_device_info navi14_device_info = {
>   .asic_family = CHIP_NAVI14,
>   .asic_name = "navi14",
> @@ -425,6 +443,7 @@ static const struct kfd_device_info 
> *kfd_supported_devices[][2] = {
>   [CHIP_RENOIR] = {&renoir_device_info, NULL},
>   [CHIP_ARCTURUS] = {&arcturus_device_info, &arcturus_device_info},
>   [CHIP_NAVI10] = {&navi10_device_info, NULL},
> + [CHIP_NAVI12] = {&navi12_device_info, &navi12_device_info},
>   [CHIP_NAVI14] = {&navi14_device_info, NULL},
>   };
>   

Re: libdrm patch merge request

2019-09-25 Thread Michel Dänzer
On 2019-09-23 10:29 a.m., Chen, Guchun wrote:
> Hi Michel,
> 
> Can you help illustrate more about using MRs to process libdrm changes? Can 
> we use GitLab to merge the change from our local forked repository to the drm 
> master repository?

Yes. Anybody who has write access to master can merge an MR at the click
of a button, and the MR page contains all relevant information, in
particular the CI pipeline status.


-- 
Earthling Michel Dänzer   |   https://redhat.com
Libre software enthusiast | Mesa and X developer

RE: [PATCH v3 10/11] drm/amdgpu: job is secure iff CS is secure (v4)

2019-09-25 Thread Huang, Ray
> -Original Message-
> From: Koenig, Christian 
> Sent: Wednesday, September 25, 2019 10:47 PM
> To: Huang, Ray ; amd-gfx@lists.freedesktop.org; dri-
> de...@lists.freedesktop.org; Deucher, Alexander
> 
> Cc: Tuikov, Luben ; Liu, Aaron
> 
> Subject: Re: [PATCH v3 10/11] drm/amdgpu: job is secure iff CS is secure (v4)
> 
> Am 25.09.19 um 16:38 schrieb Huang, Ray:
> > Mark a job as secure, if and only if the command submission flag has
> > the secure flag set.
> >
> > v2: fix the null job pointer while in vmid 0 submission.
> > v3: Context --> Command submission.
> > v4: filling cs parser with cs->in.flags
> >
> > Signed-off-by: Huang Rui 
> > Co-developed-by: Luben Tuikov 
> > Signed-off-by: Luben Tuikov 
> > Reviewed-by: Alex Deucher 
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu.h |  3 +++
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 11 ++-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  |  4 ++--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  2 ++
> >   4 files changed, 17 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > index 697e8e5..fd60695 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > @@ -485,6 +485,9 @@ struct amdgpu_cs_parser {
> > uint64_tbytes_moved;
> > uint64_tbytes_moved_vis;
> >
> > +   /* secure cs */
> > +   boolis_secure;
> > +
> > /* user fence */
> > struct amdgpu_bo_list_entry uf_entry;
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > index 51f3db0..9038dc1 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > @@ -133,6 +133,14 @@ static int amdgpu_cs_parser_init(struct
> amdgpu_cs_parser *p, union drm_amdgpu_cs
> > goto free_chunk;
> > }
> >
> > +   /**
> > +* The command submission (cs) is a union, so an assignment to
> > +* 'out' is destructive to the cs (at least the first 8
> > +* bytes). For this reason, inquire about the flags before the
> > +* assignment to 'out'.
> > +*/
> > +   p->is_secure = cs->in.flags & AMDGPU_CS_FLAGS_SECURE;
> > +
> > /* get chunks */
> > chunk_array_user = u64_to_user_ptr(cs->in.chunks);
> > if (copy_from_user(chunk_array, chunk_array_user, @@ -1252,8
> > +1260,9 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
> > p->ctx->preamble_presented = true;
> > }
> >
> > -   cs->out.handle = seq;
> > +   job->secure = p->is_secure;
> > job->uf_sequence = seq;
> > +   cs->out.handle = seq;
> 
> At least it is no longer accessing cs->in, but that still looks like the 
> wrong place
> to initialize the job.
> 
> Why can't we fill that in directly after amdgpu_job_alloc() ?

There is no secure-related input member in amdgpu_job_alloc(), unless we add 
one:
 
amdgpu_job_alloc(adev, num_ibs, job, vm, secure)

That looks like too much, doesn't it?
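A rough sketch of what that extended signature would look like. The
demo_* names and the stripped-down types are hypothetical -- the real
amdgpu_job_alloc() also takes adev and a vm pointer:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Simplified stand-in for struct amdgpu_job. */
struct demo_job {
	bool secure;
};

/* Variant of the allocator with the extra 'secure' parameter being
 * discussed: the job is tagged at allocation time instead of later,
 * in amdgpu_cs_submit(). */
static int demo_job_alloc(unsigned int num_ibs, struct demo_job **job,
			  bool secure)
{
	if (num_ibs == 0)
		return -1;
	*job = calloc(1, sizeof(**job));
	if (!*job)
		return -1;
	(*job)->secure = secure;
	return 0;
}
```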

Thanks,
Ray

> 
> Regards,
> Christian.
> 
> >
> > amdgpu_job_free_resources(job);
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > index e1dc229..cb9b650 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > @@ -210,7 +210,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring,
> unsigned num_ibs,
> > if (job && ring->funcs->emit_cntxcntl) {
> > status |= job->preamble_status;
> > status |= job->preemption_status;
> > -   amdgpu_ring_emit_cntxcntl(ring, status, false);
> > +   amdgpu_ring_emit_cntxcntl(ring, status, job->secure);
> > }
> >
> > for (i = 0; i < num_ibs; ++i) {
> > @@ -229,7 +229,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring,
> unsigned num_ibs,
> > }
> >
> > if (ring->funcs->emit_tmz)
> > -   amdgpu_ring_emit_tmz(ring, false, false);
> > +   amdgpu_ring_emit_tmz(ring, false, job ? job->secure : false);
> >
> >   #ifdef CONFIG_X86_64
> > if (!(adev->flags & AMD_IS_APU))
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> > index dc7ee93..aa0e375 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> > @@ -63,6 +63,8 @@ struct amdgpu_job {
> > uint64_tuf_addr;
> > uint64_tuf_sequence;
> >
> > +   /* the job is due to a secure command submission */
> > +   boolsecure;
> >   };
> >
> >   int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,


Re: [PATCH v3 10/11] drm/amdgpu: job is secure iff CS is secure (v4)

2019-09-25 Thread Koenig, Christian
Am 25.09.19 um 16:38 schrieb Huang, Ray:
> Mark a job as secure, if and only if the command
> submission flag has the secure flag set.
>
> v2: fix the null job pointer while in vmid 0
> submission.
> v3: Context --> Command submission.
> v4: filling cs parser with cs->in.flags
>
> Signed-off-by: Huang Rui 
> Co-developed-by: Luben Tuikov 
> Signed-off-by: Luben Tuikov 
> Reviewed-by: Alex Deucher 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h |  3 +++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 11 ++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  |  4 ++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  2 ++
>   4 files changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 697e8e5..fd60695 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -485,6 +485,9 @@ struct amdgpu_cs_parser {
>   uint64_tbytes_moved;
>   uint64_tbytes_moved_vis;
>   
> + /* secure cs */
> + boolis_secure;
> +
>   /* user fence */
>   struct amdgpu_bo_list_entry uf_entry;
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index 51f3db0..9038dc1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -133,6 +133,14 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser 
> *p, union drm_amdgpu_cs
>   goto free_chunk;
>   }
>   
> + /**
> +  * The command submission (cs) is a union, so an assignment to
> +  * 'out' is destructive to the cs (at least the first 8
> +  * bytes). For this reason, inquire about the flags before the
> +  * assignment to 'out'.
> +  */
> + p->is_secure = cs->in.flags & AMDGPU_CS_FLAGS_SECURE;
> +
>   /* get chunks */
>   chunk_array_user = u64_to_user_ptr(cs->in.chunks);
>   if (copy_from_user(chunk_array, chunk_array_user,
> @@ -1252,8 +1260,9 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>   p->ctx->preamble_presented = true;
>   }
>   
> - cs->out.handle = seq;
> + job->secure = p->is_secure;
>   job->uf_sequence = seq;
> + cs->out.handle = seq;

At least it is no longer accessing cs->in, but that still looks like the 
wrong place to initialize the job.

Why can't we fill that in directly after amdgpu_job_alloc() ?

Regards,
Christian.

>   
>   amdgpu_job_free_resources(job);
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> index e1dc229..cb9b650 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> @@ -210,7 +210,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
> num_ibs,
>   if (job && ring->funcs->emit_cntxcntl) {
>   status |= job->preamble_status;
>   status |= job->preemption_status;
> - amdgpu_ring_emit_cntxcntl(ring, status, false);
> + amdgpu_ring_emit_cntxcntl(ring, status, job->secure);
>   }
>   
>   for (i = 0; i < num_ibs; ++i) {
> @@ -229,7 +229,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
> num_ibs,
>   }
>   
>   if (ring->funcs->emit_tmz)
> - amdgpu_ring_emit_tmz(ring, false, false);
> + amdgpu_ring_emit_tmz(ring, false, job ? job->secure : false);
>   
>   #ifdef CONFIG_X86_64
>   if (!(adev->flags & AMD_IS_APU))
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index dc7ee93..aa0e375 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -63,6 +63,8 @@ struct amdgpu_job {
>   uint64_tuf_addr;
>   uint64_tuf_sequence;
>   
> + /* the job is due to a secure command submission */
> + boolsecure;
>   };
>   
>   int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,


RE: [PATCH] drm/amdgpu: fix error handling in amdgpu_bo_list_create

2019-09-25 Thread Huang, Ray
> -Original Message-
> From: amd-gfx  On Behalf Of
> Christian K?nig
> Sent: Thursday, September 19, 2019 1:43 AM
> To: amd-gfx@lists.freedesktop.org
> Subject: [PATCH] drm/amdgpu: fix error handling in amdgpu_bo_list_create
> 
> We need to drop normal and userptr BOs separately.
> 
> Signed-off-by: Christian König 

Acked-by: Huang Rui 

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c | 7 ++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> index d497467b7fc6..94908bf269a6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> @@ -139,7 +139,12 @@ int amdgpu_bo_list_create(struct amdgpu_device
> *adev, struct drm_file *filp,
>   return 0;
> 
>  error_free:
> - while (i--) {
> + for (i = 0; i < last_entry; ++i) {
> + struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo);
> +
> + amdgpu_bo_unref(&bo);
> + }
> + for (i = first_userptr; i < num_entries; ++i) {
>   struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo);
> 
>   amdgpu_bo_unref(&bo);
> --
> 2.17.1
> 
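The shape of the fix -- unwinding the two independently filled ranges
instead of a single reverse loop from the failure point -- as a
self-contained sketch. The integer refcount array is a hypothetical
stand-in for the BO entry array:

```c
/* Entries [0, last_entry) hold normal BOs and entries
 * [first_userptr, num_entries) hold userptr BOs; a single
 * `while (i--)` from wherever the error hit would miss whichever
 * range 'i' was not currently walking. */
static void unwind(int *refs, int last_entry, int first_userptr,
		   int num_entries)
{
	int i;

	for (i = 0; i < last_entry; ++i)
		refs[i]--;		/* amdgpu_bo_unref() on normal BOs */
	for (i = first_userptr; i < num_entries; ++i)
		refs[i]--;		/* amdgpu_bo_unref() on userptr BOs */
}
```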

[PATCH v3 10/11] drm/amdgpu: job is secure iff CS is secure (v4)

2019-09-25 Thread Huang, Ray
Mark a job as secure, if and only if the command
submission flag has the secure flag set.

v2: fix the null job pointer while in vmid 0
submission.
v3: Context --> Command submission.
v4: filling cs parser with cs->in.flags

Signed-off-by: Huang Rui 
Co-developed-by: Luben Tuikov 
Signed-off-by: Luben Tuikov 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h |  3 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 11 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  |  4 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  2 ++
 4 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 697e8e5..fd60695 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -485,6 +485,9 @@ struct amdgpu_cs_parser {
uint64_tbytes_moved;
uint64_tbytes_moved_vis;
 
+   /* secure cs */
+   boolis_secure;
+
/* user fence */
struct amdgpu_bo_list_entry uf_entry;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 51f3db0..9038dc1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -133,6 +133,14 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser 
*p, union drm_amdgpu_cs
goto free_chunk;
}
 
+   /**
+* The command submission (cs) is a union, so an assignment to
+* 'out' is destructive to the cs (at least the first 8
+* bytes). For this reason, inquire about the flags before the
+* assignment to 'out'.
+*/
+   p->is_secure = cs->in.flags & AMDGPU_CS_FLAGS_SECURE;
+
/* get chunks */
chunk_array_user = u64_to_user_ptr(cs->in.chunks);
if (copy_from_user(chunk_array, chunk_array_user,
@@ -1252,8 +1260,9 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
p->ctx->preamble_presented = true;
}
 
-   cs->out.handle = seq;
+   job->secure = p->is_secure;
job->uf_sequence = seq;
+   cs->out.handle = seq;
 
amdgpu_job_free_resources(job);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index e1dc229..cb9b650 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -210,7 +210,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
if (job && ring->funcs->emit_cntxcntl) {
status |= job->preamble_status;
status |= job->preemption_status;
-   amdgpu_ring_emit_cntxcntl(ring, status, false);
+   amdgpu_ring_emit_cntxcntl(ring, status, job->secure);
}
 
for (i = 0; i < num_ibs; ++i) {
@@ -229,7 +229,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
}
 
if (ring->funcs->emit_tmz)
-   amdgpu_ring_emit_tmz(ring, false, false);
+   amdgpu_ring_emit_tmz(ring, false, job ? job->secure : false);
 
 #ifdef CONFIG_X86_64
if (!(adev->flags & AMD_IS_APU))
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index dc7ee93..aa0e375 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -63,6 +63,8 @@ struct amdgpu_job {
uint64_tuf_addr;
uint64_tuf_sequence;
 
+   /* the job is due to a secure command submission */
+   boolsecure;
 };
 
 int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
-- 
2.7.4
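The union hazard the patch comment documents can be reduced to a
self-contained sketch. The field layout and the flag value below are
hypothetical -- they mirror only the spirit of drm_amdgpu_cs, not its
real definition:

```c
#include <stdint.h>

/* Hypothetical stand-in for the drm_amdgpu_cs union: 'in' and 'out'
 * share the same storage, so an assignment to 'out' clobbers 'in'. */
union demo_cs {
	struct { uint32_t flags; uint64_t chunks; } in;
	struct { uint64_t handle; } out;
};

#define DEMO_CS_FLAGS_SECURE 0x2u	/* made-up bit for illustration */

/* Correct order: latch the input flag first, then write the output
 * handle; reversing the two would read a clobbered 'in.flags'. */
static int demo_submit(union demo_cs *cs, uint64_t seq)
{
	int is_secure = (cs->in.flags & DEMO_CS_FLAGS_SECURE) != 0;

	cs->out.handle = seq;	/* destroys the first 8 bytes of 'in' */
	return is_secure;
}
```

This is why the patch reads cs->in.flags in amdgpu_cs_parser_init() and
only sets cs->out.handle at the very end of amdgpu_cs_submit().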


Re: [PATCH] drm/amdgpu: fix potential VM faults

2019-09-25 Thread Christian König

Hi Monk,

this patch doesn't prevent PD/PT eviction.

The intention of the code here was that per-VM BOs can evict other 
per-VM BOs during allocation.


The problem my patch fixes is that this unfortunately also meant that 
allocating PDs/PTs could evict other PDs/PTs from the same process.


Regards,
Christian.

Am 25.09.19 um 15:51 schrieb Liu, Monk:

Hi Christian

Theoretically the VM PT/PD should be allowed to be evicted like other BOs.

If you encountered a page fault that could be avoided by this patch, that means 
there is a bug in the VM/TTM system, and your patch simply works around the 
root cause.

_
Monk Liu | GPU Virtualization Team | AMD


-Original Message-
From: amd-gfx  On Behalf Of Christian 
K?nig
Sent: Thursday, September 19, 2019 4:42 PM
To: amd-gfx@lists.freedesktop.org
Subject: [PATCH] drm/amdgpu: fix potential VM faults

When we allocate new page tables under memory pressure we should not evict old 
ones.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 70d45d48907a..8e44ecaada35 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -514,7 +514,8 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
.interruptible = (bp->type != ttm_bo_type_kernel),
.no_wait_gpu = bp->no_wait_gpu,
.resv = bp->resv,
-   .flags = TTM_OPT_FLAG_ALLOW_RES_EVICT
+   .flags = bp->type != ttm_bo_type_kernel ?
+   TTM_OPT_FLAG_ALLOW_RES_EVICT : 0
};
struct amdgpu_bo *bo;
unsigned long page_align, size = bp->size;
--
2.17.1


Re: [PATCH v2 11/11] drm/amdgpu: set TMZ bits in PTEs for secure BO (v4)

2019-09-25 Thread Koenig, Christian
Am 25.09.19 um 15:45 schrieb Huang, Ray:
> From: Alex Deucher 
>
> If a buffer object is secure, i.e. created with
> AMDGPU_GEM_CREATE_ENCRYPTED, then the TMZ bit of
> the PTEs that belong the buffer object should be
> set.
>
> v1: design and draft the skeletion of TMZ bits setting on PTEs (Alex)
> v2: return failure once create secure BO on non-TMZ platform  (Ray)
> v3: amdgpu_bo_encrypted() only checks the BO (Luben)
> v4: move TMZ flag setting into amdgpu_vm_bo_update  (Christian)
>
> Signed-off-by: Alex Deucher 
> Reviewed-by: Huang Rui 
> Signed-off-by: Huang Rui 
> Signed-off-by: Luben Tuikov 
> Reviewed-by: Alex Deucher 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 12 +++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 11 +++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c |  5 +
>   3 files changed, 27 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index 22eab74..5332104 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -222,7 +222,8 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
> AMDGPU_GEM_CREATE_CPU_GTT_USWC |
> AMDGPU_GEM_CREATE_VRAM_CLEARED |
> AMDGPU_GEM_CREATE_VM_ALWAYS_VALID |
> -   AMDGPU_GEM_CREATE_EXPLICIT_SYNC))
> +   AMDGPU_GEM_CREATE_EXPLICIT_SYNC |
> +   AMDGPU_GEM_CREATE_ENCRYPTED))
>   
>   return -EINVAL;
>   
> @@ -230,6 +231,11 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
>   if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
>   return -EINVAL;
>   
> + if (!adev->tmz.enabled && (flags & AMDGPU_GEM_CREATE_ENCRYPTED)) {
> + DRM_ERROR("Cannot allocate secure buffer while tmz is 
> disabled\n");
> + return -EINVAL;
> + }
> +
>   /* create a gem object to contain this object in */
>   if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS |
>   AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) {
> @@ -251,6 +257,10 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
> *data,
>   resv = vm->root.base.bo->tbo.resv;
>   }
>   
> + if (flags & AMDGPU_GEM_CREATE_ENCRYPTED) {
> + /* XXX: pad out alignment to meet TMZ requirements */
> + }
> +
>   r = amdgpu_gem_object_create(adev, size, args->in.alignment,
>(u32)(0x & args->in.domains),
>flags, ttm_bo_type_device, resv, &gobj);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> index 5a3c177..75c7392 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> @@ -224,6 +224,17 @@ static inline bool amdgpu_bo_explicit_sync(struct 
> amdgpu_bo *bo)
>   return bo->flags & AMDGPU_GEM_CREATE_EXPLICIT_SYNC;
>   }
>   
> +/**
> + * amdgpu_bo_encrypted - test if the BO is encrypted
> + * @bo: pointer to a buffer object
> + *
> + * Return true if the buffer object is encrypted, false otherwise.
> + */
> +static inline bool amdgpu_bo_encrypted(struct amdgpu_bo *bo)
> +{
> + return bo->flags & AMDGPU_GEM_CREATE_ENCRYPTED;
> +}
> +
>   bool amdgpu_bo_is_amdgpu_bo(struct ttm_buffer_object *bo);
>   void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain);
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index b285ab2..8e13b1fd3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -1688,6 +1688,11 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
>   
>   if (bo) {
>   flags = amdgpu_ttm_tt_pte_flags(adev, bo->tbo.ttm, mem);
> +
> + if (amdgpu_bo_encrypted(bo)) {
> + flags |= AMDGPU_PTE_TMZ;
> + }
> +

You can drop the {} here, apart from that the patch is Reviewed-by: 
Christian König .

 From a design point of view it would indeed be nicer to have that in 
amdgpu_ttm_tt_pte_flags(), but we would need to make sure that it is 
no longer called when binding a tt.

Regards,
Christian.

>   bo_adev = amdgpu_ttm_adev(bo->tbo.bdev);
>   } else {
>   flags = 0x0;


Re: [PATCH v2 10/11] drm/amdgpu: job is secure iff CS is secure (v3)

2019-09-25 Thread Koenig, Christian
Am 25.09.19 um 15:45 schrieb Huang, Ray:
> Mark a job as secure if and only if the command
> submission flags have the secure flag set.
>
> v2: fix the null job pointer while in vmid 0
> submission.
> v3: Context --> Command submission.
>
> Signed-off-by: Huang Rui 
> Co-developed-by: Luben Tuikov 
> Signed-off-by: Luben Tuikov 
> Reviewed-by: Alex Deucher 
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 8 +++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  | 4 ++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 2 ++
>   3 files changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index 51f3db0..0077bb3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -1252,8 +1252,14 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>   p->ctx->preamble_presented = true;
>   }
>   
> - cs->out.handle = seq;
> + /* The command submission (cs) is a union, so an assignment to
> +  * 'out' is destructive to the cs (at least the first 8
> +  * bytes). For this reason, inquire about the flags before the
> +  * assignment to 'out'.
> +  */
> + job->secure = cs->in.flags & AMDGPU_CS_FLAGS_SECURE;

NAK, accessing cs->in.flags in the submission function is a no-go here.

You need to fill those things up during job creation, see 
amdgpu_cs_parser_init().

Regards,
Christian.
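Christian's objection aside, the comment in the patch about the destructive union write is worth illustrating. The sketch below uses hypothetical miniature types, not the real drm_amdgpu_cs layout, to show why in.flags must be read before out.handle is assigned:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical miniature of an in/out ioctl union: 'in' and 'out'
 * share the same storage, so writing out.handle clobbers in.flags. */
union cs_args {
	struct {
		uint64_t flags;   /* read by the kernel before submission */
		uint64_t ctx_id;
	} in;
	struct {
		uint64_t handle;  /* written back to user space */
	} out;
};

/* Correct order: inquire about the flags first, then do the
 * destructive write to 'out'. */
static uint64_t submit_correct(union cs_args *cs, uint64_t seq)
{
	uint64_t secure = cs->in.flags & 0x1;

	cs->out.handle = seq;
	return secure;
}

/* Buggy order: reading in.flags after the write returns whatever the
 * sequence number left in the shared bytes. */
static uint64_t submit_buggy(union cs_args *cs, uint64_t seq)
{
	cs->out.handle = seq;
	return cs->in.flags & 0x1;
}
```

With flags = 1 and seq = 42, submit_correct() still sees the secure bit, while submit_buggy() reads 42 & 1 = 0 from the clobbered storage.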

>   job->uf_sequence = seq;
> + cs->out.handle = seq;
>   
>   amdgpu_job_free_resources(job);
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> index e1dc229..cb9b650 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> @@ -210,7 +210,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
> num_ibs,
>   if (job && ring->funcs->emit_cntxcntl) {
>   status |= job->preamble_status;
>   status |= job->preemption_status;
> - amdgpu_ring_emit_cntxcntl(ring, status, false);
> + amdgpu_ring_emit_cntxcntl(ring, status, job->secure);
>   }
>   
>   for (i = 0; i < num_ibs; ++i) {
> @@ -229,7 +229,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
> num_ibs,
>   }
>   
>   if (ring->funcs->emit_tmz)
> - amdgpu_ring_emit_tmz(ring, false, false);
> + amdgpu_ring_emit_tmz(ring, false, job ? job->secure : false);
>   
>   #ifdef CONFIG_X86_64
>   if (!(adev->flags & AMD_IS_APU))
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index dc7ee93..aa0e375 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -63,6 +63,8 @@ struct amdgpu_job {
>   uint64_tuf_addr;
>   uint64_tuf_sequence;
>   
> + /* the job is due to a secure command submission */
> + boolsecure;
>   };
>   
>   int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,


Re: [PATCH] drm/amd/display: prevent memory leak

2019-09-25 Thread Harry Wentland


On 2019-09-25 12:23 a.m., Navid Emamdoost wrote:
> In dcn*_create_resource_pool the allocated memory should be released if
> constructing the pool fails.
> 
> Signed-off-by: Navid Emamdoost 

Reviewed-by: Harry Wentland 

Harry
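
The pattern the patch restores can be sketched in isolation (hypothetical stand-in types and a simulated construct() helper, not the real DC code): allocate, attempt construction, and free the allocation on the failure path instead of leaking it.

```c
#include <stdbool.h>
#include <stdlib.h>

struct pool { int num_links; };

/* Stand-in for the dce/dcn construct() helpers: returns true on
 * success, false on a (simulated) construction failure. */
static bool construct(struct pool *p, int num_links)
{
	if (num_links < 0)
		return false;
	p->num_links = num_links;
	return true;
}

/* The fixed pattern: the allocation is freed when construct() fails,
 * instead of leaking as the pre-patch code did. */
static struct pool *create_pool(int num_links)
{
	struct pool *p = calloc(1, sizeof(*p));

	if (!p)
		return NULL;

	if (construct(p, num_links))
		return p;

	free(p);        /* mirrors the kfree(pool) added by the patch */
	return NULL;
}
```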

> ---
>  drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c | 1 +
>  drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c | 1 +
>  drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c | 1 +
>  drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c | 1 +
>  drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c   | 1 +
>  5 files changed, 5 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c 
> b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
> index afc61055eca1..1787b9bf800a 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
> @@ -1091,6 +1091,7 @@ struct resource_pool *dce100_create_resource_pool(
>   if (construct(num_virtual_links, dc, pool))
>   return &pool->base;
>  
> + kfree(pool);
>   BREAK_TO_DEBUGGER();
>   return NULL;
>  }
> diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c 
> b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
> index c66fe170e1e8..318e9c2e2ca8 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
> @@ -1462,6 +1462,7 @@ struct resource_pool *dce110_create_resource_pool(
>   if (construct(num_virtual_links, dc, pool, asic_id))
>   return &pool->base;
>  
> + kfree(pool);
>   BREAK_TO_DEBUGGER();
>   return NULL;
>  }
> diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c 
> b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
> index 3ac4c7e73050..3199d493d13b 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
> @@ -1338,6 +1338,7 @@ struct resource_pool *dce112_create_resource_pool(
>   if (construct(num_virtual_links, dc, pool))
>   return &pool->base;
>  
> + kfree(pool);
>   BREAK_TO_DEBUGGER();
>   return NULL;
>  }
> diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c 
> b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
> index 7d08154e9662..bb497f43f6eb 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
> @@ -1203,6 +1203,7 @@ struct resource_pool *dce120_create_resource_pool(
>   if (construct(num_virtual_links, dc, pool))
>   return &pool->base;
>  
> + kfree(pool);
>   BREAK_TO_DEBUGGER();
>   return NULL;
>  }
> diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c 
> b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
> index 5a89e462e7cc..59305e411a66 100644
> --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
> @@ -1570,6 +1570,7 @@ struct resource_pool *dcn10_create_resource_pool(
>   if (construct(init_data->num_virtual_links, dc, pool))
>   return &pool->base;
>  
> + kfree(pool);
>   BREAK_TO_DEBUGGER();
>   return NULL;
>  }
> 


RE: [PATCH] drm/amdgpu: fix potential VM faults

2019-09-25 Thread Liu, Monk
Hi Christian

Theoretically the VM PT/PD BOs should be allowed to be evicted like other BOs.

If you encountered a page fault that can be avoided by this patch, that means
there is a bug in the VM/TTM system, and your patch simply works around the
root cause.

_
Monk Liu|GPU Virtualization Team |AMD


-Original Message-
From: amd-gfx  On Behalf Of Christian 
König
Sent: Thursday, September 19, 2019 4:42 PM
To: amd-gfx@lists.freedesktop.org
Subject: [PATCH] drm/amdgpu: fix potential VM faults

When we allocate new page tables under memory pressure we should not evict old 
ones.

Signed-off-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 70d45d48907a..8e44ecaada35 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -514,7 +514,8 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
.interruptible = (bp->type != ttm_bo_type_kernel),
.no_wait_gpu = bp->no_wait_gpu,
.resv = bp->resv,
-   .flags = TTM_OPT_FLAG_ALLOW_RES_EVICT
+   .flags = bp->type != ttm_bo_type_kernel ?
+   TTM_OPT_FLAG_ALLOW_RES_EVICT : 0
};
struct amdgpu_bo *bo;
unsigned long page_align, size = bp->size;
--
2.17.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH v2 09/11] drm/amdgpu: expand the context control interface with trust flag

2019-09-25 Thread Huang, Ray
This patch expands the context control function with a trusted flag, which is
used when we want to submit the command buffer in trusted mode.

Signed-off-by: Huang Rui 
Reviewed-by: Alex Deucher 
Acked-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c   | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 5 +++--
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 4 +++-
 drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c| 3 ++-
 drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c| 3 ++-
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c| 3 ++-
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c| 5 +++--
 7 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 54741ba..e1dc229 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -210,7 +210,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
if (job && ring->funcs->emit_cntxcntl) {
status |= job->preamble_status;
status |= job->preemption_status;
-   amdgpu_ring_emit_cntxcntl(ring, status);
+   amdgpu_ring_emit_cntxcntl(ring, status, false);
}
 
for (i = 0; i < num_ibs; ++i) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 34aa63a..5134d0d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -158,7 +158,8 @@ struct amdgpu_ring_funcs {
void (*begin_use)(struct amdgpu_ring *ring);
void (*end_use)(struct amdgpu_ring *ring);
void (*emit_switch_buffer) (struct amdgpu_ring *ring);
-   void (*emit_cntxcntl) (struct amdgpu_ring *ring, uint32_t flags);
+   void (*emit_cntxcntl) (struct amdgpu_ring *ring, uint32_t flags,
+  bool trusted);
void (*emit_rreg)(struct amdgpu_ring *ring, uint32_t reg);
void (*emit_wreg)(struct amdgpu_ring *ring, uint32_t reg, uint32_t val);
void (*emit_reg_wait)(struct amdgpu_ring *ring, uint32_t reg,
@@ -242,7 +243,7 @@ struct amdgpu_ring {
 #define amdgpu_ring_emit_gds_switch(r, v, db, ds, wb, ws, ab, as) 
(r)->funcs->emit_gds_switch((r), (v), (db), (ds), (wb), (ws), (ab), (as))
 #define amdgpu_ring_emit_hdp_flush(r) (r)->funcs->emit_hdp_flush((r))
 #define amdgpu_ring_emit_switch_buffer(r) (r)->funcs->emit_switch_buffer((r))
-#define amdgpu_ring_emit_cntxcntl(r, d) (r)->funcs->emit_cntxcntl((r), (d))
+#define amdgpu_ring_emit_cntxcntl(r, d, s) (r)->funcs->emit_cntxcntl((r), (d), 
(s))
 #define amdgpu_ring_emit_rreg(r, d) (r)->funcs->emit_rreg((r), (d))
 #define amdgpu_ring_emit_wreg(r, d, v) (r)->funcs->emit_wreg((r), (d), (v))
 #define amdgpu_ring_emit_reg_wait(r, d, v, m) (r)->funcs->emit_reg_wait((r), 
(d), (v), (m))
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 18f741b..06698c2 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -4514,7 +4514,9 @@ static void gfx_v10_0_ring_emit_sb(struct amdgpu_ring 
*ring)
amdgpu_ring_write(ring, 0);
 }
 
-static void gfx_v10_0_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t 
flags)
+static void gfx_v10_0_ring_emit_cntxcntl(struct amdgpu_ring *ring,
+uint32_t flags,
+bool trusted)
 {
uint32_t dw2 = 0;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
index 8c27c30..b4af1b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
@@ -2972,7 +2972,8 @@ static uint64_t gfx_v6_0_get_gpu_clock_counter(struct 
amdgpu_device *adev)
return clock;
 }
 
-static void gfx_v6_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags)
+static void gfx_v6_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags,
+ bool trusted)
 {
if (flags & AMDGPU_HAVE_CTX_SWITCH)
gfx_v6_0_ring_emit_vgt_flush(ring);
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
index 48796b68..c08f5c5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
@@ -2309,7 +2309,8 @@ static void gfx_v7_0_ring_emit_ib_compute(struct 
amdgpu_ring *ring,
amdgpu_ring_write(ring, control);
 }
 
-static void gfx_v7_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags)
+static void gfx_v7_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags,
+ bool trusted)
 {
uint32_t dw2 = 0;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index 98e5aa8..d3a23fd 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -6393,7 +6393,8 @@ static void gfx_v8_ring_emit_sb(struct amdgpu_ring *ring)
amd

[PATCH v2 11/11] drm/amdgpu: set TMZ bits in PTEs for secure BO (v4)

2019-09-25 Thread Huang, Ray
From: Alex Deucher 

If a buffer object is secure, i.e. created with
AMDGPU_GEM_CREATE_ENCRYPTED, then the TMZ bit of
the PTEs that belong the buffer object should be
set.

v1: design and draft the skeleton of setting the TMZ bits on PTEs (Alex)
v2: return failure when creating a secure BO on a non-TMZ platform (Ray)
v3: amdgpu_bo_encrypted() only checks the BO (Luben)
v4: move TMZ flag setting into amdgpu_vm_bo_update  (Christian)

Signed-off-by: Alex Deucher 
Reviewed-by: Huang Rui 
Signed-off-by: Huang Rui 
Signed-off-by: Luben Tuikov 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 12 +++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 11 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c |  5 +
 3 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 22eab74..5332104 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -222,7 +222,8 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
*data,
  AMDGPU_GEM_CREATE_CPU_GTT_USWC |
  AMDGPU_GEM_CREATE_VRAM_CLEARED |
  AMDGPU_GEM_CREATE_VM_ALWAYS_VALID |
- AMDGPU_GEM_CREATE_EXPLICIT_SYNC))
+ AMDGPU_GEM_CREATE_EXPLICIT_SYNC |
+ AMDGPU_GEM_CREATE_ENCRYPTED))
 
return -EINVAL;
 
@@ -230,6 +231,11 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
*data,
if (args->in.domains & ~AMDGPU_GEM_DOMAIN_MASK)
return -EINVAL;
 
+   if (!adev->tmz.enabled && (flags & AMDGPU_GEM_CREATE_ENCRYPTED)) {
+   DRM_ERROR("Cannot allocate secure buffer while tmz is 
disabled\n");
+   return -EINVAL;
+   }
+
/* create a gem object to contain this object in */
if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS |
AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) {
@@ -251,6 +257,10 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void 
*data,
resv = vm->root.base.bo->tbo.resv;
}
 
+   if (flags & AMDGPU_GEM_CREATE_ENCRYPTED) {
+   /* XXX: pad out alignment to meet TMZ requirements */
+   }
+
r = amdgpu_gem_object_create(adev, size, args->in.alignment,
 (u32)(0x & args->in.domains),
 flags, ttm_bo_type_device, resv, &gobj);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 5a3c177..75c7392 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -224,6 +224,17 @@ static inline bool amdgpu_bo_explicit_sync(struct 
amdgpu_bo *bo)
return bo->flags & AMDGPU_GEM_CREATE_EXPLICIT_SYNC;
 }
 
+/**
+ * amdgpu_bo_encrypted - test if the BO is encrypted
+ * @bo: pointer to a buffer object
+ *
+ * Return true if the buffer object is encrypted, false otherwise.
+ */
+static inline bool amdgpu_bo_encrypted(struct amdgpu_bo *bo)
+{
+   return bo->flags & AMDGPU_GEM_CREATE_ENCRYPTED;
+}
+
 bool amdgpu_bo_is_amdgpu_bo(struct ttm_buffer_object *bo);
 void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index b285ab2..8e13b1fd3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1688,6 +1688,11 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
 
if (bo) {
flags = amdgpu_ttm_tt_pte_flags(adev, bo->tbo.ttm, mem);
+
+   if (amdgpu_bo_encrypted(bo)) {
+   flags |= AMDGPU_PTE_TMZ;
+   }
+
bo_adev = amdgpu_ttm_adev(bo->tbo.bdev);
} else {
flags = 0x0;
-- 
2.7.4


[PATCH v2 10/11] drm/amdgpu: job is secure iff CS is secure (v3)

2019-09-25 Thread Huang, Ray
Mark a job as secure if and only if the command
submission flags have the secure flag set.

v2: fix the null job pointer while in vmid 0
submission.
v3: Context --> Command submission.

Signed-off-by: Huang Rui 
Co-developed-by: Luben Tuikov 
Signed-off-by: Luben Tuikov 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 8 +++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  | 4 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 2 ++
 3 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 51f3db0..0077bb3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1252,8 +1252,14 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
p->ctx->preamble_presented = true;
}
 
-   cs->out.handle = seq;
+   /* The command submission (cs) is a union, so an assignment to
+* 'out' is destructive to the cs (at least the first 8
+* bytes). For this reason, inquire about the flags before the
+* assignment to 'out'.
+*/
+   job->secure = cs->in.flags & AMDGPU_CS_FLAGS_SECURE;
job->uf_sequence = seq;
+   cs->out.handle = seq;
 
amdgpu_job_free_resources(job);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index e1dc229..cb9b650 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -210,7 +210,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
if (job && ring->funcs->emit_cntxcntl) {
status |= job->preamble_status;
status |= job->preemption_status;
-   amdgpu_ring_emit_cntxcntl(ring, status, false);
+   amdgpu_ring_emit_cntxcntl(ring, status, job->secure);
}
 
for (i = 0; i < num_ibs; ++i) {
@@ -229,7 +229,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
}
 
if (ring->funcs->emit_tmz)
-   amdgpu_ring_emit_tmz(ring, false, false);
+   amdgpu_ring_emit_tmz(ring, false, job ? job->secure : false);
 
 #ifdef CONFIG_X86_64
if (!(adev->flags & AMD_IS_APU))
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index dc7ee93..aa0e375 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -63,6 +63,8 @@ struct amdgpu_job {
uint64_tuf_addr;
uint64_tuf_sequence;
 
+   /* the job is due to a secure command submission */
+   boolsecure;
 };
 
 int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
-- 
2.7.4


[PATCH v2 07/11] drm/amdgpu: add tmz bit in frame control packet

2019-09-25 Thread Huang, Ray
This patch adds the TMZ bit to the FRAME_CONTROL PM4 packet; it will be used in the future.

Signed-off-by: Huang Rui 
Reviewed-by: Alex Deucher 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/nvd.h| 1 +
 drivers/gpu/drm/amd/amdgpu/soc15d.h | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/nvd.h b/drivers/gpu/drm/amd/amdgpu/nvd.h
index 1de9846..f3d8771 100644
--- a/drivers/gpu/drm/amd/amdgpu/nvd.h
+++ b/drivers/gpu/drm/amd/amdgpu/nvd.h
@@ -306,6 +306,7 @@
 #definePACKET3_GET_LOD_STATS   0x8E
 #definePACKET3_DRAW_MULTI_PREAMBLE 0x8F
 #definePACKET3_FRAME_CONTROL   0x90
+#  define FRAME_TMZ(1 << 0)
 #  define FRAME_CMD(x) ((x) << 28)
/*
 * x=0: tmz_begin
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15d.h 
b/drivers/gpu/drm/amd/amdgpu/soc15d.h
index edfe508..295d68c 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15d.h
+++ b/drivers/gpu/drm/amd/amdgpu/soc15d.h
@@ -286,6 +286,7 @@
 #definePACKET3_WAIT_ON_DE_COUNTER_DIFF 0x88
 #definePACKET3_SWITCH_BUFFER   0x8B
 #define PACKET3_FRAME_CONTROL  0x90
+#  define FRAME_TMZ(1 << 0)
 #  define FRAME_CMD(x) ((x) << 28)
/*
 * x=0: tmz_begin
-- 
2.7.4
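
For reference, the payload dword of a FRAME_CONTROL packet can be composed from the two definitions added above. The helper below is a hypothetical user-space sketch mirroring how the later emit_tmz patch combines them (FRAME_TMZ only when the device supports TMZ and the submission is trusted); it is not driver code.

```c
#include <assert.h>
#include <stdint.h>

/* Definitions from the patch: FRAME_TMZ is bit 0, and FRAME_CMD(x)
 * places the command in bits 31:28 (x=0: tmz_begin, x=1: tmz_end). */
#define FRAME_TMZ    (1 << 0)
#define FRAME_CMD(x) ((x) << 28)

/* Compose the FRAME_CONTROL payload dword: set the TMZ bit only when
 * TMZ is enabled on the device and the job is trusted. */
static uint32_t frame_control_dw(int tmz_enabled, int trusted, int start)
{
	return ((tmz_enabled && trusted) ? FRAME_TMZ : 0) |
	       FRAME_CMD(start ? 0 : 1);
}
```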


[PATCH v2 08/11] drm/amdgpu: expand the emit tmz interface with trusted flag

2019-09-25 Thread Huang, Ray
This patch expands the emit_tmz function with a trusted flag, which is used
when we want to submit the command buffer in trusted mode.

Signed-off-by: Huang Rui 
Reviewed-by: Alex Deucher 
Acked-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  4 ++--
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 16 
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c| 13 ++---
 4 files changed, 25 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 6882eeb..54741ba 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -229,7 +229,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned 
num_ibs,
}
 
if (ring->funcs->emit_tmz)
-   amdgpu_ring_emit_tmz(ring, false);
+   amdgpu_ring_emit_tmz(ring, false, false);
 
 #ifdef CONFIG_X86_64
if (!(adev->flags & AMD_IS_APU))
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 930316e..34aa63a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -166,7 +166,7 @@ struct amdgpu_ring_funcs {
void (*emit_reg_write_reg_wait)(struct amdgpu_ring *ring,
uint32_t reg0, uint32_t reg1,
uint32_t ref, uint32_t mask);
-   void (*emit_tmz)(struct amdgpu_ring *ring, bool start);
+   void (*emit_tmz)(struct amdgpu_ring *ring, bool start, bool trusted);
/* priority functions */
void (*set_priority) (struct amdgpu_ring *ring,
  enum drm_sched_priority priority);
@@ -247,7 +247,7 @@ struct amdgpu_ring {
 #define amdgpu_ring_emit_wreg(r, d, v) (r)->funcs->emit_wreg((r), (d), (v))
 #define amdgpu_ring_emit_reg_wait(r, d, v, m) (r)->funcs->emit_reg_wait((r), 
(d), (v), (m))
 #define amdgpu_ring_emit_reg_write_reg_wait(r, d0, d1, v, m) 
(r)->funcs->emit_reg_write_reg_wait((r), (d0), (d1), (v), (m))
-#define amdgpu_ring_emit_tmz(r, b) (r)->funcs->emit_tmz((r), (b))
+#define amdgpu_ring_emit_tmz(r, b, s) (r)->funcs->emit_tmz((r), (b), (s))
 #define amdgpu_ring_pad_ib(r, ib) ((r)->funcs->pad_ib((r), (ib)))
 #define amdgpu_ring_init_cond_exec(r) (r)->funcs->init_cond_exec((r))
 #define amdgpu_ring_patch_cond_exec(r,o) (r)->funcs->patch_cond_exec((r),(o))
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index a2f4ff1..18f741b 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -243,7 +243,8 @@ static int gfx_v10_0_rlc_backdoor_autoload_enable(struct 
amdgpu_device *adev);
 static int gfx_v10_0_wait_for_rlc_autoload_complete(struct amdgpu_device 
*adev);
 static void gfx_v10_0_ring_emit_ce_meta(struct amdgpu_ring *ring, bool resume);
 static void gfx_v10_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume);
-static void gfx_v10_0_ring_emit_tmz(struct amdgpu_ring *ring, bool start);
+static void gfx_v10_0_ring_emit_tmz(struct amdgpu_ring *ring, bool start,
+   bool trusted);
 
 static void gfx10_kiq_set_resources(struct amdgpu_ring *kiq_ring, uint64_t 
queue_mask)
 {
@@ -4521,7 +4522,7 @@ static void gfx_v10_0_ring_emit_cntxcntl(struct 
amdgpu_ring *ring, uint32_t flag
gfx_v10_0_ring_emit_ce_meta(ring,
flags & AMDGPU_IB_PREEMPTED ? true : false);
 
-   gfx_v10_0_ring_emit_tmz(ring, true);
+   gfx_v10_0_ring_emit_tmz(ring, true, false);
 
dw2 |= 0x8000; /* set load_enable otherwise this package is just 
NOPs */
if (flags & AMDGPU_HAVE_CTX_SWITCH) {
@@ -4679,10 +4680,17 @@ static void gfx_v10_0_ring_emit_de_meta(struct 
amdgpu_ring *ring, bool resume)
   sizeof(de_payload) >> 2);
 }
 
-static void gfx_v10_0_ring_emit_tmz(struct amdgpu_ring *ring, bool start)
+static void gfx_v10_0_ring_emit_tmz(struct amdgpu_ring *ring, bool start,
+   bool trusted)
 {
amdgpu_ring_write(ring, PACKET3(PACKET3_FRAME_CONTROL, 0));
-   amdgpu_ring_write(ring, FRAME_CMD(start ? 0 : 1)); /* frame_end */
+   /*
+* cmd = 0: frame begin
+* cmd = 1: frame end
+*/
+   amdgpu_ring_write(ring,
+ ((ring->adev->tmz.enabled && trusted) ? FRAME_TMZ : 0)
+ | FRAME_CMD(start ? 0 : 1));
 }
 
 static void gfx_v10_0_ring_emit_rreg(struct amdgpu_ring *ring, uint32_t reg)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 90348fb29..fa264d5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -5240,10 +5240,17 @@ static void gfx_v9_0_ring_emit_de_meta(struct 
amdgpu_ring *ring)
amdgpu_ring_write_multiple(ring,

[PATCH v2 06/11] drm/amdgpu: add function to check tmz capability (v4)

2019-09-25 Thread Huang, Ray
Add a function to check TMZ capability based on the kernel parameter and the ASIC type.

v2: use a per device tmz variable instead of global amdgpu_tmz.
v3: refine the comments for the function. (Luben)
v4: add amdgpu_tmz.c/h for future use.

Signed-off-by: Huang Rui 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/Makefile|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  3 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c| 49 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h|  3 ++
 4 files changed, 56 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile 
b/drivers/gpu/drm/amd/amdgpu/Makefile
index 91369c8..270ce82 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -55,7 +55,7 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o \
amdgpu_gmc.o amdgpu_mmhub.o amdgpu_xgmi.o amdgpu_csa.o amdgpu_ras.o 
amdgpu_vm_cpu.o \
amdgpu_vm_sdma.o amdgpu_pmu.o amdgpu_discovery.o amdgpu_ras_eeprom.o 
amdgpu_nbio.o \
-   amdgpu_umc.o smu_v11_0_i2c.o
+   amdgpu_umc.o smu_v11_0_i2c.o amdgpu_tmz.o
 
 amdgpu-$(CONFIG_PERF_EVENTS) += amdgpu_pmu.o
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 2535db2..e376fe5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -63,6 +63,7 @@
 #include "amdgpu_xgmi.h"
 #include "amdgpu_ras.h"
 #include "amdgpu_pmu.h"
+#include "amdgpu_tmz.h"
 
 #include 
 
@@ -1032,6 +1033,8 @@ static int amdgpu_device_check_arguments(struct 
amdgpu_device *adev)
 
adev->firmware.load_type = amdgpu_ucode_get_load_type(adev, 
amdgpu_fw_load_type);
 
+   adev->tmz.enabled = amdgpu_is_tmz(adev);
+
return ret;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c
new file mode 100644
index 000..14a5500
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c
@@ -0,0 +1,49 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include 
+#include "amdgpu.h"
+#include "amdgpu_tmz.h"
+
+
+/**
+ * amdgpu_is_tmz - validate trusted memory zone support
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Return true if @adev supports trusted memory zones (TMZ), and return false if
+ * @adev does not support TMZ.
+ */
+bool amdgpu_is_tmz(struct amdgpu_device *adev)
+{
+   if (!amdgpu_tmz)
+   return false;
+
+   if (adev->asic_type < CHIP_RAVEN || adev->asic_type == CHIP_ARCTURUS) {
+   dev_warn(adev->dev, "doesn't support trusted memory zones 
(TMZ)\n");
+   return false;
+   }
+
+   dev_info(adev->dev, "TMZ feature is enabled\n");
+
+   return true;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h
index 24bbbc2..28e0517 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h
@@ -33,4 +33,7 @@ struct amdgpu_tmz {
boolenabled;
 };
 
+
+extern bool amdgpu_is_tmz(struct amdgpu_device *adev);
+
 #endif
-- 
2.7.4


[PATCH v2 04/11] drm/amdgpu: add tmz feature parameter (v2)

2019-09-25 Thread Huang, Ray
This patch adds a tmz parameter to enable/disable the feature in the amdgpu
kernel module. Normally, by default, it should be auto (relying on the hardware
capability).

But right now, it needs to be set to "off" to avoid breaking other developers'
work because the feature is not yet complete.

The default will be set to "auto" once the feature is stable and completely
verified.

v2: add "auto" option for future use.

Signed-off-by: Huang Rui 
Reviewed-by: Alex Deucher 
Reviewed-by: Christian König 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 11 +++
 2 files changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index a1516a3..930643c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -172,6 +172,7 @@ extern int amdgpu_force_asic_type;
 #ifdef CONFIG_HSA_AMD
 extern int sched_policy;
 #endif
+extern int amdgpu_tmz;
 
 #ifdef CONFIG_DRM_AMDGPU_SI
 extern int amdgpu_si_support;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 6978d17..606f1d3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -145,6 +145,7 @@ int amdgpu_discovery = -1;
 int amdgpu_mes = 0;
 int amdgpu_noretry = 1;
 int amdgpu_force_asic_type = -1;
+int amdgpu_tmz = 0;
 
 struct amdgpu_mgpu_info mgpu_info = {
.mutex = __MUTEX_INITIALIZER(mgpu_info.mutex),
@@ -752,6 +753,16 @@ uint amdgpu_dm_abm_level = 0;
 MODULE_PARM_DESC(abmlevel, "ABM level (0 = off (default), 1-4 = backlight reduction level) ");
 module_param_named(abmlevel, amdgpu_dm_abm_level, uint, 0444);
 
+/**
+ * DOC: tmz (int)
+ * Trust Memory Zone (TMZ) is a method to protect the contents being written to
+ * and read from memory.
+ *
+ * The default value: 0 (off).  TODO: change to auto till it is completed.
+ */
+MODULE_PARM_DESC(tmz, "Enable TMZ feature (-1 = auto, 0 = off (default), 1 = on)");
+module_param_named(tmz, amdgpu_tmz, int, 0444);
+
 static const struct pci_device_id pciidlist[] = {
 #ifdef  CONFIG_DRM_AMDGPU_SI
{0x1002, 0x6780, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH v2 01/11] drm/amdgpu: add UAPI for creating encrypted buffers

2019-09-25 Thread Huang, Ray
From: Alex Deucher 

Add a flag to the GEM_CREATE ioctl to create encrypted buffers.
Buffers with this flag set will be created with the TMZ bit set
in the PTEs or engines accessing them.  This is required in order
to properly access the data from the engines.

Signed-off-by: Alex Deucher 
Reviewed-by: Huang Rui 
Reviewed-by: Christian König 
---
 include/uapi/drm/amdgpu_drm.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index f3ad429..f90b453 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -135,6 +135,11 @@ extern "C" {
  * releasing the memory
  */
 #define AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE (1 << 9)
+/* Flag that BO will be encrypted and that the TMZ bit should be
+ * set in the PTEs when mapping this buffer via GPUVM or
+ * accessing it with various hw blocks
+ */
+#define AMDGPU_GEM_CREATE_ENCRYPTED(1 << 10)
 
 struct drm_amdgpu_gem_create_in  {
/** the requested memory size */
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH v2 03/11] drm/amdgpu: define the TMZ bit for the PTE

2019-09-25 Thread Huang, Ray
From: Alex Deucher 

Define the TMZ (encryption) bit in the page table entry (PTE) for
Raven and newer asics.

Signed-off-by: Alex Deucher 
Reviewed-by: Huang Rui 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 3352a87..4b5d283 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -53,6 +53,9 @@ struct amdgpu_bo_list_entry;
 #define AMDGPU_PTE_SYSTEM  (1ULL << 1)
 #define AMDGPU_PTE_SNOOPED (1ULL << 2)
 
+/* RV+ */
+#define AMDGPU_PTE_TMZ (1ULL << 3)
+
 /* VI only */
 #define AMDGPU_PTE_EXECUTABLE  (1ULL << 4)
 
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH v2 02/11] drm/amdgpu: add UAPI to create secure commands (v3)

2019-09-25 Thread Huang, Ray
From: Luben Tuikov 

Add a flag to the command submission IOCTL
structure which when present indicates that this
command submission should be treated as
secure. The kernel driver uses this flag to
determine whether the engine should be
transitioned to secure or unsecure, or the work
can be submitted to a secure queue depending on
the IP.

v3: the flag is now at command submission IOCTL

Signed-off-by: Luben Tuikov 
Reviewed-by: Alex Deucher 
---
 include/uapi/drm/amdgpu_drm.h | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index f90b453..a101eea 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -207,6 +207,9 @@ union drm_amdgpu_bo_list {
 #define AMDGPU_CTX_OP_QUERY_STATE  3
 #define AMDGPU_CTX_OP_QUERY_STATE2 4
 
+/* Flag the command submission as secure */
+#define AMDGPU_CS_FLAGS_SECURE  (1 << 0)
+
 /* GPU reset status */
 #define AMDGPU_CTX_NO_RESET0
 /* this the context caused it */
@@ -562,7 +565,7 @@ struct drm_amdgpu_cs_in {
/**  Handle of resource list associated with CS */
__u32   bo_list_handle;
__u32   num_chunks;
-   __u32   _pad;
+   __u32   flags;
/** this points to __u64 * which point to cs chunks */
__u64   chunks;
 };
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH v2 05/11] drm/amdgpu: add amdgpu_tmz data structure

2019-09-25 Thread Huang, Ray
This patch adds the amdgpu_tmz structure, which stores all tmz-related fields.

Signed-off-by: Huang Rui 
Reviewed-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h |  6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h | 36 +
 2 files changed, 41 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 930643c..697e8e5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -89,6 +89,7 @@
 #include "amdgpu_mes.h"
 #include "amdgpu_umc.h"
 #include "amdgpu_mmhub.h"
+#include "amdgpu_tmz.h"
 
 #define MAX_GPU_INSTANCE   16
 
@@ -916,6 +917,9 @@ struct amdgpu_device {
boolenable_mes;
struct amdgpu_mes   mes;
 
+   /* tmz */
+   struct amdgpu_tmz   tmz;
+
struct amdgpu_ip_block  ip_blocks[AMDGPU_MAX_IP_NUM];
int num_ip_blocks;
struct mutexmn_lock;
@@ -927,7 +931,7 @@ struct amdgpu_device {
atomic64_t gart_pin_size;
 
/* soc15 register offset based on ip, instance and  segment */
-   uint32_t*reg_offset[MAX_HWIP][HWIP_MAX_INSTANCE];
+   uint32_t*reg_offset[MAX_HWIP][HWIP_MAX_INSTANCE];
 
const struct amdgpu_df_funcs*df_funcs;
const struct amdgpu_mmhub_funcs *mmhub_funcs;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h
new file mode 100644
index 000..24bbbc2
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __AMDGPU_TMZ_H__
+#define __AMDGPU_TMZ_H__
+
+#include "amdgpu.h"
+
+/*
+ * Trust memory zone stuff
+ */
+struct amdgpu_tmz {
+   boolenabled;
+};
+
+#endif
-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

[PATCH v2 00/11] drm/amdgpu: introduce secure buffer object support (trusted memory zone)

2019-09-25 Thread Huang, Ray
Hi all,

This series of patches introduces a feature to support secure buffer objects.
The Trusted Memory Zone (TMZ) is a method to protect the contents being written
to and read from memory. We use the TMZ hardware memory protection scheme to
implement the secure buffer object support.

TMZ is page-level protection: the hardware detects the TMZ bit in the page
table entry to mark the current page as encrypted. On top of this hardware
feature, we design BO-level protection in the kernel driver by providing a new
flag, AMDGPU_GEM_CREATE_ENCRYPTED, for the gem create ioctl so that libdrm can
allocate secure buffers. We also provide the new AMDGPU_CS_FLAGS_SECURE flag to
indicate whether a command submission is trusted. If a BO is secure, its data
is encrypted, and only trusted IP blocks such as gfx, sdma, and vcn are able to
decrypt it; the CPU, as an un-trusted IP, is unable to read the secure buffer.

We will submit the new secure command support for libdrm later, and create a new
test suite to verify the security feature in the libdrm unit tests.

Suite id = 11: Name 'Security Tests status: ENABLED'
Test id 1: Name: 'allocate secure buffer test status: ENABLED'
Test id 2: Name: 'graphics secure command submission status: ENABLED'

Changes from V1 -> V2:
- Change the UAPI from secure context to secure command submission for display
  server and client usage. (Thanks Luben)
- Remove ttm_mem_reg macro to get ttm_bo object.
- Move the amdgpu_bo_encrypted into amdgpu_vm_bo_update(). 

Thanks,
Ray

Alex Deucher (3):
  drm/amdgpu: add UAPI for creating encrypted buffers
  drm/amdgpu: define the TMZ bit for the PTE
  drm/amdgpu: set TMZ bits in PTEs for secure BO (v4)

Huang Rui (7):
  drm/amdgpu: add tmz feature parameter (v2)
  drm/amdgpu: add amdgpu_tmz data structure
  drm/amdgpu: add function to check tmz capability (v4)
  drm/amdgpu: add tmz bit in frame control packet
  drm/amdgpu: expand the emit tmz interface with trusted flag
  drm/amdgpu: expand the context control interface with trust flag
  drm/amdgpu: job is secure iff CS is secure (v3)

Luben Tuikov (1):
  drm/amdgpu: add UAPI to create secure commands (v3)

 drivers/gpu/drm/amd/amdgpu/Makefile|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h|  7 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  8 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  3 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c| 11 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c| 12 +++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h|  2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 11 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h   |  9 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c| 49 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h| 39 
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c |  5 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  3 ++
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 20 +---
 drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c  |  3 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c  |  3 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c  |  3 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c  | 16 +++---
 drivers/gpu/drm/amd/amdgpu/nvd.h   |  1 +
 drivers/gpu/drm/amd/amdgpu/soc15d.h|  1 +
 include/uapi/drm/amdgpu_drm.h  | 10 +-
 22 files changed, 199 insertions(+), 23 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_tmz.h

-- 
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side

2019-09-25 Thread Deucher, Alexander
Reviewed-by: Alex Deucher 

From: amd-gfx  on behalf of Liu, Shaoyun 

Sent: Tuesday, September 24, 2019 6:13 PM
To: amd-gfx@lists.freedesktop.org 
Cc: Liu, Shaoyun 
Subject: [PATCH] drm/amdgpu: Add NAVI12 support from kfd side

Add device info for both navi12 PF and VF

Change-Id: Ifb4035e65c12d153fc30e593fe109f9c7e0541f4
Signed-off-by: shaoyunl 
---
 drivers/gpu/drm/amd/amdkfd/kfd_device.c | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
index f329b82..edfbae5c 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
@@ -387,6 +387,24 @@ static const struct kfd_device_info navi10_device_info = {
 .num_sdma_queues_per_engine = 8,
 };

+static const struct kfd_device_info navi12_device_info = {
+   .asic_family = CHIP_NAVI12,
+   .asic_name = "navi12",
+   .max_pasid_bits = 16,
+   .max_no_of_hqd  = 24,
+   .doorbell_size  = 8,
+   .ih_ring_entry_size = 8 * sizeof(uint32_t),
+   .event_interrupt_class = &event_interrupt_class_v9,
+   .num_of_watch_points = 4,
+   .mqd_size_aligned = MQD_SIZE_ALIGNED,
+   .needs_iommu_device = false,
+   .supports_cwsr = true,
+   .needs_pci_atomics = false,
+   .num_sdma_engines = 2,
+   .num_xgmi_sdma_engines = 0,
+   .num_sdma_queues_per_engine = 8,
+};
+
 static const struct kfd_device_info navi14_device_info = {
 .asic_family = CHIP_NAVI14,
 .asic_name = "navi14",
@@ -425,6 +443,7 @@ static const struct kfd_device_info *kfd_supported_devices[][2] = {
 [CHIP_RENOIR] = {&renoir_device_info, NULL},
 [CHIP_ARCTURUS] = {&arcturus_device_info, &arcturus_device_info},
 [CHIP_NAVI10] = {&navi10_device_info, NULL},
+   [CHIP_NAVI12] = {&navi12_device_info, &navi12_device_info},
 [CHIP_NAVI14] = {&navi14_device_info, NULL},
 };

--
2.7.4

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH] drm/amdgpu: fix error handling in amdgpu_bo_list_create

2019-09-25 Thread Christian König

Ping? Can I get at least an Acked-by for this?

Thanks,
Christian.

Am 18.09.19 um 19:43 schrieb Christian König:

We need to drop normal and userptr BOs separately.

Signed-off-by: Christian König 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c | 7 ++-
  1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
index d497467b7fc6..94908bf269a6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
@@ -139,7 +139,12 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
return 0;
  
  error_free:

-   while (i--) {
+   for (i = 0; i < last_entry; ++i) {
+   struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo);
+
+   amdgpu_bo_unref(&bo);
+   }
+   for (i = first_userptr; i < num_entries; ++i) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(array[i].tv.bo);
  
  		amdgpu_bo_unref(&bo);


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH] drm/amd/powerplay: update arcturus smu-driver interaction header

2019-09-25 Thread Ma, Le
Reviewed-by: Le Ma 

-Original Message-
From: amd-gfx  On Behalf Of Quan, Evan
Sent: Tuesday, September 24, 2019 12:50 PM
To: amd-gfx@lists.freedesktop.org
Cc: Quan, Evan 
Subject: [PATCH] drm/amd/powerplay: update arcturus smu-driver interaction header

To pair the latest SMU firmware.

Change-Id: I376b8c9d0c5a56a343d477a945d63ba894b984d3
Signed-off-by: Evan Quan 
---
 .../amd/powerplay/inc/smu11_driver_if_arcturus.h  | 15 ---
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h |  2 +-
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h
index 40a51a141336..2248d682c462 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu11_driver_if_arcturus.h
@@ -137,23 +137,23 @@
 #define FEATURE_DS_SOCCLK_MASK            (1 << FEATURE_DS_SOCCLK_BIT)
 #define FEATURE_DS_LCLK_MASK              (1 << FEATURE_DS_LCLK_BIT)
 #define FEATURE_DS_FCLK_MASK              (1 << FEATURE_DS_FCLK_BIT)
-#define FEATURE_DS_LCLK_MASK              (1 << FEATURE_DS_LCLK_BIT)
+#define FEATURE_DS_UCLK_MASK              (1 << FEATURE_DS_UCLK_BIT)
 #define FEATURE_GFX_ULV_MASK              (1 << FEATURE_GFX_ULV_BIT)
-#define FEATURE_VCN_PG_MASK               (1 << FEATURE_VCN_PG_BIT)
+#define FEATURE_DPM_VCN_MASK              (1 << FEATURE_DPM_VCN_BIT)
 #define FEATURE_RSMU_SMN_CG_MASK          (1 << FEATURE_RSMU_SMN_CG_BIT)
 #define FEATURE_WAFL_CG_MASK              (1 << FEATURE_WAFL_CG_BIT)
 
 #define FEATURE_PPT_MASK                  (1 << FEATURE_PPT_BIT)
 #define FEATURE_TDC_MASK                  (1 << FEATURE_TDC_BIT)
-#define FEATURE_APCC_MASK                 (1 << FEATURE_APCC_BIT)
+#define FEATURE_APCC_PLUS_MASK            (1 << FEATURE_APCC_PLUS_BIT)
 #define FEATURE_VR0HOT_MASK               (1 << FEATURE_VR0HOT_BIT)
 #define FEATURE_VR1HOT_MASK               (1 << FEATURE_VR1HOT_BIT)
 #define FEATURE_FW_CTF_MASK               (1 << FEATURE_FW_CTF_BIT)
 #define FEATURE_FAN_CONTROL_MASK          (1 << FEATURE_FAN_CONTROL_BIT)
 #define FEATURE_THERMAL_MASK              (1 << FEATURE_THERMAL_BIT)
 
-#define FEATURE_OUT_OF_BAND_MONITOR_MASK  (1 << EATURE_OUT_OF_BAND_MONITOR_BIT)
-#define FEATURE_TEMP_DEPENDENT_VMIN_MASK  (1 << FEATURE_TEMP_DEPENDENT_VMIN_MASK)
+#define FEATURE_OUT_OF_BAND_MONITOR_MASK  (1 << FEATURE_OUT_OF_BAND_MONITOR_BIT)
+#define FEATURE_TEMP_DEPENDENT_VMIN_MASK  (1 << FEATURE_TEMP_DEPENDENT_VMIN_BIT)
 
 
 //FIXME need updating
@@ -806,7 +806,7 @@ typedef struct {
 
   uint32_t P2VCharzFreq[AVFS_VOLTAGE_COUNT]; // in 10KHz units
 
-  uint32_t EnabledAvfsModules[2];
+  uint32_t EnabledAvfsModules[3];
 
   uint32_t MmHubPadding[8]; // SMU internal use
 } AvfsFuseOverride_t;
@@ -869,7 +869,8 @@ typedef struct {
 //#define TABLE_ACTIVITY_MONITOR_COEFF  7
 #define TABLE_OVERDRIVE   7
 #define TABLE_WAFL_XGMI_TOPOLOGY  8
-#define TABLE_COUNT   9
+#define TABLE_I2C_COMMANDS9
+#define TABLE_COUNT   10
 
 // These defines are used with the SMC_MSG_SetUclkFastSwitch message.
 typedef enum {
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
index af1add570153..e71f6fedf3c6 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
@@ -27,7 +27,7 @@
 
 #define SMU11_DRIVER_IF_VERSION_INV 0x
 #define SMU11_DRIVER_IF_VERSION_VG20 0x13
-#define SMU11_DRIVER_IF_VERSION_ARCT 0x0A
+#define SMU11_DRIVER_IF_VERSION_ARCT 0x0D
 #define SMU11_DRIVER_IF_VERSION_NV10 0x33
 #define SMU11_DRIVER_IF_VERSION_NV12 0x33
 #define SMU11_DRIVER_IF_VERSION_NV14 0x34
--
2.23.0

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH] drm7amdgpu: once more fix amdgpu_bo_create_kernel_at

2019-09-25 Thread Deng, Emily
Yes, I have already tested it.

Best wishes
Emily Deng



>-Original Message-
>From: Christian König 
>Sent: Wednesday, September 25, 2019 5:36 PM
>To: Deng, Emily ; amd-gfx@lists.freedesktop.org
>Subject: Re: [PATCH] drm7amdgpu: once more fix
>amdgpu_bo_create_kernel_at
>
>Hi Emily,
>
>have you also tested this? I don't have the hardware to test it so that would
>be rather nice to have.
>
>Thanks,
>Christian.
>
>Am 25.09.19 um 11:31 schrieb Deng, Emily:
>> Reviewed-by: Emily Deng 
>>
>>> -Original Message-
>>> From: Christian König 
>>> Sent: Tuesday, September 24, 2019 7:56 PM
>>> To: Deng, Emily ; amd-gfx@lists.freedesktop.org
>>> Subject: [PATCH] drm7amdgpu: once more fix
>amdgpu_bo_create_kernel_at
>>>
>>> When CPU access is needed we should tell that to
>>> amdgpu_bo_create_reserved() or otherwise the access is denied later on.
>>>
>>> Signed-off-by: Christian König 
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 9 ++---
>>> 1 file changed, 6 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> index 12d2adcdf14e..f10b6175e20c 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> @@ -369,7 +369,7 @@ int amdgpu_bo_create_kernel_at(struct
>>> amdgpu_device *adev,
>>> size = ALIGN(size, PAGE_SIZE);
>>>
>>> r = amdgpu_bo_create_reserved(adev, size, PAGE_SIZE, domain,
>bo_ptr,
>>> - NULL, NULL);
>>> + NULL, cpu_addr);
>>> if (r)
>>> return r;
>>>
>>> @@ -377,12 +377,15 @@ int amdgpu_bo_create_kernel_at(struct
>>> amdgpu_device *adev,
>>>  * Remove the original mem node and create a new one at the
>request
>>>  * position.
>>>  */
>>> +   if (cpu_addr)
>>> +   amdgpu_bo_kunmap(*bo_ptr);
>>> +
>>> +   ttm_bo_mem_put(&(*bo_ptr)->tbo, &(*bo_ptr)->tbo.mem);
>>> +
>>> for (i = 0; i < (*bo_ptr)->placement.num_placement; ++i) {
>>> (*bo_ptr)->placements[i].fpfn = offset >> PAGE_SHIFT;
>>> (*bo_ptr)->placements[i].lpfn = (offset + size) >> PAGE_SHIFT;
>>> }
>>> -
>>> -   ttm_bo_mem_put(&(*bo_ptr)->tbo, &(*bo_ptr)->tbo.mem);
>>> r = ttm_bo_mem_space(&(*bo_ptr)->tbo, &(*bo_ptr)->placement,
>>>  &(*bo_ptr)->tbo.mem, &ctx);
>>> if (r)
>>> --
>>> 2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH] drm7amdgpu: once more fix amdgpu_bo_create_kernel_at

2019-09-25 Thread Christian König

Hi Emily,

have you also tested this? I don't have the hardware to test it so that 
would be rather nice to have.


Thanks,
Christian.

Am 25.09.19 um 11:31 schrieb Deng, Emily:

Reviewed-by: Emily Deng 


-Original Message-
From: Christian König 
Sent: Tuesday, September 24, 2019 7:56 PM
To: Deng, Emily ; amd-gfx@lists.freedesktop.org
Subject: [PATCH] drm7amdgpu: once more fix amdgpu_bo_create_kernel_at

When CPU access is needed we should tell that to
amdgpu_bo_create_reserved() or otherwise the access is denied later on.

Signed-off-by: Christian König 
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 12d2adcdf14e..f10b6175e20c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -369,7 +369,7 @@ int amdgpu_bo_create_kernel_at(struct
amdgpu_device *adev,
size = ALIGN(size, PAGE_SIZE);

r = amdgpu_bo_create_reserved(adev, size, PAGE_SIZE, domain,
bo_ptr,
- NULL, NULL);
+ NULL, cpu_addr);
if (r)
return r;

@@ -377,12 +377,15 @@ int amdgpu_bo_create_kernel_at(struct
amdgpu_device *adev,
 * Remove the original mem node and create a new one at the
request
 * position.
 */
+   if (cpu_addr)
+   amdgpu_bo_kunmap(*bo_ptr);
+
+   ttm_bo_mem_put(&(*bo_ptr)->tbo, &(*bo_ptr)->tbo.mem);
+
for (i = 0; i < (*bo_ptr)->placement.num_placement; ++i) {
(*bo_ptr)->placements[i].fpfn = offset >> PAGE_SHIFT;
(*bo_ptr)->placements[i].lpfn = (offset + size) >> PAGE_SHIFT;
}
-
-   ttm_bo_mem_put(&(*bo_ptr)->tbo, &(*bo_ptr)->tbo.mem);
r = ttm_bo_mem_space(&(*bo_ptr)->tbo, &(*bo_ptr)->placement,
 &(*bo_ptr)->tbo.mem, &ctx);
if (r)
--
2.14.1


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH] drm7amdgpu: once more fix amdgpu_bo_create_kernel_at

2019-09-25 Thread Deng, Emily
Reviewed-by: Emily Deng 

>-Original Message-
>From: Christian König 
>Sent: Tuesday, September 24, 2019 7:56 PM
>To: Deng, Emily ; amd-gfx@lists.freedesktop.org
>Subject: [PATCH] drm7amdgpu: once more fix amdgpu_bo_create_kernel_at
>
>When CPU access is needed we should tell that to
>amdgpu_bo_create_reserved() or otherwise the access is denied later on.
>
>Signed-off-by: Christian König 
>---
> drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 9 ++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
>diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>index 12d2adcdf14e..f10b6175e20c 100644
>--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>@@ -369,7 +369,7 @@ int amdgpu_bo_create_kernel_at(struct
>amdgpu_device *adev,
>   size = ALIGN(size, PAGE_SIZE);
>
>   r = amdgpu_bo_create_reserved(adev, size, PAGE_SIZE, domain,
>bo_ptr,
>-NULL, NULL);
>+NULL, cpu_addr);
>   if (r)
>   return r;
>
>@@ -377,12 +377,15 @@ int amdgpu_bo_create_kernel_at(struct
>amdgpu_device *adev,
>* Remove the original mem node and create a new one at the
>request
>* position.
>*/
>+  if (cpu_addr)
>+  amdgpu_bo_kunmap(*bo_ptr);
>+
>+  ttm_bo_mem_put(&(*bo_ptr)->tbo, &(*bo_ptr)->tbo.mem);
>+
>   for (i = 0; i < (*bo_ptr)->placement.num_placement; ++i) {
>   (*bo_ptr)->placements[i].fpfn = offset >> PAGE_SHIFT;
>   (*bo_ptr)->placements[i].lpfn = (offset + size) >> PAGE_SHIFT;
>   }
>-
>-  ttm_bo_mem_put(&(*bo_ptr)->tbo, &(*bo_ptr)->tbo.mem);
>   r = ttm_bo_mem_space(&(*bo_ptr)->tbo, &(*bo_ptr)->placement,
>&(*bo_ptr)->tbo.mem, &ctx);
>   if (r)
>--
>2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

RE: [PATCH] drm/amdgpu: restrict hotplug error message

2019-09-25 Thread Deng, Emily
Reviewed-by: Emily Deng 

>-Original Message-
>From: Christian König 
>Sent: Thursday, September 19, 2019 9:17 PM
>To: amd-gfx@lists.freedesktop.org
>Cc: Deng, Emily ; Zhang, Jack (Jian)
>
>Subject: [PATCH] drm/amdgpu: restrict hotplug error message
>
>We should print the error only when we are hotplugged and crash basically all
>userspace applications.
>
>Signed-off-by: Christian König 
>---
> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 5 -
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
>diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>index 6978d17a406b..5cb808cb8108 100644
>--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>@@ -1098,7 +1098,10 @@ amdgpu_pci_remove(struct pci_dev *pdev)
> {
>   struct drm_device *dev = pci_get_drvdata(pdev);
>
>-  DRM_ERROR("Device removal is currently not supported outside of fbcon\n");
>+#ifdef MODULE
>+  if (THIS_MODULE->state != MODULE_STATE_GOING)
>+#endif
>+  DRM_ERROR("Hotplug removal is not supported\n");
>   drm_dev_unplug(dev);
>   drm_dev_put(dev);
>   pci_disable_device(pdev);
>--
>2.14.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH 2/2] drm/amdgpu: Add modeset module option

2019-09-25 Thread Koenig, Christian
Am 25.09.19 um 10:07 schrieb Dave Airlie:
> On Sat, 31 Mar 2018 at 06:45, Takashi Iwai  wrote:
>> amdgpu driver lacks the modeset module option other drm drivers provide
>> for enforcing or disabling the driver load.  Interestingly, the
>> amdgpu_mode variable declaration is already found in the header file,
>> but the actual implementation seems to have been forgotten.
>>
>> This patch adds the missing piece.
> I'd like to land this patch, I realise people have NAKed it but for
> consistency across drivers I'm going to ask we land it or something
> like it.
>
> The main use case for this is actually where you have amdgpu crashes
> on load, and you want to debug them, people boot with nomodeset and
> then modprobe amdgpu modeset=1 to get the crash in a running system.
> This works for numerous other drivers, I'm not sure why amdgpu needs
> to be the odd one out.

Because this is essentially the wrong approach.

The correct way to prevent a module from automatically loading is to add 
modprobe.blacklist=$name to the kernel command line.

The modeset and nomodeset kernel options were used to switch between 
KMS and UMS, not to disable driver load.

We should have removed those options with the removal of UMS; otherwise 
they become just another piece of ancient cruft we need to carry forward 
in potentially all drivers.

Regards,
Christian.

>
> Reviewed-by: Dave Airlie 
>
> Dave.

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Re: [PATCH 2/2] drm/amdgpu: Add modeset module option

2019-09-25 Thread Dave Airlie
On Sat, 31 Mar 2018 at 06:45, Takashi Iwai  wrote:
>
> amdgpu driver lacks the modeset module option other drm drivers provide
> for enforcing or disabling the driver load.  Interestingly, the
> amdgpu_mode variable declaration is already found in the header file,
> but the actual implementation seems to have been forgotten.
>
> This patch adds the missing piece.

I'd like to land this patch, I realise people have NAKed it but for
consistency across drivers I'm going to ask we land it or something
like it.

The main use case for this is actually where you have amdgpu crashes
on load, and you want to debug them, people boot with nomodeset and
then modprobe amdgpu modeset=1 to get the crash in a running system.
This works for numerous other drivers, I'm not sure why amdgpu needs
to be the odd one out.

Reviewed-by: Dave Airlie 

Dave.
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx