Re: Req: about Polaris with RKL platform

2021-08-09 Thread Mario Limonciello
On Mon, Aug 9, 2021 at 9:37 AM Alex Deucher  wrote:

> On Mon, Aug 9, 2021 at 9:59 AM Koba Ko  wrote:
> >
> > Previously, AMD had a noise issue with an AMD dGPU on the RKL platform,
> > and AMD provided a workaround parameter:
> > #modprobe amdgpu ppfeaturemask=0xfff7bffb
> >
> > I think it's better to check for this and assign the value inside amdgpu.
> > I'm having trouble determining the PCH type (RKL or otherwise); I
> > searched the AMD DRM driver and couldn't find anything related.
> > Could someone point me to an existing function, if there is one?
> >
> > Here's a proposal: to check for the RKL PCH in the amdgpu driver,
> > the PCH definitions would have to be split out of intel_pch.h in the
> > i915 folder into include/drm/intel_pch_definition.h.
>
> Yes, something like that would work.
>

Can the issue that prompted this also happen with other ASICs in the
newer SMU families?  If so, it should probably either be added to all of
them, or further up in the code where the mask normally gets set from
the module parameter, so the extra check happens there.


> Alex
>
>
> >
> > > --- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
> > > +++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
> > > @@ -1629,7 +1629,7 @@ static void smu7_init_dpm_defaults(struct
> pp_hwmgr *hwmgr)
> > >
> > > data->mclk_dpm_key_disabled = hwmgr->feature_mask &
> PP_MCLK_DPM_MASK ? false : true;
> > > data->sclk_dpm_key_disabled = hwmgr->feature_mask &
> PP_SCLK_DPM_MASK ? false : true;
> > > -   data->pcie_dpm_key_disabled = hwmgr->feature_mask &
> PP_PCIE_DPM_MASK ? false : true;
> > > +   data->pcie_dpm_key_disabled = is_rkl_pch() ||
> !(hwmgr->feature_mask & PP_PCIE_DPM_MASK);
> > > /* need to set voltage control types before EVV patching */
> > > data->voltage_control = SMU7_VOLTAGE_CONTROL_NONE;
> > > data->vddci_control = SMU7_VOLTAGE_CONTROL_NONE;
>


-- 
Mario Limonciello
supe...@gmail.com


Re: Req: about Polaris with RKL platform

2021-08-09 Thread Koba Ko
On Tue, Aug 10, 2021 at 12:45 PM Mario Limonciello  wrote:
>
>
>
> On Mon, Aug 9, 2021 at 9:37 AM Alex Deucher  wrote:
>>
>> On Mon, Aug 9, 2021 at 9:59 AM Koba Ko  wrote:
>> >
>> > Previously, AMD had a noise issue with an AMD dGPU on the RKL platform,
>> > and AMD provided a workaround parameter:
>> > #modprobe amdgpu ppfeaturemask=0xfff7bffb
>> >
>> > I think it's better to check for this and assign the value inside amdgpu.
>> > I'm having trouble determining the PCH type (RKL or otherwise); I
>> > searched the AMD DRM driver and couldn't find anything related.
>> > Could someone point me to an existing function, if there is one?
>> >
>> > Here's a proposal: to check for the RKL PCH in the amdgpu driver,
>> > the PCH definitions would have to be split out of intel_pch.h in the
>> > i915 folder into include/drm/intel_pch_definition.h.
>>
>> Yes, something like that would work.
>
>
> Can the issue that prompted this also happen with other ASICs in the
> newer SMU families?  If so, it should probably either be added to all of
> them, or further up in the code where the mask normally gets set from
> the module parameter, so the extra check happens there.

Could the AMD folks please clarify this?

As far as I know, for the SMU series AMD upstreamed this change only for
smu7, and also provided the module parameter:
1. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=9d03730ecbc5afabfda26d4dbb014310bc4ea4d9
2. #modprobe amdgpu ppfeaturemask=0xfff7bffb

>
>>
>> Alex
>>
>>
>> >
>> > > --- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
>> > > +++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
>> > > @@ -1629,7 +1629,7 @@ static void smu7_init_dpm_defaults(struct pp_hwmgr 
>> > > *hwmgr)
>> > >
>> > > data->mclk_dpm_key_disabled = hwmgr->feature_mask & 
>> > > PP_MCLK_DPM_MASK ? false : true;
>> > > data->sclk_dpm_key_disabled = hwmgr->feature_mask & 
>> > > PP_SCLK_DPM_MASK ? false : true;
>> > > -   data->pcie_dpm_key_disabled = hwmgr->feature_mask & 
>> > > PP_PCIE_DPM_MASK ? false : true;
>> > > +   data->pcie_dpm_key_disabled = is_rkl_pch() || 
>> > > !(hwmgr->feature_mask & PP_PCIE_DPM_MASK);
>> > > /* need to set voltage control types before EVV patching */
>> > > data->voltage_control = SMU7_VOLTAGE_CONTROL_NONE;
>> > > data->vddci_control = SMU7_VOLTAGE_CONTROL_NONE;
>
>
>
> --
> Mario Limonciello
> supe...@gmail.com


Re: [PATCHv2 2/2] drm/amd/amdgpu: add tdr support for embeded hw_fence

2021-08-09 Thread Jingwen Chen
On Mon Aug 09, 2021 at 12:24:37PM -0400, Andrey Grodzovsky wrote:
> 
> On 2021-08-05 4:31 a.m., Jingwen Chen wrote:
> > [Why]
> > After embedding the hw_fence into amdgpu_job, we need to add TDR
> > support for this feature.
> > 
> > [How]
> > 1. Add a resubmit_flag for resubmitted jobs.
> > 2. Clear job fences from the RCU-protected array and force-complete
> > vm flush fences in pre_asic_reset.
> > 3. Skip dma_fence_get for resubmitted jobs and add a dma_fence_put
> > for guilty jobs.
> > v2:
> > use a job_run_counter in amdgpu_job to replace the resubmit_flag in
> > drm_sched_job. When job_run_counter >= 1, the job is a resubmitted
> > job.
> > 
> > Signed-off-by: Jack Zhang 
> > Signed-off-by: Jingwen Chen 
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 12 +++-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 13 +
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c|  5 -
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h|  3 +++
> >   4 files changed, 27 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > index 9e53ff851496..ade2fa07a50a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > @@ -4447,7 +4447,7 @@ int amdgpu_device_mode1_reset(struct amdgpu_device 
> > *adev)
> >   int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
> >  struct amdgpu_reset_context *reset_context)
> >   {
> > -   int i, r = 0;
> > +   int i, j, r = 0;
> > struct amdgpu_job *job = NULL;
> > bool need_full_reset =
> > test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags);
> > @@ -4471,6 +4471,16 @@ int amdgpu_device_pre_asic_reset(struct 
> > amdgpu_device *adev,
> > if (!ring || !ring->sched.thread)
> > continue;
> > +   /*clear job fence from fence drv to avoid force_completion
> > +*leave NULL and vm flush fence in fence drv */
> > +   for (j = 0; j <= ring->fence_drv.num_fences_mask; j ++) {
> > +   struct dma_fence *old,**ptr;
> > +   ptr = &ring->fence_drv.fences[j];
> > +   old = rcu_dereference_protected(*ptr, 1);
> > +   if (old && test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, 
> > &old->flags)) {
> > +   RCU_INIT_POINTER(*ptr, NULL);
> > +   }
> > +   }
> > /* after all hw jobs are reset, hw fence is meaningless, so 
> > force_completion */
> > amdgpu_fence_driver_force_completion(ring);
> > }
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > index 5e29d797a265..c9752cf794fb 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > @@ -159,10 +159,15 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, 
> > struct dma_fence **f, struct amd
> > }
> > seq = ++ring->fence_drv.sync_seq;
> > -   dma_fence_init(fence, &amdgpu_fence_ops,
> > -  &ring->fence_drv.lock,
> > -  adev->fence_context + ring->idx,
> > -  seq);
> > +   if (job != NULL && job->job_run_counter) {
> > +   /* reinit seq for resubmitted jobs */
> > +   fence->seqno = seq;
> > +   } else {
> > +   dma_fence_init(fence, &amdgpu_fence_ops,
> > +   &ring->fence_drv.lock,
> > +   adev->fence_context + ring->idx,
> > +   seq);
> > +   }
> 
> 
> I think this should be in the first patch actually (and the counter too),
> without it the first patch is buggy.
> 
I originally split the two patches into adding the job submission
sequence and adding the TDR sequence. But yes, I should merge them;
otherwise the TDR sequence will fail without the second patch.
Will send a merged version today.

Best Regards,
Jingwen
> 
> > if (job != NULL) {
> > /* mark this fence has a parent job */
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c 
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > index 65a395060de2..19b13a65c73b 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > @@ -254,6 +254,7 @@ static struct dma_fence *amdgpu_job_run(struct 
> > drm_sched_job *sched_job)
> > dma_fence_set_error(finished, -ECANCELED);/* skip IB as well if 
> > VRAM lost */
> > if (finished->error < 0) {
> > +   dma_fence_put(&job->hw_fence);
> 
> 
> Would put this check bellow with the job_run_counter check
> 
> 
> > DRM_INFO("Skip scheduling IBs!\n");
> > } else {
> > r = amdgpu_ib_schedule(ring, job->num_ibs, job->ibs, job,
> > @@ -262,7 +263,9 @@ static struct dma_fence *amdgpu_job_run(struct 
> > drm_sched_job *sched_job)
> > DRM_ERROR("Error

[PATCH] drm/amdkfd: fix random KFDSVMRangeTest.SetGetAttributesTest test failure

2021-08-09 Thread Yifan Zhang
KFDSVMRangeTest.SetGetAttributesTest randomly fails in stress test.

Note: Google Test filter = KFDSVMRangeTest.*
[==] Running 18 tests from 1 test case.
[--] Global test environment set-up.
[--] 18 tests from KFDSVMRangeTest
[ RUN  ] KFDSVMRangeTest.BasicSystemMemTest
[   OK ] KFDSVMRangeTest.BasicSystemMemTest (30 ms)
[ RUN  ] KFDSVMRangeTest.SetGetAttributesTest
[  ] Get default atrributes
/home/yifan/brahma/libhsakmt/tests/kfdtest/src/KFDSVMRangeTest.cpp:154: Failure
Value of: expectedDefaultResults[i]
  Actual: 4294967295
Expected: outputAttributes[i].value
Which is: 0
/home/yifan/brahma/libhsakmt/tests/kfdtest/src/KFDSVMRangeTest.cpp:154: Failure
Value of: expectedDefaultResults[i]
  Actual: 4294967295
Expected: outputAttributes[i].value
Which is: 0
/home/yifan/brahma/libhsakmt/tests/kfdtest/src/KFDSVMRangeTest.cpp:152: Failure
Value of: expectedDefaultResults[i]
  Actual: 4
Expected: outputAttributes[i].type
Which is: 2
[  ] Setting/Getting atrributes
[  FAILED  ]

The root cause is that the SVM deferred work queue has not finished when
svm_range_get_attr is called, so stale SVM interval tree data makes
svm_range_get_attr return a wrong result. Flush the work queue before
iterating the SVM interval tree.

Signed-off-by: Yifan Zhang 
---
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index f811a3a24cd2..192e9401bed5 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -3072,6 +3072,9 @@ svm_range_get_attr(struct kfd_process *p, uint64_t start, 
uint64_t size,
pr_debug("svms 0x%p [0x%llx 0x%llx] nattr 0x%x\n", &p->svms, start,
 start + size - 1, nattr);
 
+   /* flush pending deferred work */
+   flush_work(&p->svms.deferred_list_work);
+
mmap_read_lock(mm);
r = svm_range_is_valid(p, start, size);
mmap_read_unlock(mm);
-- 
2.25.1



Re: [PATCH v2] drm/amdkfd: AIP mGPUs best prefetch location for xnack on

2021-08-09 Thread Felix Kuehling
Am 2021-08-09 um 6:21 p.m. schrieb Philip Yang:
> For xnack on, if the range is ACCESS or ACCESS_IN_PLACE (AIP) for a
> single GPU, or the range is ACCESS_IN_PLACE for mGPUs and all mGPUs are
> connected on the same XGMI hive, the best prefetch location is the
> prefetch_loc GPU. Otherwise, the best prefetch location is always the
> CPU, because a GPU cannot map the vram of other GPUs through small-bar
> PCIe.

I don't think small-bar is really a factor here. Even with large-BAR,
our P2P mappings are not coherent like XGMI mappings are. So we wouldn't
be able to use P2P even on large-BAR systems. So I would modify this
sentence:

> Otherwise, the best
> prefetch location is always CPU because GPU can not coherently map vram
> of other GPUs through PCIe.


>
> Signed-off-by: Philip Yang 
> ---
>  drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 35 +++-
>  1 file changed, 19 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c 
> b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> index f811a3a24cd2..5bd51a15fb00 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> @@ -2719,22 +2719,26 @@ svm_range_add(struct kfd_process *p, uint64_t start, 
> uint64_t size,
>   return 0;
>  }
>  
> -/* svm_range_best_prefetch_location - decide the best prefetch location
> +/**
> + * svm_range_best_prefetch_location - decide the best prefetch location
>   * @prange: svm range structure
>   *
>   * For xnack off:
> - * If range map to single GPU, the best acutal location is prefetch loc, 
> which
> + * If range map to single GPU, the best prefetch location is prefetch_loc, 
> which
>   * can be CPU or GPU.
>   *
> - * If range map to multiple GPUs, only if mGPU connection on xgmi same hive,
> - * the best actual location could be prefetch_loc GPU. If mGPU connection on
> - * PCIe, the best actual location is always CPU, because GPU cannot access 
> vram
> - * of other GPUs, assuming PCIe small bar (large bar support is not 
> upstream).
> + * If range is ACCESS or ACCESS_IN_PLACE by mGPUs, only if mGPU connection on
> + * XGMI same hive, the best prefetch location is prefetch_loc GPU, othervise
> + * the best prefetch location is always CPU, because GPU can not map vram of
> + * other GPUs, assuming PCIe small bar (large bar support is not upstream).

Same as above. With that fixed, the patch is

Reviewed-by: Felix Kuehling 


>   *
>   * For xnack on:
> - * The best actual location is prefetch location. If mGPU connection on xgmi
> - * same hive, range map to multiple GPUs. Otherwise, the range only map to
> - * actual location GPU. Other GPU access vm fault will trigger migration.
> + * If range is not ACCESS_IN_PLACE by mGPUs, the best prefetch location is
> + * prefetch_loc, other GPU access will generate vm fault and trigger 
> migration.
> + *
> + * If range is ACCESS_IN_PLACE by mGPUs, only if mGPU connection on XGMI same
> + * hive, the best prefetch location is prefetch_loc GPU, otherwise the best
> + * prefetch location is always CPU, because GPU cannot map vram of other 
> GPUs.
>   *
>   * Context: Process context
>   *
> @@ -2754,11 +2758,6 @@ svm_range_best_prefetch_location(struct svm_range 
> *prange)
>  
>   p = container_of(prange->svms, struct kfd_process, svms);
>  
> - /* xnack on */
> - if (p->xnack_enabled)
> - goto out;
> -
> - /* xnack off */
>   if (!best_loc || best_loc == KFD_IOCTL_SVM_LOCATION_UNDEFINED)
>   goto out;
>  
> @@ -2768,8 +2767,12 @@ svm_range_best_prefetch_location(struct svm_range 
> *prange)
>   best_loc = 0;
>   goto out;
>   }
> - bitmap_or(bitmap, prange->bitmap_access, prange->bitmap_aip,
> -   MAX_GPU_INSTANCE);
> +
> + if (p->xnack_enabled)
> + bitmap_copy(bitmap, prange->bitmap_aip, MAX_GPU_INSTANCE);
> + else
> + bitmap_or(bitmap, prange->bitmap_access, prange->bitmap_aip,
> +   MAX_GPU_INSTANCE);
>  
>   for_each_set_bit(gpuidx, bitmap, MAX_GPU_INSTANCE) {
>   pdd = kfd_process_device_from_gpuidx(p, gpuidx);


[PATCH v4] drm/amd/amdgpu embed hw_fence into amdgpu_job

2021-08-09 Thread Jingwen Chen
From: Jack Zhang 

Why: Previously the hw fence was allocated separately from the job.
This caused historical lifetime issues and corner cases.
The ideal situation is for the fence to manage both the job's and the
fence's lifetime, simplifying the design of the gpu-scheduler.

How:
We propose to embed the hw_fence into amdgpu_job.
1. Normal job submission is covered by this method.
2. For ib_test and submissions without a parent job, keep the
legacy way of creating a hw fence separately.
v2:
use AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT to show that the fence is
embedded in a job.
v3:
remove the redundant ring variable in amdgpu_job.
v4:
add TDR sequence support for this feature. Add a job_run_counter to
indicate whether a job is a resubmitted job.

Signed-off-by: Jingwen Chen 
Signed-off-by: Jack Zhang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c  |  1 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  | 12 +++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c   | 73 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 39 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h|  5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  |  2 +-
 9 files changed, 108 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
index 7b46ba551cb2..3003ee1c9487 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
@@ -714,7 +714,6 @@ int amdgpu_amdkfd_submit_ib(struct kgd_dev *kgd, enum 
kgd_engine_type engine,
ret = dma_fence_wait(f, false);
 
 err_ib_sched:
-   dma_fence_put(f);
amdgpu_job_free(job);
 err:
return ret;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 536005bff24a..277128846dd1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1414,7 +1414,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct 
amdgpu_ring *ring)
continue;
}
job = to_amdgpu_job(s_job);
-   if (preempted && job->fence == fence)
+   if (preempted && (&job->hw_fence) == fence)
/* mark the job as preempted */
job->preemption_status |= AMDGPU_IB_PREEMPTED;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 9e53ff851496..ade2fa07a50a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4447,7 +4447,7 @@ int amdgpu_device_mode1_reset(struct amdgpu_device *adev)
 int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 struct amdgpu_reset_context *reset_context)
 {
-   int i, r = 0;
+   int i, j, r = 0;
struct amdgpu_job *job = NULL;
bool need_full_reset =
test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags);
@@ -4471,6 +4471,16 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device 
*adev,
if (!ring || !ring->sched.thread)
continue;
 
+   /*clear job fence from fence drv to avoid force_completion
+*leave NULL and vm flush fence in fence drv */
+   for (j = 0; j <= ring->fence_drv.num_fences_mask; j ++) {
+   struct dma_fence *old,**ptr;
+   ptr = &ring->fence_drv.fences[j];
+   old = rcu_dereference_protected(*ptr, 1);
+   if (old && test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, 
&old->flags)) {
+   RCU_INIT_POINTER(*ptr, NULL);
+   }
+   }
/* after all hw jobs are reset, hw fence is meaningless, so 
force_completion */
amdgpu_fence_driver_force_completion(ring);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 7495911516c2..a8302e324110 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -129,30 +129,50 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring)
  *
  * @ring: ring the fence is associated with
  * @f: resulting fence object
+ * @job: job the fence is embeded in
  * @flags: flags to pass into the subordinate .emit_fence() call
  *
  * Emits a fence command on the requested ring (all asics).
  * Returns 0 on success, -ENOMEM on failure.
  */
-int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
+int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct 
amdgpu_job *job,
  unsigned flags)
 {
struct amdgpu_device *adev = ring->adev;
-   struct amdgpu_fence *fence;
+   struc

RE: [PATCH] drm/amdgpu: handle VCN instances when harvesting (v2)

2021-08-09 Thread Chen, Guchun
[Public]

Reviewed-by: Guchun Chen 

Regards,
Guchun

-Original Message-
From: amd-gfx  On Behalf Of Alex Deucher
Sent: Tuesday, August 10, 2021 11:03 AM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Zhu, James 

Subject: [PATCH] drm/amdgpu: handle VCN instances when harvesting (v2)

There may be multiple instances and only one is harvested.

v2: fix typo in commit message

Fixes: 83a0b8639185 ("drm/amdgpu: add judgement when add ip blocks (v2)")
Bug: 
https://gitlab.freedesktop.org/drm/amd/-/issues/1673
Reviewed-by: James Zhu 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index 43e7b61d1c5c..ada7bc19118a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -299,6 +299,9 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device 
*adev)
  ip->major, ip->minor,
  ip->revision);
 
+   if (le16_to_cpu(ip->hw_id) == VCN_HWID)
+   adev->vcn.num_vcn_inst++;
+
for (k = 0; k < num_base_address; k++) {
/*
 * convert the endianness of base addresses in 
place, @@ -385,7 +388,7 @@ void amdgpu_discovery_harvest_ip(struct 
amdgpu_device *adev)  {
struct binary_header *bhdr;
struct harvest_table *harvest_info;
-   int i;
+   int i, vcn_harvest_count = 0;
 
bhdr = (struct binary_header *)adev->mman.discovery_bin;
harvest_info = (struct harvest_table *)(adev->mman.discovery_bin + @@ 
-397,8 +400,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
 
switch (le32_to_cpu(harvest_info->list[i].hw_id)) {
case VCN_HWID:
-   adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
-   adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
+   vcn_harvest_count++;
break;
case DMU_HWID:
adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK; @@ 
-407,6 +409,10 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
break;
}
}
+   if (vcn_harvest_count == adev->vcn.num_vcn_inst) {
+   adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
+   adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
+   }
 }
 
 int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
--
2.31.1


[PATCH] drm/amdgpu: handle VCN instances when harvesting (v2)

2021-08-09 Thread Alex Deucher
There may be multiple instances and only one is harvested.

v2: fix typo in commit message

Fixes: 83a0b8639185 ("drm/amdgpu: add judgement when add ip blocks (v2)")
Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/1673
Reviewed-by: James Zhu 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index 43e7b61d1c5c..ada7bc19118a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -299,6 +299,9 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device 
*adev)
  ip->major, ip->minor,
  ip->revision);
 
+   if (le16_to_cpu(ip->hw_id) == VCN_HWID)
+   adev->vcn.num_vcn_inst++;
+
for (k = 0; k < num_base_address; k++) {
/*
 * convert the endianness of base addresses in 
place,
@@ -385,7 +388,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
 {
struct binary_header *bhdr;
struct harvest_table *harvest_info;
-   int i;
+   int i, vcn_harvest_count = 0;
 
bhdr = (struct binary_header *)adev->mman.discovery_bin;
harvest_info = (struct harvest_table *)(adev->mman.discovery_bin +
@@ -397,8 +400,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
 
switch (le32_to_cpu(harvest_info->list[i].hw_id)) {
case VCN_HWID:
-   adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
-   adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
+   vcn_harvest_count++;
break;
case DMU_HWID:
adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK;
@@ -407,6 +409,10 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device 
*adev)
break;
}
}
+   if (vcn_harvest_count == adev->vcn.num_vcn_inst) {
+   adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
+   adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
+   }
 }
 
 int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
-- 
2.31.1



RE: [PATCH] drm/amdgpu: handle VCN instances when harvesting

2021-08-09 Thread Chen, Guchun
[Public]

A spelling typo in commit body.

There may be multiple instances an only one is harvested.

s/an/and

Regards,
Guchun

-Original Message-
From: amd-gfx  On Behalf Of Alex Deucher
Sent: Tuesday, August 10, 2021 10:05 AM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Zhu, James 

Subject: [PATCH] drm/amdgpu: handle VCN instances when harvesting

There may be multiple instances an only one is harvested.

Fixes: 83a0b8639185 ("drm/amdgpu: add judgement when add ip blocks (v2)")
Bug: 
https://gitlab.freedesktop.org/drm/amd/-/issues/1673
Reviewed-by: James Zhu 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index 43e7b61d1c5c..ada7bc19118a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -299,6 +299,9 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device 
*adev)
  ip->major, ip->minor,
  ip->revision);
 
+   if (le16_to_cpu(ip->hw_id) == VCN_HWID)
+   adev->vcn.num_vcn_inst++;
+
for (k = 0; k < num_base_address; k++) {
/*
 * convert the endianness of base addresses in 
place, @@ -385,7 +388,7 @@ void amdgpu_discovery_harvest_ip(struct 
amdgpu_device *adev)  {
struct binary_header *bhdr;
struct harvest_table *harvest_info;
-   int i;
+   int i, vcn_harvest_count = 0;
 
bhdr = (struct binary_header *)adev->mman.discovery_bin;
harvest_info = (struct harvest_table *)(adev->mman.discovery_bin + @@ 
-397,8 +400,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
 
switch (le32_to_cpu(harvest_info->list[i].hw_id)) {
case VCN_HWID:
-   adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
-   adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
+   vcn_harvest_count++;
break;
case DMU_HWID:
adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK; @@ 
-407,6 +409,10 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
break;
}
}
+   if (vcn_harvest_count == adev->vcn.num_vcn_inst) {
+   adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
+   adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
+   }
 }
 
 int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
--
2.31.1


[PATCH] drm/amd/display: remove variable backlight

2021-08-09 Thread zhaoxiao
The variable backlight is assigned a value that is then simply
returned, so the intermediate variable serves no purpose. Clean up the
code by removing it.

Signed-off-by: zhaoxiao 
---
 drivers/gpu/drm/amd/display/dc/dce/dce_abm.c | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c 
b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
index 874b132fe1d7..0808433185f8 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
@@ -177,23 +177,21 @@ static void dce_abm_init(struct abm *abm, uint32_t 
backlight)
 static unsigned int dce_abm_get_current_backlight(struct abm *abm)
 {
struct dce_abm *abm_dce = TO_DCE_ABM(abm);
-   unsigned int backlight = REG_READ(BL1_PWM_CURRENT_ABM_LEVEL);
 
/* return backlight in hardware format which is unsigned 17 bits, with
 * 1 bit integer and 16 bit fractional
 */
-   return backlight;
+   return REG_READ(BL1_PWM_CURRENT_ABM_LEVEL);
 }
 
 static unsigned int dce_abm_get_target_backlight(struct abm *abm)
 {
struct dce_abm *abm_dce = TO_DCE_ABM(abm);
-   unsigned int backlight = REG_READ(BL1_PWM_TARGET_ABM_LEVEL);
 
/* return backlight in hardware format which is unsigned 17 bits, with
 * 1 bit integer and 16 bit fractional
 */
-   return backlight;
+   return REG_READ(BL1_PWM_TARGET_ABM_LEVEL);
 }
 
 static bool dce_abm_set_level(struct abm *abm, uint32_t level)
-- 
2.20.1





Re: [PATCH 06/11] x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()

2021-08-09 Thread Kuppuswamy, Sathyanarayanan

On 8/9/21 2:59 PM, Tom Lendacky wrote:
>> Not sure how TDX will handle AP booting, are you sure it needs this
>> special setup as well? Otherwise a check for SEV-ES would be better
>> instead of the generic PATTR_GUEST_PROT_STATE.
>
> Yes, I'm not sure either. I figure that change can be made, if needed, as
> part of the TDX support.

We don't plan to set PROT_STATE, so it does not affect TDX.
For SMP, we use the MADT ACPI table for AP booting.

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer


[PATCH] drm/amdgpu: handle VCN instances when harvesting

2021-08-09 Thread Alex Deucher
There may be multiple instances an only one is harvested.

Fixes: 83a0b8639185 ("drm/amdgpu: add judgement when add ip blocks (v2)")
Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/1673
Reviewed-by: James Zhu 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index 43e7b61d1c5c..ada7bc19118a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -299,6 +299,9 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device 
*adev)
  ip->major, ip->minor,
  ip->revision);
 
+   if (le16_to_cpu(ip->hw_id) == VCN_HWID)
+   adev->vcn.num_vcn_inst++;
+
for (k = 0; k < num_base_address; k++) {
/*
 * convert the endianness of base addresses in 
place,
@@ -385,7 +388,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
 {
struct binary_header *bhdr;
struct harvest_table *harvest_info;
-   int i;
+   int i, vcn_harvest_count = 0;
 
bhdr = (struct binary_header *)adev->mman.discovery_bin;
harvest_info = (struct harvest_table *)(adev->mman.discovery_bin +
@@ -397,8 +400,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
 
switch (le32_to_cpu(harvest_info->list[i].hw_id)) {
case VCN_HWID:
-   adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
-   adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
+   vcn_harvest_count++;
break;
case DMU_HWID:
adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK;
@@ -407,6 +409,10 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device 
*adev)
break;
}
}
+   if (vcn_harvest_count == adev->vcn.num_vcn_inst) {
+   adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
+   adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
+   }
 }
 
 int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
-- 
2.31.1



[PATCH v2] drm/amdkfd: AIP mGPUs best prefetch location for xnack on

2021-08-09 Thread Philip Yang
For xnack on, if the range is ACCESS or ACCESS_IN_PLACE (AIP) for a
single GPU, or the range is ACCESS_IN_PLACE for mGPUs and all mGPUs are
connected on the same XGMI hive, the best prefetch location is the
prefetch_loc GPU. Otherwise, the best prefetch location is always the
CPU, because a GPU cannot map the vram of other GPUs through small-bar
PCIe.

Signed-off-by: Philip Yang 
---
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 35 +++-
 1 file changed, 19 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index f811a3a24cd2..5bd51a15fb00 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -2719,22 +2719,26 @@ svm_range_add(struct kfd_process *p, uint64_t start, uint64_t size,
return 0;
 }
 
-/* svm_range_best_prefetch_location - decide the best prefetch location
+/**
+ * svm_range_best_prefetch_location - decide the best prefetch location
  * @prange: svm range structure
  *
  * For xnack off:
- * If range map to single GPU, the best acutal location is prefetch loc, which
+ * If range map to single GPU, the best prefetch location is prefetch_loc, which
  * can be CPU or GPU.
  *
- * If range map to multiple GPUs, only if mGPU connection on xgmi same hive,
- * the best actual location could be prefetch_loc GPU. If mGPU connection on
- * PCIe, the best actual location is always CPU, because GPU cannot access vram
- * of other GPUs, assuming PCIe small bar (large bar support is not upstream).
+ * If range is ACCESS or ACCESS_IN_PLACE by mGPUs, only if mGPU connection on
+ * XGMI same hive, the best prefetch location is prefetch_loc GPU, otherwise
+ * the best prefetch location is always CPU, because GPU can not map vram of
+ * other GPUs, assuming PCIe small bar (large bar support is not upstream).
  *
  * For xnack on:
- * The best actual location is prefetch location. If mGPU connection on xgmi
- * same hive, range map to multiple GPUs. Otherwise, the range only map to
- * actual location GPU. Other GPU access vm fault will trigger migration.
+ * If range is not ACCESS_IN_PLACE by mGPUs, the best prefetch location is
+ * prefetch_loc, other GPU access will generate vm fault and trigger migration.
+ *
+ * If range is ACCESS_IN_PLACE by mGPUs, only if mGPU connection on XGMI same
+ * hive, the best prefetch location is prefetch_loc GPU, otherwise the best
+ * prefetch location is always CPU, because GPU cannot map vram of other GPUs.
  *
  * Context: Process context
  *
@@ -2754,11 +2758,6 @@ svm_range_best_prefetch_location(struct svm_range *prange)
 
p = container_of(prange->svms, struct kfd_process, svms);
 
-   /* xnack on */
-   if (p->xnack_enabled)
-   goto out;
-
-   /* xnack off */
if (!best_loc || best_loc == KFD_IOCTL_SVM_LOCATION_UNDEFINED)
goto out;
 
@@ -2768,8 +2767,12 @@ svm_range_best_prefetch_location(struct svm_range *prange)
best_loc = 0;
goto out;
}
-   bitmap_or(bitmap, prange->bitmap_access, prange->bitmap_aip,
- MAX_GPU_INSTANCE);
+
+   if (p->xnack_enabled)
+   bitmap_copy(bitmap, prange->bitmap_aip, MAX_GPU_INSTANCE);
+   else
+   bitmap_or(bitmap, prange->bitmap_access, prange->bitmap_aip,
+ MAX_GPU_INSTANCE);
 
for_each_set_bit(gpuidx, bitmap, MAX_GPU_INSTANCE) {
pdd = kfd_process_device_from_gpuidx(p, gpuidx);
-- 
2.17.1



Re: [PATCH 00/11] Implement generic prot_guest_has() helper function

2021-08-09 Thread Tom Lendacky
On 8/8/21 8:41 PM, Kuppuswamy, Sathyanarayanan wrote:
> Hi Tom,
> 
> On 7/27/21 3:26 PM, Tom Lendacky wrote:
>> This patch series provides a generic helper function, prot_guest_has(),
>> to replace the sme_active(), sev_active(), sev_es_active() and
>> mem_encrypt_active() functions.
>>
>> It is expected that as new protected virtualization technologies are
>> added to the kernel, they can all be covered by a single function call
>> instead of a collection of specific function calls all called from the
>> same locations.
>>
>> The powerpc and s390 patches have been compile tested only. Can the
>> folks copied on this series verify that nothing breaks for them.
> 
> With this patch set, select ARCH_HAS_PROTECTED_GUEST and set
> CONFIG_AMD_MEM_ENCRYPT=n, creates following error.
> 
> ld: arch/x86/mm/ioremap.o: in function `early_memremap_is_setup_data':
> arch/x86/mm/ioremap.c:672: undefined reference to `early_memremap_decrypted'
> 
> It looks like early_memremap_is_setup_data() is not protected with
> appropriate config.

Ok, thanks for finding that. I'll fix that.

Thanks,
Tom

> 
> 
>>
>> Cc: Andi Kleen 
>> Cc: Andy Lutomirski 
>> Cc: Ard Biesheuvel 
>> Cc: Baoquan He 
>> Cc: Benjamin Herrenschmidt 
>> Cc: Borislav Petkov 
>> Cc: Christian Borntraeger 
>> Cc: Daniel Vetter 
>> Cc: Dave Hansen 
>> Cc: Dave Young 
>> Cc: David Airlie 
>> Cc: Heiko Carstens 
>> Cc: Ingo Molnar 
>> Cc: Joerg Roedel 
>> Cc: Maarten Lankhorst 
>> Cc: Maxime Ripard 
>> Cc: Michael Ellerman 
>> Cc: Paul Mackerras 
>> Cc: Peter Zijlstra 
>> Cc: Thomas Gleixner 
>> Cc: Thomas Zimmermann 
>> Cc: Vasily Gorbik 
>> Cc: VMware Graphics 
>> Cc: Will Deacon 
>>
>> ---
>>
>> Patches based on:
>>   
>> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
>> master
>>    commit 79e920060fa7 ("Merge branch 'WIP/fixes'")
>>
>> Tom Lendacky (11):
>>    mm: Introduce a function to check for virtualization protection
>>  features
>>    x86/sev: Add an x86 version of prot_guest_has()
>>    powerpc/pseries/svm: Add a powerpc version of prot_guest_has()
>>    x86/sme: Replace occurrences of sme_active() with prot_guest_has()
>>    x86/sev: Replace occurrences of sev_active() with prot_guest_has()
>>    x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()
>>    treewide: Replace the use of mem_encrypt_active() with
>>  prot_guest_has()
>>    mm: Remove the now unused mem_encrypt_active() function
>>    x86/sev: Remove the now unused mem_encrypt_active() function
>>    powerpc/pseries/svm: Remove the now unused mem_encrypt_active()
>>  function
>>    s390/mm: Remove the now unused mem_encrypt_active() function
>>
>>   arch/Kconfig   |  3 ++
>>   arch/powerpc/include/asm/mem_encrypt.h |  5 --
>>   arch/powerpc/include/asm/protected_guest.h | 30 +++
>>   arch/powerpc/platforms/pseries/Kconfig |  1 +
>>   arch/s390/include/asm/mem_encrypt.h    |  2 -
>>   arch/x86/Kconfig   |  1 +
>>   arch/x86/include/asm/kexec.h   |  2 +-
>>   arch/x86/include/asm/mem_encrypt.h | 13 +
>>   arch/x86/include/asm/protected_guest.h | 27 ++
>>   arch/x86/kernel/crash_dump_64.c    |  4 +-
>>   arch/x86/kernel/head64.c   |  4 +-
>>   arch/x86/kernel/kvm.c  |  3 +-
>>   arch/x86/kernel/kvmclock.c |  4 +-
>>   arch/x86/kernel/machine_kexec_64.c | 19 +++
>>   arch/x86/kernel/pci-swiotlb.c  |  9 ++--
>>   arch/x86/kernel/relocate_kernel_64.S   |  2 +-
>>   arch/x86/kernel/sev.c  |  6 +--
>>   arch/x86/kvm/svm/svm.c |  3 +-
>>   arch/x86/mm/ioremap.c  | 16 +++---
>>   arch/x86/mm/mem_encrypt.c  | 60 +++---
>>   arch/x86/mm/mem_encrypt_identity.c |  3 +-
>>   arch/x86/mm/pat/set_memory.c   |  3 +-
>>   arch/x86/platform/efi/efi_64.c |  9 ++--
>>   arch/x86/realmode/init.c   |  8 +--
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c    |  4 +-
>>   drivers/gpu/drm/drm_cache.c    |  4 +-
>>   drivers/gpu/drm/vmwgfx/vmwgfx_drv.c    |  4 +-
>>   drivers/gpu/drm/vmwgfx/vmwgfx_msg.c    |  6 +--
>>   drivers/iommu/amd/init.c   |  7 +--
>>   drivers/iommu/amd/iommu.c  |  3 +-
>>   drivers/iommu/amd/iommu_v2.c   |  3 +-
>>   drivers/iommu/iommu.c  |  3 +-
>>   fs/proc/vmcore.c   |  6 +--
>>   include/linux/mem_encrypt.h    |  4 --
>>   incl

Re: [PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-08-09 Thread Tom Lendacky
On 8/2/21 7:42 AM, Christophe Leroy wrote:
> 
> 
> Le 28/07/2021 à 00:26, Tom Lendacky a écrit :
>> Replace occurrences of mem_encrypt_active() with calls to prot_guest_has()
>> with the PATTR_MEM_ENCRYPT attribute.
> 
> 
> What about
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20210730114231.23445-1-will@kernel.org/ ?

Ah, looks like that just went into the PPC tree and isn't part of the tip
tree. I'll have to look into how to handle that one.

Thanks,
Tom

> 
> Christophe
> 
> 
>>
>> Cc: Thomas Gleixner 
>> Cc: Ingo Molnar 
>> Cc: Borislav Petkov 
>> Cc: Dave Hansen 
>> Cc: Andy Lutomirski 
>> Cc: Peter Zijlstra 
>> Cc: David Airlie 
>> Cc: Daniel Vetter 
>> Cc: Maarten Lankhorst 
>> Cc: Maxime Ripard 
>> Cc: Thomas Zimmermann 
>> Cc: VMware Graphics 
>> Cc: Joerg Roedel 
>> Cc: Will Deacon 
>> Cc: Dave Young 
>> Cc: Baoquan He 
>> Signed-off-by: Tom Lendacky 
>> ---
>>   arch/x86/kernel/head64.c    | 4 ++--
>>   arch/x86/mm/ioremap.c   | 4 ++--
>>   arch/x86/mm/mem_encrypt.c   | 5 ++---
>>   arch/x86/mm/pat/set_memory.c    | 3 ++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +++-
>>   drivers/gpu/drm/drm_cache.c | 4 ++--
>>   drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 ++--
>>   drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +++---
>>   drivers/iommu/amd/iommu.c   | 3 ++-
>>   drivers/iommu/amd/iommu_v2.c    | 3 ++-
>>   drivers/iommu/iommu.c   | 3 ++-
>>   fs/proc/vmcore.c    | 6 +++---
>>   kernel/dma/swiotlb.c    | 4 ++--
>>   13 files changed, 29 insertions(+), 24 deletions(-)
>>
>> diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
>> index de01903c3735..cafed6456d45 100644
>> --- a/arch/x86/kernel/head64.c
>> +++ b/arch/x86/kernel/head64.c
>> @@ -19,7 +19,7 @@
>>   #include 
>>   #include 
>>   #include 
>> -#include 
>> +#include 
>>   #include 
>>     #include 
>> @@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long
>> physaddr,
>>    * there is no need to zero it after changing the memory encryption
>>    * attribute.
>>    */
>> -    if (mem_encrypt_active()) {
>> +    if (prot_guest_has(PATTR_MEM_ENCRYPT)) {
>>   vaddr = (unsigned long)__start_bss_decrypted;
>>   vaddr_end = (unsigned long)__end_bss_decrypted;
>>   for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
>> diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
>> index 0f2d5ace5986..5e1c1f5cbbe8 100644
>> --- a/arch/x86/mm/ioremap.c
>> +++ b/arch/x86/mm/ioremap.c
>> @@ -693,7 +693,7 @@ static bool __init
>> early_memremap_is_setup_data(resource_size_t phys_addr,
>>   bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned
>> long size,
>>    unsigned long flags)
>>   {
>> -    if (!mem_encrypt_active())
>> +    if (!prot_guest_has(PATTR_MEM_ENCRYPT))
>>   return true;
>>     if (flags & MEMREMAP_ENC)
>> @@ -723,7 +723,7 @@ pgprot_t __init
>> early_memremap_pgprot_adjust(resource_size_t phys_addr,
>>   {
>>   bool encrypted_prot;
>>   -    if (!mem_encrypt_active())
>> +    if (!prot_guest_has(PATTR_MEM_ENCRYPT))
>>   return prot;
>>     encrypted_prot = true;
>> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
>> index 451de8e84fce..0f1533dbe81c 100644
>> --- a/arch/x86/mm/mem_encrypt.c
>> +++ b/arch/x86/mm/mem_encrypt.c
>> @@ -364,8 +364,7 @@ int __init early_set_memory_encrypted(unsigned long
>> vaddr, unsigned long size)
>>   /*
>>    * SME and SEV are very similar but they are not the same, so there are
>>    * times that the kernel will need to distinguish between SME and SEV.
>> The
>> - * sme_active() and sev_active() functions are used for this.  When a
>> - * distinction isn't needed, the mem_encrypt_active() function can be
>> used.
>> + * sme_active() and sev_active() functions are used for this.
>>    *
>>    * The trampoline code is a good example for this requirement.  Before
>>    * paging is activated, SME will access all memory as decrypted, but SEV
>> @@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void)
>>    * The unused memory range was mapped decrypted, change the
>> encryption
>>    * attribute from decrypted to encrypted before freeing it.
>>    */
>> -    if (mem_encrypt_active()) {
>> +    if (sme_me_mask) {
>>   r = set_memory_encrypted(vaddr, npages);
>>   if (r) {
>>   pr_warn("failed to free unused decrypted pages\n");
>> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c

Re: [PATCH 06/11] x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()

2021-08-09 Thread Tom Lendacky
On 8/2/21 5:45 AM, Joerg Roedel wrote:
> On Tue, Jul 27, 2021 at 05:26:09PM -0500, Tom Lendacky wrote:
>> @@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct trampoline_header *th)
>>  if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
>>  th->flags |= TH_FLAGS_SME_ACTIVE;
>>  
>> -if (sev_es_active()) {
>> +if (prot_guest_has(PATTR_GUEST_PROT_STATE)) {
>>  /*
>>   * Skip the call to verify_cpu() in secondary_startup_64 as it
>>   * will cause #VC exceptions when the AP can't handle them yet.
> 
> Not sure how TDX will handle AP booting, are you sure it needs this
> special setup as well? Otherwise a check for SEV-ES would be better
> instead of the generic PATTR_GUEST_PROT_STATE.

Yes, I'm not sure either. I figure that change can be made, if needed, as
part of the TDX support.

Thanks,
Tom

> 
> Regards,
> 
> Joerg
> 


Re: [PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-08-09 Thread Tom Lendacky
On 7/30/21 5:34 PM, Sean Christopherson wrote:
> On Tue, Jul 27, 2021, Tom Lendacky wrote:
>> @@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void)
>>   * The unused memory range was mapped decrypted, change the encryption
>>   * attribute from decrypted to encrypted before freeing it.
>>   */
>> -if (mem_encrypt_active()) {
>> +if (sme_me_mask) {
> 
> Any reason this uses sme_me_mask?  The helper it calls, 
> __set_memory_enc_dec(),
> uses prot_guest_has(PATTR_MEM_ENCRYPT) so I assume it's available?

Probably just a slip on my part. I was debating at one point calling the
helper vs. referencing the variables/functions directly in the
mem_encrypt.c file.

Thanks,
Tom

> 
>>  r = set_memory_encrypted(vaddr, npages);
>>  if (r) {
>>  pr_warn("failed to free unused decrypted pages\n");
> 


[PATCH] drm/amdkfd: CWSR with software scheduler

2021-08-09 Thread Mukul Joshi
This patch adds support to program trap handler settings
when loading driver with software scheduler (sched_policy=2).

Signed-off-by: Mukul Joshi 
Suggested-by: Jay Cornwall 
---
 .../drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c| 31 +
 .../drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c  | 31 +
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 33 ++-
 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 20 +--
 .../gpu/drm/amd/include/kgd_kfd_interface.h   |  3 ++
 5 files changed, 115 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c
index 491acdf92f73..960acf68150a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.c
@@ -560,6 +560,9 @@ static int kgd_hqd_destroy(struct kgd_dev *kgd, void *mqd,
case KFD_PREEMPT_TYPE_WAVEFRONT_RESET:
type = RESET_WAVES;
break;
+   case KFD_PREEMPT_TYPE_WAVEFRONT_SAVE:
+   type = SAVE_WAVES;
+   break;
default:
type = DRAIN_PIPE;
break;
@@ -754,6 +757,33 @@ static void set_vm_context_page_table_base(struct kgd_dev *kgd, uint32_t vmid,
adev->gfxhub.funcs->setup_vm_pt_regs(adev, vmid, page_table_base);
 }
 
+static void program_trap_handler_settings(struct kgd_dev *kgd,
+   uint32_t vmid, uint64_t tba_addr, uint64_t tma_addr)
+{
+   struct amdgpu_device *adev = get_amdgpu_device(kgd);
+
+   lock_srbm(kgd, 0, 0, 0, vmid);
+
+   /*
+* Program TBA registers
+*/
+   WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_SHADER_TBA_LO),
+   lower_32_bits(tba_addr >> 8));
+   WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_SHADER_TBA_HI),
+   upper_32_bits(tba_addr >> 8) |
+   (1 << SQ_SHADER_TBA_HI__TRAP_EN__SHIFT));
+
+   /*
+* Program TMA registers
+*/
+   WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_SHADER_TMA_LO),
+   lower_32_bits(tma_addr >> 8));
+   WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_SHADER_TMA_HI),
+   upper_32_bits(tma_addr >> 8));
+
+   unlock_srbm(kgd);
+}
+
 const struct kfd2kgd_calls gfx_v10_kfd2kgd = {
.program_sh_mem_settings = kgd_program_sh_mem_settings,
.set_pasid_vmid_mapping = kgd_set_pasid_vmid_mapping,
@@ -774,4 +804,5 @@ const struct kfd2kgd_calls gfx_v10_kfd2kgd = {
.get_atc_vmid_pasid_mapping_info =
get_atc_vmid_pasid_mapping_info,
.set_vm_context_page_table_base = set_vm_context_page_table_base,
+   .program_trap_handler_settings = program_trap_handler_settings,
 };
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c
index 1f5620cc3570..dac0d751d5af 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.c
@@ -537,6 +537,9 @@ static int hqd_destroy_v10_3(struct kgd_dev *kgd, void *mqd,
case KFD_PREEMPT_TYPE_WAVEFRONT_RESET:
type = RESET_WAVES;
break;
+   case KFD_PREEMPT_TYPE_WAVEFRONT_SAVE:
+   type = SAVE_WAVES;
+   break;
default:
type = DRAIN_PIPE;
break;
@@ -658,6 +661,33 @@ static void set_vm_context_page_table_base_v10_3(struct kgd_dev *kgd, uint32_t v
adev->gfxhub.funcs->setup_vm_pt_regs(adev, vmid, page_table_base);
 }
 
+static void program_trap_handler_settings_v10_3(struct kgd_dev *kgd,
+   uint32_t vmid, uint64_t tba_addr, uint64_t tma_addr)
+{
+   struct amdgpu_device *adev = get_amdgpu_device(kgd);
+
+   lock_srbm(kgd, 0, 0, 0, vmid);
+
+   /*
+* Program TBA registers
+*/
+   WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_SHADER_TBA_LO),
+   lower_32_bits(tba_addr >> 8));
+   WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_SHADER_TBA_HI),
+   upper_32_bits(tba_addr >> 8) |
+   (1 << SQ_SHADER_TBA_HI__TRAP_EN__SHIFT));
+
+   /*
+* Program TMA registers
+*/
+   WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_SHADER_TMA_LO),
+   lower_32_bits(tma_addr >> 8));
+   WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_SHADER_TMA_HI),
+upper_32_bits(tma_addr >> 8));
+
+   unlock_srbm(kgd);
+}
+
 #if 0
 uint32_t enable_debug_trap_v10_3(struct kgd_dev *kgd,
uint32_t trap_debug_wave_launch_mode,
@@ -820,6 +850,7 @@ const struct kfd2kgd_calls gfx_v10_3_kfd2kgd = {
.address_watch_get_offset = address_watch_get_offset_v10_3,
.get_atc_vmid_pasid_mapping_info = NULL,
.set_vm_context_page_table_base = set_vm_context_page_table_base_v10_3,
+   .program_trap_handler_settings = program_trap_handl

Re: [PATCH] drm/amdgpu: Removed unnecessary if statement

2021-08-09 Thread Alex Deucher
On Mon, Aug 9, 2021 at 9:59 AM Sergio Miguéns Iglesias
 wrote:
>
> There was an "if" statement that did nothing so it was removed.
>
> Signed-off-by: Sergio Miguéns Iglesias 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> index 09b048647523..5eb3869d029e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
> @@ -273,9 +273,6 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
> return 0;
>
>  out:
> -   if (abo) {
> -
> -   }
> if (fb && ret) {
> drm_gem_object_put(gobj);
> drm_framebuffer_unregister_private(fb);
> --
> 2.32.0
>


Re: [PATCH 0/4] Replace usage of sprintf with sysfs_emit in hwmgr powerplay

2021-08-09 Thread Alex Deucher
On Sun, Aug 8, 2021 at 1:33 AM Darren Powell  wrote:
>
>
> === Description ===
> Replace usage of sprintf with sysfs_emit in hwmgr powerplay
>
> === Test System ===
> * DESKTOP(AMD FX-8350 + VEGA10(687F/c3), BIOS: F2)
>  + ISO(Ubuntu 20.04.2 LTS)
>  + Kernel(5.13.0-gb1d634be9673-fdoagd5f)
>
>
> === Patch Summary ===
>linux: (g...@gitlab.freedesktop.org:agd5f) origin/amd-staging-drm-next @ 
> 2f56b0d631eb
> + 0ede8d563c58 amdgpu/pm: Replace vega10 usage of sprintf with sysfs_emit
> + 1d666a0652a1 amdgpu/pm: Replace vega12,20 usage of sprintf with 
> sysfs_emit
> + 8bad9ffba08b amdgpu/pm: Replace hwmgr smu usage of sprintf with 
> sysfs_emit
> + 773733df2f32 amdgpu/pm: Replace amdgpu_pm usage of sprintf with 
> sysfs_emit
>
>
> === General Test for each platform ===
> AMDGPU_PCI_ADDR=`lspci -nn | grep "VGA\|Display" | cut -d " " -f 1`
> AMDGPU_HWMON=`ls -la /sys/class/hwmon | grep $AMDGPU_PCI_ADDR | awk '{print 
> $9}'`
> HWMON_DIR=/sys/class/hwmon/${AMDGPU_HWMON}
> LOGFILE=pp_printf.test.log
>
> lspci -nn | grep "VGA\|Display"  > $LOGFILE
> FILES="pp_dpm_sclk
> pp_features
> pp_power_profile_mode "
>
> for f in $FILES
> do
>   echo === $f === >> $LOGFILE
>   cat $HWMON_DIR/device/$f >> $LOGFILE
> done
> cat $LOGFILE
>
> Darren Powell (4):
>   amdgpu/pm: Replace vega10 usage of sprintf with sysfs_emit
>   amdgpu/pm: Replace vega12,20 usage of sprintf with sysfs_emit
>   amdgpu/pm: Replace hwmgr smu usage of sprintf with sysfs_emit
>   amdgpu/pm: Replace amdgpu_pm usage of sprintf with sysfs_emit

Series is:
Reviewed-by: Alex Deucher 

>
>  drivers/gpu/drm/amd/pm/amdgpu_pm.c| 16 ++--
>  .../drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c  | 22 +++---
>  .../drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c   | 38 +-
>  .../drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c   |  7 +-
>  .../drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c | 38 +-
>  .../drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c | 14 ++--
>  .../drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c | 74 +--
>  7 files changed, 105 insertions(+), 104 deletions(-)
>
>
> base-commit: 2f56b0d631eba0e76cfc789d85cc5632256ad42d
> --
> 2.32.0
>


Re: [PATCH v2 0/3] Replace usage of sprintf with sysfs_emit in swsmu powerplay

2021-08-09 Thread Alex Deucher
On Sun, Aug 8, 2021 at 1:30 AM Darren Powell  wrote:
>
>
> === Description ===
> Replace usage of sprintf with sysfs_emit in swsmu powerplay
>
>   v2: rebased on 2f56b0d631eb
>
> === Test System ===
> * DESKTOP(AMD FX-8350 + NAVI10(731F/ca), BIOS: F2)
>  + ISO(Ubuntu 20.04.2 LTS)
>  + Kernel(5.13.0-gb1d634be9673-fdoagd5f)
>
>
> === Patch Summary ===
>linux: (g...@gitlab.freedesktop.org:agd5f) origin/amd-staging-drm-next @ 
> 2f56b0d631eb
> + c4a20b3363cd amdgpu/pm: Replace navi10 usage of sprintf with sysfs_emit
> + cd2e3983959b amdgpu/pm: Replace smu11 usage of sprintf with sysfs_emit
> + bd82d29a9635 amdgpu/pm: Replace smu12/13 usage of sprintf with 
> sysfs_emit
>
>
> === General Test for each platform ===
> AMDGPU_PCI_ADDR=`lspci -nn | grep "VGA\|Display" | cut -d " " -f 1`
> AMDGPU_HWMON=`ls -la /sys/class/hwmon | grep $AMDGPU_PCI_ADDR | awk '{print 
> $9}'`
> HWMON_DIR=/sys/class/hwmon/${AMDGPU_HWMON}
> LOGFILE=pp_printf.test.log
>
> lspci -nn | grep "VGA\|Display"  > $LOGFILE
> FILES="pp_dpm_sclk
> pp_sclk_od
> pp_mclk_od
> pp_dpm_pcie
> pp_od_clk_voltage
> pp_power_profile_mode "
>
> for f in $FILES
> do
>   echo === $f === >> $LOGFILE
>   cat $HWMON_DIR/device/$f >> $LOGFILE
> done
> cat $LOGFILE
>
> Darren Powell (3):
>   amdgpu/pm: Replace navi10 usage of sprintf with sysfs_emit
>   amdgpu/pm: Replace smu11 usage of sprintf with sysfs_emit
>   amdgpu/pm: Replace smu12/13 usage of sprintf with sysfs_emit

Series is:
Reviewed-by: Alex Deucher 

>
>  .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c | 26 
>  .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   | 61 ++-
>  .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   | 34 +--
>  .../gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c  | 46 +++---
>  .../gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c   | 20 +++---
>  .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c| 21 +++
>  .../drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c  | 14 ++---
>  drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c|  6 +-
>  8 files changed, 115 insertions(+), 113 deletions(-)
>
>
> base-commit: 2f56b0d631eba0e76cfc789d85cc5632256ad42d
> --
> 2.32.0
>


Re: [PATCH] drm/amdgpu: fix kernel-doc warnings on non-kernel-doc comments

2021-08-09 Thread Alex Deucher
On Sat, Aug 7, 2021 at 7:38 PM Randy Dunlap  wrote:
>
> Don't use "begin kernel-doc notation" (/**) for comments that are
> not kernel-doc. This eliminates warnings reported by the 0day bot.
>
> drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c:89: warning: This comment starts with 
> '/**', but isn't a kernel-doc comment. Refer 
> Documentation/doc-guide/kernel-doc.rst
> * This shader is used to clear VGPRS and LDS, and also write the input
> drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c:209: warning: This comment starts 
> with '/**', but isn't a kernel-doc comment. Refer 
> Documentation/doc-guide/kernel-doc.rst
> * The below shaders are used to clear SGPRS, and also write the input
> drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c:301: warning: This comment starts 
> with '/**', but isn't a kernel-doc comment. Refer 
> Documentation/doc-guide/kernel-doc.rst
> * This shader is used to clear the uninitiated sgprs after the above
>
> Fixes: 0e0036c7d13b ("drm/amdgpu: fix no full coverage issue for gprs 
> initialization")
> Signed-off-by: Randy Dunlap 
> Reported-by: kernel test robot 
> Cc: Alex Deucher 
> Cc: Christian König 
> Cc: "Pan, Xinhui" 
> Cc: Dennis Li 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c |6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> --- linux-next-20210806.orig/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c
> +++ linux-next-20210806/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c
> @@ -85,7 +85,7 @@ static const struct soc15_reg_golden gol
> SOC15_REG_GOLDEN_VALUE(GC, 0, regTCI_CNTL_3, 0xff, 0x20),
>  };
>
> -/**
> +/*
>   * This shader is used to clear VGPRS and LDS, and also write the input
>   * pattern into the write back buffer, which will be used by driver to
>   * check whether all SIMDs have been covered.
> @@ -206,7 +206,7 @@ const struct soc15_reg_entry vgpr_init_r
> { SOC15_REG_ENTRY(GC, 0, regCOMPUTE_STATIC_THREAD_MGMT_SE7), 
> 0x },
>  };
>
> -/**
> +/*
>   * The below shaders are used to clear SGPRS, and also write the input
>   * pattern into the write back buffer. The first two dispatch should be
>   * scheduled simultaneously which make sure that all SGPRS could be
> @@ -302,7 +302,7 @@ const struct soc15_reg_entry sgpr96_init
> { SOC15_REG_ENTRY(GC, 0, regCOMPUTE_STATIC_THREAD_MGMT_SE7), 
> 0x },
>  };
>
> -/**
> +/*
>   * This shader is used to clear the uninitiated sgprs after the above
>   * two dispatches, because of hardware feature, dispath 0 couldn't clear
>   * top hole sgprs. Therefore need 4 waves per SIMD to cover these sgprs


Re: [PATCH] drm/amd/display: use do-while-0 for DC_TRACE_LEVEL_MESSAGE()

2021-08-09 Thread Alex Deucher
On Sun, Aug 8, 2021 at 10:52 PM Randy Dunlap  wrote:
>
> Building with W=1 complains about an empty 'else' statement, so use the
> usual do-nothing-while-0 loop to quieten this warning.
>
> ../drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dmub_psr.c:113:53: warning: 
> suggest braces around empty body in an 'else' statement [-Wempty-body]
>   113 | *state, retry_count);
>
> Fixes: b30eda8d416c ("drm/amd/display: Add ETW log to dmub_psr_get_state")
> Signed-off-by: Randy Dunlap 
> Cc: Wyatt Wood 
> Cc: Alex Deucher 
> Cc: Christian König 
> Cc: "Pan, Xinhui" 
> Cc: Harry Wentland 
> Cc: Leo Li 
> Cc: amd-gfx@lists.freedesktop.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: David Airlie 
> Cc: Daniel Vetter 

Applied.  Thanks!

Alex

> ---
>  drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c |2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- linux-next-20210806.orig/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
> +++ linux-next-20210806/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
> @@ -29,7 +29,7 @@
>  #include "dmub/dmub_srv.h"
>  #include "core_types.h"
>
> -#define DC_TRACE_LEVEL_MESSAGE(...) /* do nothing */
> +#define DC_TRACE_LEVEL_MESSAGE(...)do {} while (0) /* do nothing */
>
>  #define MAX_PIPES 6
>


Re: [PATCHv2 2/2] drm/amd/amdgpu: add tdr support for embeded hw_fence

2021-08-09 Thread Andrey Grodzovsky



On 2021-08-05 4:31 a.m., Jingwen Chen wrote:

[Why]
After embedding the hw_fence into amdgpu_job, we need to add TDR support
for this feature.

[How]
1. Add a resubmit_flag for resubmitted jobs.
2. Clear job fences from the RCU-protected array and force-complete VM
flush fences in pre_asic_reset.
3. Skip dma_fence_get for resubmitted jobs and add a dma_fence_put
for guilty jobs.
v2:
Use a job_run_counter in amdgpu_job to replace the resubmit_flag in
drm_sched_job. When job_run_counter >= 1, the job is a resubmitted job.

Signed-off-by: Jack Zhang 
Signed-off-by: Jingwen Chen 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 12 +++-
  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c  | 13 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c|  5 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h|  3 +++
  4 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 9e53ff851496..ade2fa07a50a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4447,7 +4447,7 @@ int amdgpu_device_mode1_reset(struct amdgpu_device *adev)
  int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 struct amdgpu_reset_context *reset_context)
  {
-   int i, r = 0;
+   int i, j, r = 0;
struct amdgpu_job *job = NULL;
bool need_full_reset =
test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags);
@@ -4471,6 +4471,16 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
if (!ring || !ring->sched.thread)
continue;
  
+		/*clear job fence from fence drv to avoid force_completion
+		 *leave NULL and vm flush fence in fence drv */
+   for (j = 0; j <= ring->fence_drv.num_fences_mask; j ++) {
+   struct dma_fence *old,**ptr;
+   ptr = &ring->fence_drv.fences[j];
+   old = rcu_dereference_protected(*ptr, 1);
+   if (old && test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &old->flags)) {
+   RCU_INIT_POINTER(*ptr, NULL);
+   }
+   }
		/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
amdgpu_fence_driver_force_completion(ring);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 5e29d797a265..c9752cf794fb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -159,10 +159,15 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
}
  
  	seq = ++ring->fence_drv.sync_seq;

-   dma_fence_init(fence, &amdgpu_fence_ops,
-  &ring->fence_drv.lock,
-  adev->fence_context + ring->idx,
-  seq);
+   if (job != NULL && job->job_run_counter) {
+   /* reinit seq for resubmitted jobs */
+   fence->seqno = seq;
+   } else {
+   dma_fence_init(fence, &amdgpu_fence_ops,
+   &ring->fence_drv.lock,
+   adev->fence_context + ring->idx,
+   seq);
+   }



I think this should be in the first patch actually (and the counter too);
without it, the first patch is buggy.


  
  	if (job != NULL) {

/* mark this fence has a parent job */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 65a395060de2..19b13a65c73b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -254,6 +254,7 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
	dma_fence_set_error(finished, -ECANCELED);/* skip IB as well if VRAM lost */
  
  	if (finished->error < 0) {

+   dma_fence_put(&job->hw_fence);



I would put this check below, with the job_run_counter check



DRM_INFO("Skip scheduling IBs!\n");
} else {
r = amdgpu_ib_schedule(ring, job->num_ibs, job->ibs, job,
@@ -262,7 +263,9 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
DRM_ERROR("Error scheduling IBs (%d)\n", r);
}
  
-	dma_fence_get(fence);

+   if (!job->job_run_counter)
+   dma_fence_get(fence);
+   job->job_run_counter ++;
amdgpu_job_free_resources(job);



Here you modify code you already changed in patch 1. It looks to me
like those two patches should be squashed into one, since the changes are
directly dependent and hard to follow otherwise.

Andrey


  
  	fence = r ? ERR_PTR(r) : fence;

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index 92324c978534..1fa667f2

Re: Req: about Polaris with RKL platform

2021-08-09 Thread Alex Deucher
On Mon, Aug 9, 2021 at 9:59 AM Koba Ko  wrote:
>
> Previously, AMD had a noise issue with AMD-DG on the RKL platform, and
> AMD provided a parameter:
> #modprobe amdgpu ppfeaturemask=0xfff7bffb
>
> I thought it would be better to check and assign the value inside amdgpu.
> I have trouble determining the type of PCH (RKL or otherwise); I
> searched the amd drm driver and can't find anything about this.
> Would someone please guide me if there's an existing function?
>
> Here's a proposal: check for the RKL PCH in the amd driver. The PCH
> definitions must be split off from intel_pch.h in the i915
> folder into include/drm/intel_pch_definition.h.

Yes, something like that would work.

Alex


>
> > --- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
> > @@ -1629,7 +1629,7 @@ static void smu7_init_dpm_defaults(struct pp_hwmgr 
> > *hwmgr)
> >
> > data->mclk_dpm_key_disabled = hwmgr->feature_mask & 
> > PP_MCLK_DPM_MASK ? false : true;
> > data->sclk_dpm_key_disabled = hwmgr->feature_mask & 
> > PP_SCLK_DPM_MASK ? false : true;
> > -   data->pcie_dpm_key_disabled = hwmgr->feature_mask & 
> > PP_PCIE_DPM_MASK ? false : true;
> > +   data->pcie_dpm_key_disabled = is_rkl_pch() || !(hwmgr->feature_mask 
> > & PP_PCIE_DPM_MASK);
> > /* need to set voltage control types before EVV patching */
> > data->voltage_control = SMU7_VOLTAGE_CONTROL_NONE;
> > data->vddci_control = SMU7_VOLTAGE_CONTROL_NONE;
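The one-line change quoted above folds the `cond ? false : true` ternary into a plain boolean and ORs in the proposed PCH check. The equivalence is easy to verify in userspace C; the mask bit value and the `is_rkl_pch` stub below are illustrative stand-ins, not the real amdgpu definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PP_PCIE_DPM_MASK (1u << 2)  /* illustrative bit value, not the real amdgpu define */

/* Stub standing in for the proposed PCH probe; the real helper does not exist yet. */
static bool is_rkl_pch_stub(bool on_rkl) { return on_rkl; }

/* Old form: ternary mapping "bit set" to false ("not disabled"). */
static bool pcie_dpm_disabled_old(uint32_t feature_mask)
{
    return feature_mask & PP_PCIE_DPM_MASK ? false : true;
}

/* New form: force-disable on RKL, otherwise the same boolean as before. */
static bool pcie_dpm_disabled_new(uint32_t feature_mask, bool on_rkl)
{
    return is_rkl_pch_stub(on_rkl) || !(feature_mask & PP_PCIE_DPM_MASK);
}
```

On RKL the new expression force-disables PCIe DPM regardless of the feature mask, which is what setting the ppfeaturemask parameter by hand achieved.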


Fwd: Req: about Polaris with RKL platform

2021-08-09 Thread Koba Ko
Previously, AMD had a noise issue with AMD-DG on the RKL platform, and
AMD provided a parameter:
#modprobe amdgpu ppfeaturemask=0xfff7bffb

I thought it would be better to check and assign the value inside amdgpu.
I have trouble determining the type of PCH (RKL or otherwise); I
searched the amd drm driver and can't find anything about this.
Would someone please guide me if there's an existing function?

Here's a proposal: check for the RKL PCH in the amd driver. The PCH
definitions must be split off from intel_pch.h in the i915
folder into include/drm/intel_pch_definition.h.

> --- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
> +++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
> @@ -1629,7 +1629,7 @@ static void smu7_init_dpm_defaults(struct pp_hwmgr 
> *hwmgr)
>
> data->mclk_dpm_key_disabled = hwmgr->feature_mask & PP_MCLK_DPM_MASK 
> ? false : true;
> data->sclk_dpm_key_disabled = hwmgr->feature_mask & PP_SCLK_DPM_MASK 
> ? false : true;
> -   data->pcie_dpm_key_disabled = hwmgr->feature_mask & PP_PCIE_DPM_MASK 
> ? false : true;
> +   data->pcie_dpm_key_disabled = is_rkl_pch() || !(hwmgr->feature_mask & 
> PP_PCIE_DPM_MASK);
> /* need to set voltage control types before EVV patching */
> data->voltage_control = SMU7_VOLTAGE_CONTROL_NONE;
> data->vddci_control = SMU7_VOLTAGE_CONTROL_NONE;


Patch "drm/amdgpu: fix checking pmops when PM_SLEEP is not enabled" has been added to the 5.13-stable tree

2021-08-09 Thread gregkh


This is a note to let you know that I've just added the patch titled

drm/amdgpu: fix checking pmops when PM_SLEEP is not enabled

to the 5.13-stable tree which can be found at:

http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
 drm-amdgpu-fix-checking-pmops-when-pm_sleep-is-not-enabled.patch
and it can be found in the queue-5.13 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let  know about it.


From 5706cb3c910cc8283f344bc37a889a8d523a2c6d Mon Sep 17 00:00:00 2001
From: Randy Dunlap 
Date: Thu, 29 Jul 2021 20:03:47 -0700
Subject: drm/amdgpu: fix checking pmops when PM_SLEEP is not enabled
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Randy Dunlap 

commit 5706cb3c910cc8283f344bc37a889a8d523a2c6d upstream.

'pm_suspend_target_state' is only available when CONFIG_PM_SLEEP
is set/enabled. OTOH, when both SUSPEND and HIBERNATION are not set,
PM_SLEEP is not set, so this variable cannot be used.

../drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c: In function 
‘amdgpu_acpi_is_s0ix_active’:
../drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c:1046:11: error: 
‘pm_suspend_target_state’ undeclared (first use in this function); did you mean 
‘__KSYM_pm_suspend_target_state’?
return pm_suspend_target_state == PM_SUSPEND_TO_IDLE;
   ^~~
   __KSYM_pm_suspend_target_state

Also use shorter IS_ENABLED(CONFIG_foo) notation for checking the
2 config symbols.

Fixes: 91e273712ab8dd ("drm/amdgpu: Check pmops for desired suspend state")
Signed-off-by: Randy Dunlap 
Cc: Alex Deucher 
Cc: Christian König 
Cc: "Pan, Xinhui" 
Cc: amd-gfx@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
Cc: linux-n...@vger.kernel.org
Signed-off-by: Alex Deucher 
Cc: sta...@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
@@ -904,7 +904,7 @@ void amdgpu_acpi_fini(struct amdgpu_devi
  */
 bool amdgpu_acpi_is_s0ix_supported(struct amdgpu_device *adev)
 {
-#if defined(CONFIG_AMD_PMC) || defined(CONFIG_AMD_PMC_MODULE)
+#if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_PM_SLEEP)
if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) {
if (adev->flags & AMD_IS_APU)
return pm_suspend_target_state == PM_SUSPEND_TO_IDLE;
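`IS_ENABLED()` comes from the kernel's include/linux/kconfig.h. A simplified reconstruction of the preprocessor trick, enough to see why it evaluates to 1 only for options defined as 1, can be compiled on its own (the CONFIG_DEMO_* names are made up for the demo):

```c
#include <assert.h>

/* Simplified version of the include/linux/kconfig.h machinery:
 * if CONFIG_x is defined as 1, __ARG_PLACEHOLDER_1 pastes to "0,"
 * and the selected second argument is 1; otherwise the junk token
 * becomes the ignored first argument and the result is 0. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)  /* extra layer so x expands first */
#define IS_ENABLED(option) __is_defined(option)

#define CONFIG_DEMO_ON 1        /* an enabled option */
/* CONFIG_DEMO_OFF is deliberately left undefined */

static int demo_on(void)  { return IS_ENABLED(CONFIG_DEMO_ON); }
static int demo_off(void) { return IS_ENABLED(CONFIG_DEMO_OFF); }
```

The extra `__is_defined` indirection matters: without it, the `##` pasting would see the literal token `CONFIG_DEMO_ON` instead of its expansion `1`.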


Patches currently in stable-queue which might be from rdun...@infradead.org are

queue-5.13/drm-i915-fix-i915_globals_exit-section-mismatch-erro.patch
queue-5.13/drm-amdgpu-fix-checking-pmops-when-pm_sleep-is-not-enabled.patch


[PATCH] drm/amdgpu: Removed unnecessary if statement

2021-08-09 Thread Sergio Miguéns Iglesias
There was an "if" statement that did nothing, so it was removed.

Signed-off-by: Sergio Miguéns Iglesias 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
index 09b048647523..5eb3869d029e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
@@ -273,9 +273,6 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
return 0;
 
 out:
-   if (abo) {
-
-   }
if (fb && ret) {
drm_gem_object_put(gobj);
drm_framebuffer_unregister_private(fb);
-- 
2.32.0



RE: [PATCH 00/13] DC Patches Aug 6, 2021

2021-08-09 Thread Wheeler, Daniel
[Public]

Hi all,
 
This week this patchset was tested on the following systems:
 
HP Envy 360, with Ryzen 5 4500U, with the following display types: eDP 1080p 
60hz, 4k 60hz  (via USB-C to DP/HDMI), 1440p 144hz (via USB-C to DP/HDMI), 
1680*1050 60hz (via USB-C to DP and then DP to DVI/VGA)
 
AMD Ryzen 9 5900H, with the following display types: eDP 1080p 60hz, 4k 60hz  
(via USB-C to DP/HDMI), 1440p 144hz (via USB-C to DP/HDMI), 1680*1050 60hz (via 
USB-C to DP and then DP to DVI/VGA)
 
Sapphire Pulse RX5700XT with the following display types:
4k 60hz  (via DP/HDMI), 1440p 144hz (via DP/HDMI), 1680*1050 60hz (via DP to 
DVI/VGA)
 
Reference AMD RX6800 with the following display types:
4k 60hz  (via DP/HDMI and USB-C to DP/HDMI), 1440p 144hz (via USB-C to DP/HDMI 
and USB-C to DP/HDMI), 1680*1050 60hz (via DP to DVI/VGA)
 
Included testing using a Startech DP 1.4 MST hub at 2x 4k 60hz, and 3x 1080p 
60hz on all systems.
 
 
Tested-by: Daniel Wheeler 
 
 
Thank you,
 
Dan Wheeler
Technologist  |  AMD
SW Display
--
1 Commerce Valley Dr E, Thornhill, ON L3T 7X6
Facebook |  Twitter |  amd.com  

-Original Message-
From: amd-gfx  On Behalf Of Anson Jacob
Sent: August 6, 2021 12:35 PM
To: amd-gfx@lists.freedesktop.org
Cc: Wentland, Harry ; Li, Sun peng (Leo) 
; Lakha, Bhawanpreet ; Siqueira, 
Rodrigo ; Pillai, Aurabindo 
; Zhuo, Qingqing ; 
eryk.b...@amd.com; R, Bindu ; Jacob, Anson 

Subject: [PATCH 00/13] DC Patches Aug 6, 2021

This DC patchset brings improvements in multiple areas. In summary, we 
highlight:
- Fix memory allocation in dm IRQ context to use GFP_ATOMIC
- Increase timeout threshold for DMCUB reset
- Clear GPINT after DMCUB has reset
- Add AUX I2C tracing
- Fix code commenting style
- Some refactoring
- Remove invalid assert for ODM + MPC case

Anson Jacob (1):
  drm/amd/display: use GFP_ATOMIC in amdgpu_dm_irq_schedule_work

Anthony Koo (2):
  drm/amd/display: [FW Promotion] Release 0.0.78
  drm/amd/display: 3.2.148

Ashley Thomas (1):
  drm/amd/display: Add AUX I2C tracing.

Eric Bernstein (1):
  drm/amd/display: Remove invalid assert for ODM + MPC case

Nicholas Kazlauskas (2):
  drm/amd/display: Clear GPINT after DMCUB has reset
  drm/amd/display: Increase timeout threshold for DMCUB reset

Roy Chan (5):
  drm/amd/display: fix missing writeback disablement if plane is removed
  drm/amd/display: refactor the codes to centralize the stream/pipe
checking logic
  drm/amd/display: refactor the cursor programing codes
  drm/amd/display: fix incorrect CM/TF programming sequence in dwb
  drm/amd/display: Correct comment style

Wenjing Liu (1):
  drm/amd/display: add authentication_complete in hdcp output

 .../drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c |   2 +-
 drivers/gpu/drm/amd/display/dc/core/dc.c  |  62 --
 .../gpu/drm/amd/display/dc/core/dc_stream.c   | 106 ++
 drivers/gpu/drm/amd/display/dc/dc.h   |   2 +-
 drivers/gpu/drm/amd/display/dc/dce/dce_aux.c  | 192 +-
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c|  14 +-
 .../drm/amd/display/dc/dcn30/dcn30_dwb_cm.c   |  90 +---
 .../drm/amd/display/dc/dcn30/dcn30_hwseq.c|  12 +-
 .../drm/amd/display/dc/dcn30/dcn30_resource.c |   1 -
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |   6 +-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn31.c |  18 +-
 .../gpu/drm/amd/display/modules/hdcp/hdcp.c   |   5 +-
 .../gpu/drm/amd/display/modules/hdcp/hdcp.h   |   8 +
 .../display/modules/hdcp/hdcp1_transition.c   |   8 +-
 .../display/modules/hdcp/hdcp2_transition.c   |   4 +-
 .../drm/amd/display/modules/hdcp/hdcp_log.c   |  74 +++
 .../drm/amd/display/modules/hdcp/hdcp_log.h   |  72 ---
 .../drm/amd/display/modules/inc/mod_hdcp.h|   1 +
 18 files changed, 479 insertions(+), 198 deletions(-)

-- 
2.25.1


Re: [PATCH v4 13/27] drm/mediatek: Don't set struct drm_device.irq_enabled

2021-08-09 Thread Chun-Kuang Hu
Hi, Thomas:

Thomas Zimmermann wrote on Fri, Jun 25, 2021 at 4:22 PM:
>
> The field drm_device.irq_enabled is only used by legacy drivers
> with userspace modesetting. Don't set it in mediatek.
>

Acked-by: Chun-Kuang Hu 

> Signed-off-by: Thomas Zimmermann 
> Reviewed-by: Laurent Pinchart 
> Acked-by: Daniel Vetter 
> ---
>  drivers/gpu/drm/mediatek/mtk_drm_drv.c | 6 --
>  1 file changed, 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c 
> b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
> index b46bdb8985da..9b60bec33d3b 100644
> --- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
> +++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
> @@ -270,12 +270,6 @@ static int mtk_drm_kms_init(struct drm_device *drm)
> goto err_component_unbind;
> }
>
> -   /*
> -* We don't use the drm_irq_install() helpers provided by the DRM
> -* core, so we need to set this manually in order to allow the
> -* DRM_IOCTL_WAIT_VBLANK to operate correctly.
> -*/
> -   drm->irq_enabled = true;
> ret = drm_vblank_init(drm, MAX_CRTC);
> if (ret < 0)
> goto err_component_unbind;
> --
> 2.32.0
>


RE: [PATCH] drm/amd/amdgpu: skip locking delayed work if not initialized.

2021-08-09 Thread Deng, Emily
[AMD Official Use Only]

Reviewed-by: Emily.Deng 

>-Original Message-
>From: amd-gfx  On Behalf Of
>YuBiao Wang
>Sent: Thursday, August 5, 2021 10:38 AM
>To: amd-gfx@lists.freedesktop.org
>Cc: Grodzovsky, Andrey ; Quan, Evan
>; Chen, Horace ; Tuikov,
>Luben ; Koenig, Christian
>; Deucher, Alexander
>; Xiao, Jack ; Zhang,
>Hawking ; Liu, Monk ; Xu,
>Feifei ; Wang, Kevin(Yang) ;
>Wang, YuBiao 
>Subject: [PATCH] drm/amd/amdgpu: skip locking delayed work if not
>initialized.
>
>When init fails in the early init stage, amdgpu_object has not been initialized,
>and neither have the TTM delayed-work queue functions.
>
>Signed-off-by: YuBiao Wang 
>---
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
>diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>index 9e53ff851496..4c33985542ed 100644
>--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>@@ -3825,7 +3825,8 @@ void amdgpu_device_fini_hw(struct
>amdgpu_device *adev)  {
>   dev_info(adev->dev, "amdgpu: finishing device.\n");
>   flush_delayed_work(&adev->delayed_init_work);
>-  ttm_bo_lock_delayed_workqueue(&adev->mman.bdev);
>+  if (adev->mman.initialized)
>+  ttm_bo_lock_delayed_workqueue(&adev->mman.bdev);
>   adev->shutdown = true;
>
>   /* make sure IB test finished before entering exclusive mode
>--
>2.25.1
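The guard added by the patch is an instance of a common teardown pattern: skip cleanup for a subsystem whose init never ran. A userspace sketch with illustrative names (not the real amdgpu/TTM structures):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for a manager object such as adev->mman. */
struct mem_manager {
    bool initialized;
    int  lock_count;   /* counts delayed-workqueue locks, for the demo */
};

static void mgr_init(struct mem_manager *m)
{
    m->initialized = true;
    m->lock_count = 0;
}

/* Touching the workqueue of an uninitialized manager would crash in the
 * real driver; here we just record that it was (safely) invoked. */
static void mgr_lock_delayed_workqueue(struct mem_manager *m)
{
    m->lock_count++;
}

/* Mirrors the patched fini path: only lock when init actually happened. */
static void device_fini(struct mem_manager *m)
{
    if (m->initialized)
        mgr_lock_delayed_workqueue(m);
}
```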



[PATCH] drm/amdgpu: Add MB_REQ_MSG_READY_TO_RESET response when VF get FLR notification.

2021-08-09 Thread Peng Ju Zhou
From: Jiange Zhao 

When the guest receives an FLR notification from the host, it locks
the adapter into the reset state. There will be no more job
submission or hardware access after that.

It should then send a response to the host saying that it is prepared
for the host reset.

Signed-off-by: Jiange Zhao 
Signed-off-by: Peng Ju Zhou 
---
 drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c | 2 ++
 drivers/gpu/drm/amd/amdgpu/mxgpu_nv.h | 3 ++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c 
b/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
index b48e68f46a5c..a35e6d87e537 100644
--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
+++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
@@ -287,6 +287,8 @@ static void xgpu_nv_mailbox_flr_work(struct work_struct 
*work)
amdgpu_virt_fini_data_exchange(adev);
atomic_set(&adev->in_gpu_reset, 1);
 
+   xgpu_nv_mailbox_trans_msg(adev, IDH_READY_TO_RESET, 0, 0, 0);
+
do {
if (xgpu_nv_mailbox_peek_msg(adev) == IDH_FLR_NOTIFICATION_CMPL)
goto flr_done;
diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.h 
b/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.h
index 9f5808616174..73887b0aa1d6 100644
--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.h
+++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_nv.h
@@ -37,7 +37,8 @@ enum idh_request {
IDH_REQ_GPU_RESET_ACCESS,
IDH_REQ_GPU_INIT_DATA,
 
-   IDH_LOG_VF_ERROR   = 200,
+   IDH_LOG_VF_ERROR= 200,
+   IDH_READY_TO_RESET  = 201,
 };
 
 enum idh_event {
-- 
2.17.1
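The handshake this patch completes, stop submissions, announce readiness, poll for completion, can be modelled in miniature. The request ID values mirror mxgpu_nv.h; the mailbox struct and the single-poll host side are simplifications for the demo:

```c
#include <assert.h>
#include <stdbool.h>

enum idh_request {        /* subset of the IDs from mxgpu_nv.h */
    IDH_LOG_VF_ERROR   = 200,
    IDH_READY_TO_RESET = 201,
};

struct mailbox {
    int  last_msg_to_host;    /* last request the guest sent */
    bool host_done;           /* host has finished the FLR */
};

/* Guest side of the handshake: announce readiness, then wait for the
 * completion message. In the driver this is a bounded polling loop on
 * xgpu_nv_mailbox_peek_msg(); here one poll stands in for the loop. */
static bool guest_handle_flr(struct mailbox *mbox)
{
    mbox->last_msg_to_host = IDH_READY_TO_RESET;
    return mbox->host_done;
}
```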



Re: [PATCHv2 1/2] drm/amd/amdgpu embed hw_fence into amdgpu_job

2021-08-09 Thread Jingwen Chen
On Mon Aug 09, 2021 at 10:18:37AM +0800, Jingwen Chen wrote:
> On Fri Aug 06, 2021 at 11:48:04AM +0200, Christian König wrote:
> > 
> > 
> > Am 06.08.21 um 07:52 schrieb Jingwen Chen:
> > > On Thu Aug 05, 2021 at 05:13:22PM -0400, Andrey Grodzovsky wrote:
> > > > On 2021-08-05 4:31 a.m., Jingwen Chen wrote:
> > > > > From: Jack Zhang 
> > > > > 
> > > > > Why: Previously hw fence is alloced separately with job.
> > > > > It caused historical lifetime issues and corner cases.
> > > > > The ideal situation is to take fence to manage both job
> > > > > and fence's lifetime, and simplify the design of gpu-scheduler.
> > > > > 
> > > > > How:
> > > > > We propose to embed hw_fence into amdgpu_job.
> > > > > 1. We cover the normal job submission by this method.
> > > > > 2. For ib_test, and submit without a parent job keep the
> > > > > legacy way to create a hw fence separately.
> > > > > v2:
> > > > > use AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT to show that the fence is
> > > > > embeded in a job.
> > > > > 
> > > > > Signed-off-by: Jingwen Chen 
> > > > > Signed-off-by: Jack Zhang 
> > > > > ---
> > > > >drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c  |  1 -
> > > > >drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  2 +-
> > > > >drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c   | 63 
> > > > > -
> > > > >drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  |  2 +-
> > > > >drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 35 
> > > > >drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  4 +-
> > > > >drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h|  5 +-
> > > > >drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  |  2 +-
> > > > >8 files changed, 84 insertions(+), 30 deletions(-)
> > > > > 
> > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c 
> > > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
> > > > > index 7b46ba551cb2..3003ee1c9487 100644
> > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
> > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
> > > > > @@ -714,7 +714,6 @@ int amdgpu_amdkfd_submit_ib(struct kgd_dev *kgd, 
> > > > > enum kgd_engine_type engine,
> > > > >   ret = dma_fence_wait(f, false);
> > > > >err_ib_sched:
> > > > > - dma_fence_put(f);
> > > > >   amdgpu_job_free(job);
> > > > >err:
> > > > >   return ret;
> > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
> > > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > > > index 536005bff24a..277128846dd1 100644
> > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > > > > @@ -1414,7 +1414,7 @@ static void 
> > > > > amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
> > > > >   continue;
> > > > >   }
> > > > >   job = to_amdgpu_job(s_job);
> > > > > - if (preempted && job->fence == fence)
> > > > > + if (preempted && (&job->hw_fence) == fence)
> > > > >   /* mark the job as preempted */
> > > > >   job->preemption_status |= AMDGPU_IB_PREEMPTED;
> > > > >   }
> > > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c 
> > > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > > > > index 7495911516c2..5e29d797a265 100644
> > > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > > > > @@ -129,30 +129,46 @@ static u32 amdgpu_fence_read(struct amdgpu_ring 
> > > > > *ring)
> > > > > *
> > > > > * @ring: ring the fence is associated with
> > > > > * @f: resulting fence object
> > > > > * @job: job the fence is embedded in
> > > > > * @flags: flags to pass into the subordinate .emit_fence() call
> > > > > *
> > > > > * Emits a fence command on the requested ring (all asics).
> > > > > * Returns 0 on success, -ENOMEM on failure.
> > > > > */
> > > > > -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> > > > > +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence 
> > > > > **f, struct amdgpu_job *job,
> > > > > unsigned flags)
> > > > >{
> > > > >   struct amdgpu_device *adev = ring->adev;
> > > > > - struct amdgpu_fence *fence;
> > > > > + struct dma_fence *fence;
> > > > > + struct amdgpu_fence *am_fence;
> > > > >   struct dma_fence __rcu **ptr;
> > > > >   uint32_t seq;
> > > > >   int r;
> > > > > - fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_KERNEL);
> > > > > - if (fence == NULL)
> > > > > - return -ENOMEM;
> > > > > + if (job == NULL) {
> > > > > + /* create a separate hw fence */
> > > > > + am_fence = kmem_cache_alloc(amdgpu_fence_slab, 
> > > > > GFP_ATOMIC);
> > > > > + if (am_fence == NULL)
> > > > > + return -ENOMEM;
> > > > > + fence = &am_fence->base;
> > > > > + 
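The embedding the series introduces relies on the classic container_of() pattern: because the fence lives inside the job, a fence pointer can be converted back to its job without any lookup or extra allocation. A minimal userspace sketch with simplified structs (not the real amdgpu ones):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for dma_fence and amdgpu_job. */
struct dma_fence { unsigned long long seqno; };

struct amdgpu_job {
    int              job_run_counter;  /* how many times this job was (re)run */
    struct dma_fence hw_fence;         /* embedded: one allocation, one lifetime */
};

/* container_of: recover the enclosing job from a pointer to its member. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

static struct amdgpu_job *job_from_fence(struct dma_fence *f)
{
    return container_of(f, struct amdgpu_job, hw_fence);
}
```

This is also why the resubmit path only re-inits the seqno when `job_run_counter` is non-zero: the embedded fence object already exists and must not be re-created.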

[PATCH] drm/amd/display: use do-while-0 for DC_TRACE_LEVEL_MESSAGE()

2021-08-09 Thread Randy Dunlap
Building with W=1 complains about an empty 'else' statement, so use the
usual do-nothing-while-0 loop to quieten this warning.

../drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dmub_psr.c:113:53: warning: 
suggest braces around empty body in an 'else' statement [-Wempty-body]
  113 | *state, retry_count);

Fixes: b30eda8d416c ("drm/amd/display: Add ETW log to dmub_psr_get_state")
Signed-off-by: Randy Dunlap 
Cc: Wyatt Wood 
Cc: Alex Deucher 
Cc: Christian König 
Cc: "Pan, Xinhui" 
Cc: Harry Wentland 
Cc: Leo Li 
Cc: amd-gfx@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
Cc: David Airlie 
Cc: Daniel Vetter 
---
 drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- linux-next-20210806.orig/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+++ linux-next-20210806/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
@@ -29,7 +29,7 @@
 #include "dmub/dmub_srv.h"
 #include "core_types.h"
 
-#define DC_TRACE_LEVEL_MESSAGE(...) /* do nothing */
+#define DC_TRACE_LEVEL_MESSAGE(...)    do {} while (0) /* do nothing */
 
 #define MAX_PIPES 6
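The patch works because an empty macro expansion leaves `else DC_TRACE_LEVEL_MESSAGE(...);` with only a stray semicolon, which is what -Wempty-body flags, while `do {} while (0)` is a complete statement. A compilable illustration (macro and function names invented for the demo):

```c
#include <assert.h>

/* With an empty expansion, "else TRACE_NOTHING(...);" would leave the
 * else branch with only a stray ';' -- the -Wempty-body warning the
 * patch silences. Wrapping in do { } while (0) gives the else branch a
 * real (no-op) statement while still requiring a trailing semicolon. */
#define TRACE_NOTHING(...) do {} while (0)

static int hits;

static int classify(int state)
{
    if (state > 0)
        hits++;                            /* real work on one branch */
    else
        TRACE_NOTHING("state=%d", state);  /* expands to a proper no-op */
    return hits;
}
```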
 


[PATCH] drm/amdgpu: fix kernel-doc warnings on non-kernel-doc comments

2021-08-09 Thread Randy Dunlap
Don't use "begin kernel-doc notation" (/**) for comments that are
not kernel-doc. This eliminates warnings reported by the 0day bot.

drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c:89: warning: This comment starts with 
'/**', but isn't a kernel-doc comment. Refer 
Documentation/doc-guide/kernel-doc.rst
* This shader is used to clear VGPRS and LDS, and also write the input
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c:209: warning: This comment starts with 
'/**', but isn't a kernel-doc comment. Refer 
Documentation/doc-guide/kernel-doc.rst
* The below shaders are used to clear SGPRS, and also write the input
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c:301: warning: This comment starts with 
'/**', but isn't a kernel-doc comment. Refer 
Documentation/doc-guide/kernel-doc.rst
* This shader is used to clear the uninitiated sgprs after the above

Fixes: 0e0036c7d13b ("drm/amdgpu: fix no full coverage issue for gprs 
initialization")
Signed-off-by: Randy Dunlap 
Reported-by: kernel test robot 
Cc: Alex Deucher 
Cc: Christian König 
Cc: "Pan, Xinhui" 
Cc: Dennis Li 
Cc: amd-gfx@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c |6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- linux-next-20210806.orig/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c
+++ linux-next-20210806/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c
@@ -85,7 +85,7 @@ static const struct soc15_reg_golden gol
SOC15_REG_GOLDEN_VALUE(GC, 0, regTCI_CNTL_3, 0xff, 0x20),
 };
 
-/**
+/*
  * This shader is used to clear VGPRS and LDS, and also write the input
  * pattern into the write back buffer, which will be used by driver to
  * check whether all SIMDs have been covered.
@@ -206,7 +206,7 @@ const struct soc15_reg_entry vgpr_init_r
{ SOC15_REG_ENTRY(GC, 0, regCOMPUTE_STATIC_THREAD_MGMT_SE7), 0x 
},
 };
 
-/**
+/*
  * The below shaders are used to clear SGPRS, and also write the input
  * pattern into the write back buffer. The first two dispatch should be
  * scheduled simultaneously which make sure that all SGPRS could be
@@ -302,7 +302,7 @@ const struct soc15_reg_entry sgpr96_init
{ SOC15_REG_ENTRY(GC, 0, regCOMPUTE_STATIC_THREAD_MGMT_SE7), 0x 
},
 };
 
-/**
+/*
  * This shader is used to clear the uninitiated sgprs after the above
  * two dispatches, because of hardware feature, dispath 0 couldn't clear
  * top hole sgprs. Therefore need 4 waves per SIMD to cover these sgprs
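The distinction the warning enforces: a comment opening with the double-star form is parsed as kernel-doc and must follow the name/@param/Return layout, while free-form notes like the shader descriptions above use a plain opener. An illustrative pair (the function is invented for the demo):

```c
#include <assert.h>

/*
 * Plain comment: free-form notes such as the shader descriptions in
 * gfx_v9_4_2.c belong in this style, so the kernel-doc tooling skips them.
 */

/**
 * wave_count() - Compute waves needed to cover all SIMDs.
 * @simds: number of SIMDs to cover
 * @waves_per_simd: waves dispatched per SIMD
 *
 * A kernel-doc block opens with the double-star form and documents the
 * function directly below it, in the name/@param/Return layout.
 *
 * Return: total number of waves.
 */
static int wave_count(int simds, int waves_per_simd)
{
    return simds * waves_per_simd;
}
```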


VA-API Regression in Kernel 5.13 for RX 6700 XT

2021-08-09 Thread Wyatt Childers

Hi,

I've encountered a bug as a user of Fedora that's also mirrored by an
Arch Linux bug report: the vast majority of VA-API hardware video
decoders have disappeared for the RX 6700 XT GPU.


It seems this happened in the 5.13.x branch, and that it's still a 
problem in 5.14.x (at least Fedora's packaging of 5.14 RC4).


Please let me know if there's any more information I can provide, or if 
this was the wrong place to reach out.


Thanks,

Wyatt



Re: [PATCH 00/11] Implement generic prot_guest_has() helper function

2021-08-09 Thread Kuppuswamy, Sathyanarayanan

Hi Tom,

On 7/27/21 3:26 PM, Tom Lendacky wrote:

This patch series provides a generic helper function, prot_guest_has(),
to replace the sme_active(), sev_active(), sev_es_active() and
mem_encrypt_active() functions.

It is expected that as new protected virtualization technologies are
added to the kernel, they can all be covered by a single function call
instead of a collection of specific function calls all called from the
same locations.

The powerpc and s390 patches have been compile tested only. Can the
folks copied on this series verify that nothing breaks for them.


With this patch set, selecting ARCH_HAS_PROTECTED_GUEST and setting
CONFIG_AMD_MEM_ENCRYPT=n creates the following error.

ld: arch/x86/mm/ioremap.o: in function `early_memremap_is_setup_data':
arch/x86/mm/ioremap.c:672: undefined reference to `early_memremap_decrypted'

It looks like early_memremap_is_setup_data() is not protected with the
appropriate config guard.




Cc: Andi Kleen 
Cc: Andy Lutomirski 
Cc: Ard Biesheuvel 
Cc: Baoquan He 
Cc: Benjamin Herrenschmidt 
Cc: Borislav Petkov 
Cc: Christian Borntraeger 
Cc: Daniel Vetter 
Cc: Dave Hansen 
Cc: Dave Young 
Cc: David Airlie 
Cc: Heiko Carstens 
Cc: Ingo Molnar 
Cc: Joerg Roedel 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Thomas Zimmermann 
Cc: Vasily Gorbik 
Cc: VMware Graphics 
Cc: Will Deacon 

---

Patches based on:
   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
   commit 79e920060fa7 ("Merge branch 'WIP/fixes'")

Tom Lendacky (11):
   mm: Introduce a function to check for virtualization protection
     features
   x86/sev: Add an x86 version of prot_guest_has()
   powerpc/pseries/svm: Add a powerpc version of prot_guest_has()
   x86/sme: Replace occurrences of sme_active() with prot_guest_has()
   x86/sev: Replace occurrences of sev_active() with prot_guest_has()
   x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()
   treewide: Replace the use of mem_encrypt_active() with
     prot_guest_has()
   mm: Remove the now unused mem_encrypt_active() function
   x86/sev: Remove the now unused mem_encrypt_active() function
   powerpc/pseries/svm: Remove the now unused mem_encrypt_active()
     function
   s390/mm: Remove the now unused mem_encrypt_active() function

  arch/Kconfig   |  3 ++
  arch/powerpc/include/asm/mem_encrypt.h |  5 --
  arch/powerpc/include/asm/protected_guest.h | 30 +++
  arch/powerpc/platforms/pseries/Kconfig |  1 +
  arch/s390/include/asm/mem_encrypt.h|  2 -
  arch/x86/Kconfig   |  1 +
  arch/x86/include/asm/kexec.h   |  2 +-
  arch/x86/include/asm/mem_encrypt.h | 13 +
  arch/x86/include/asm/protected_guest.h | 27 ++
  arch/x86/kernel/crash_dump_64.c|  4 +-
  arch/x86/kernel/head64.c   |  4 +-
  arch/x86/kernel/kvm.c  |  3 +-
  arch/x86/kernel/kvmclock.c |  4 +-
  arch/x86/kernel/machine_kexec_64.c | 19 +++
  arch/x86/kernel/pci-swiotlb.c  |  9 ++--
  arch/x86/kernel/relocate_kernel_64.S   |  2 +-
  arch/x86/kernel/sev.c  |  6 +--
  arch/x86/kvm/svm/svm.c |  3 +-
  arch/x86/mm/ioremap.c  | 16 +++---
  arch/x86/mm/mem_encrypt.c  | 60 +++---
  arch/x86/mm/mem_encrypt_identity.c |  3 +-
  arch/x86/mm/pat/set_memory.c   |  3 +-
  arch/x86/platform/efi/efi_64.c |  9 ++--
  arch/x86/realmode/init.c   |  8 +--
  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|  4 +-
  drivers/gpu/drm/drm_cache.c|  4 +-
  drivers/gpu/drm/vmwgfx/vmwgfx_drv.c|  4 +-
  drivers/gpu/drm/vmwgfx/vmwgfx_msg.c|  6 +--
  drivers/iommu/amd/init.c   |  7 +--
  drivers/iommu/amd/iommu.c  |  3 +-
  drivers/iommu/amd/iommu_v2.c   |  3 +-
  drivers/iommu/iommu.c  |  3 +-
  fs/proc/vmcore.c   |  6 +--
  include/linux/mem_encrypt.h|  4 --
  include/linux/protected_guest.h| 37 +
  kernel/dma/swiotlb.c   |  4 +-
  36 files changed, 218 insertions(+), 104 deletions(-)
  create mode 100644 arch/powerpc/include/asm/protected_guest.h
  create mode 100644 arch/x86/include/asm/protected_guest.h
  create mode 100644 include/linux/protected_guest.h
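The cover letter's idea, one generic predicate replacing sme_active()/sev_active()/sev_es_active()/mem_encrypt_active(), can be sketched as an enum of attributes behind a single entry point. The attribute names and the backing state below are illustrative, not the series' actual definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative protection attributes a caller can query. */
enum prot_guest_attr {
    PATTR_MEM_ENCRYPT,        /* any memory encryption active */
    PATTR_HOST_MEM_ENCRYPT,   /* host-side (SME-like) */
    PATTR_GUEST_MEM_ENCRYPT,  /* guest-side (SEV-like) */
};

/* Pretend per-arch state; a real arch would read CPU/SEV state. */
static bool sme_enabled;
static bool sev_enabled;

/* One entry point replacing the scattered *_active() calls, so new
 * protection technologies only add a case here, not new call sites. */
static bool prot_guest_has(enum prot_guest_attr attr)
{
    switch (attr) {
    case PATTR_HOST_MEM_ENCRYPT:  return sme_enabled;
    case PATTR_GUEST_MEM_ENCRYPT: return sev_enabled;
    case PATTR_MEM_ENCRYPT:       return sme_enabled || sev_enabled;
    }
    return false;
}
```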



--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer


[PATCH] drm/amd/display/dc/dce112/dce112_resource: Drop redundant null-pointer check in bw_calcs_data_update_from_pplib()

2021-08-09 Thread Tuo Li
The variable dc->bw_vbios is guaranteed to be not NULL by its callers.
Thus the null-pointer check can be dropped.

Reported-by: TOTE Robot 
Signed-off-by: Tuo Li 
---
 drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c 
b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
index ee55cda854bf..3fc8e6b664f9 100644
--- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
@@ -1064,7 +1064,7 @@ static void bw_calcs_data_update_from_pplib(struct dc *dc)
struct dm_pp_clock_levels clks = {0};
int memory_type_multiplier = MEMORY_TYPE_MULTIPLIER_CZ;
 
-   if (dc->bw_vbios && dc->bw_vbios->memory_type == bw_def_hbm)
+   if (dc->bw_vbios->memory_type == bw_def_hbm)
memory_type_multiplier = MEMORY_TYPE_HBM;
 
/*do system clock  TODO PPLIB: after PPLIB implement,
-- 
2.25.1
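The rationale for the removal, that callers already guarantee `dc->bw_vbios` is non-NULL, is the usual invariant-versus-check trade-off; an assertion documents the contract without a silent fallback. A userspace sketch with simplified structs (the constant values are illustrative, not the real dc defines):

```c
#include <assert.h>
#include <stddef.h>

#define MEMORY_TYPE_MULTIPLIER_CZ 4   /* illustrative values for the demo */
#define MEMORY_TYPE_HBM           2
#define MEM_DEF_HBM               1
#define MEM_DEF_GDDR5             0

struct bw_vbios { int memory_type; };
struct dc { struct bw_vbios *bw_vbios; };

/* Caller contract: dc->bw_vbios is non-NULL (guaranteed at construction),
 * so the function tests only the memory type, as in the patch. */
static int memory_type_multiplier(const struct dc *dc)
{
    assert(dc->bw_vbios != NULL);   /* document the invariant loudly */
    return dc->bw_vbios->memory_type == MEM_DEF_HBM
        ? MEMORY_TYPE_HBM : MEMORY_TYPE_MULTIPLIER_CZ;
}
```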