Re: AMDGPU error: "[drm:amdgpu_dm_atomic_commit_tail [amdgpu]] *ERROR* Waiting for fences timed out!"

2021-06-30 Thread Alex Deucher
On Wed, Jun 30, 2021 at 8:35 AM Ketsui  wrote:
>
> >I could be wrong. I can't remember which marketing names map to which
> >ASICs. I can tell if you can send me your dmesg output.
> Here it is.

Thanks. It is a Picasso system.

Alex

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v2] drm/amdkfd: Only apply TLB flush optimization on Aldebaran

2021-06-30 Thread Eric Huang
It is based on reverting these two patches:
  drm/amdkfd: Make TLB flush conditional on mapping
  drm/amdgpu: Add table_freed parameter to amdgpu_vm_bo_update

Signed-off-by: Eric Huang 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index a6c01d624246..13fcef6836d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1845,6 +1845,12 @@ int amdgpu_amdkfd_gpuvm_map_memory_to_gpu(
true);
ret = unreserve_bo_and_vms(&ctx, false, false);
 
+   /* Only skip the TLB flush on Aldebaran, to
+    * work around regressions on other ASICs.
+    */
+   if (table_freed && (adev->asic_type != CHIP_ALDEBARAN))
+   *table_freed = true;
+
goto out;
 
 out_unreserve:
-- 
2.25.1



Re: [PATCH v3] drm/amdgpu: Restore msix after FLR

2021-06-30 Thread Alex Deucher
On Wed, Jun 30, 2021 at 10:34 PM Zhou, Peng Ju  wrote:
>
> [AMD Official Use Only]
>
> Hi Alex
> The function amdgpu_restore_msix is used to reset the MSIX during board
> reset (SR-IOV reset or ASIC reset); it moves from host to guest, so I think a
> flag to indicate whether MSIX is enabled is not needed.
>

The function ultimately enables MSIX.  What if it was not enabled in
the first place?

Alex

>
> --
> BW
> Pengju Zhou
>
>
>
> > -Original Message-
> > From: Alex Deucher 
> > Sent: Tuesday, June 29, 2021 10:28 PM
> > To: Zhou, Peng Ju 
> > Cc: amd-gfx list ; Deng, Emily
> > 
> > Subject: Re: [PATCH v3] drm/amdgpu: Restore msix after FLR
> >
> > On Fri, Jun 25, 2021 at 2:44 AM Peng Ju Zhou  wrote:
> > >
> > > From: "Emily.Deng" 
> > >
> > > After FLR, the msix will be cleared, so we need to re-enable it.
> >
> > Do we need to store whether we enabled msix in the first place and then
> > decide whether to enable it again in this case?
> >
> > Alex
> >
> > >
> > > Signed-off-by: Emily.Deng 
> > > Signed-off-by: Peng Ju Zhou 
> > > ---
> > >  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c | 11 +++
> > >  1 file changed, 11 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
> > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
> > > index 90f50561b43a..26e63cb5d8d5 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
> > > @@ -277,6 +277,16 @@ static bool amdgpu_msi_ok(struct amdgpu_device
> > *adev)
> > > return true;
> > >  }
> > >
> > > +void amdgpu_restore_msix(struct amdgpu_device *adev)
> > > +{
> > > +   u16 ctrl;
> > > +
> > > +   pci_read_config_word(adev->pdev, adev->pdev->msix_cap + PCI_MSIX_FLAGS, &ctrl);
> > > +   ctrl &= ~PCI_MSIX_FLAGS_ENABLE;
> > > +   pci_write_config_word(adev->pdev, adev->pdev->msix_cap + PCI_MSIX_FLAGS, ctrl);
> > > +   ctrl |= PCI_MSIX_FLAGS_ENABLE;
> > > +   pci_write_config_word(adev->pdev, adev->pdev->msix_cap + PCI_MSIX_FLAGS, ctrl);
> > > +}
> > >  /**
> > >   * amdgpu_irq_init - initialize interrupt handling
> > >   *
> > > @@ -558,6 +568,7 @@ void amdgpu_irq_gpu_reset_resume_helper(struct
> > > amdgpu_device *adev)  {
> > > int i, j, k;
> > >
> > > +   amdgpu_restore_msix(adev);
> > > for (i = 0; i < AMDGPU_IRQ_CLIENTID_MAX; ++i) {
> > > if (!adev->irq.client[i].sources)
> > > continue;
> > > --
> > > 2.17.1
> > >


Re: [PATCH] drm/amdkfd: Only apply TLB flush optimization on Aldebaran

2021-06-30 Thread Felix Kuehling

On 2021-06-30 6:03 p.m., Eric Huang wrote:

It is based on reverting these two patches:
   drm/amdkfd: Make TLB flush conditional on mapping
   drm/amdgpu: Add table_freed parameter to amdgpu_vm_bo_update

Signed-off-by: Eric Huang 


I'd prefer to put HW-specific workarounds in 
amdgpu_amdkfd_gpuvm_map_memory_to_gpu.


Regards,
  Felix



---
  drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 6 ++
  1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index cfaa5f88e630..f859ee7e8c13 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -1659,6 +1659,12 @@ static int kfd_ioctl_map_memory_to_gpu(struct file 
*filep,
goto sync_memory_failed;
}
  
+	/* Only skip the TLB flush on Aldebaran, to
+	 * work around regressions on other ASICs.
+	 */
+   if (dev->device_info->asic_family != CHIP_ALDEBARAN)
+   table_freed = true;
+
/* Flush TLBs after waiting for the page table updates to complete */
if (table_freed) {
for (i = 0; i < args->n_devices; i++) {



[PATCH] drm/amdkfd: Only apply TLB flush optimization on Aldebaran

2021-06-30 Thread Eric Huang
It is based on reverting these two patches:
  drm/amdkfd: Make TLB flush conditional on mapping
  drm/amdgpu: Add table_freed parameter to amdgpu_vm_bo_update

Signed-off-by: Eric Huang 
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c 
b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index cfaa5f88e630..f859ee7e8c13 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -1659,6 +1659,12 @@ static int kfd_ioctl_map_memory_to_gpu(struct file 
*filep,
goto sync_memory_failed;
}
 
+   /* Only skip the TLB flush on Aldebaran, to
+    * work around regressions on other ASICs.
+    */
+   if (dev->device_info->asic_family != CHIP_ALDEBARAN)
+   table_freed = true;
+
/* Flush TLBs after waiting for the page table updates to complete */
if (table_freed) {
for (i = 0; i < args->n_devices; i++) {
-- 
2.25.1



Re: [PATCH 2/2] drm/amdgpu: separate out vm pasid assignment

2021-06-30 Thread Felix Kuehling

On 2021-06-29 11:19 a.m., Nirmoy Das wrote:

Use the new helper function amdgpu_vm_set_pasid() to
assign the VM's pasid value. This also ensures that we don't free
a pasid from VM code, as pasids are allocated elsewhere.

Signed-off-by: Nirmoy Das 


Christian's comments notwithstanding, the series is

Acked-by: Felix Kuehling 



---
  .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  | 13 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   | 10 +--
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c| 28 +++
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h|  4 +--
  4 files changed, 26 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index f96598279593..3a2ac7f66bbd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1287,11 +1287,22 @@ int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct 
kgd_dev *kgd,
if (avm->process_info)
return -EINVAL;
  
+	/* Free the originally amdgpu-allocated pasid;
+	 * it will be replaced with a kfd-allocated pasid.
+	 */
+   if (avm->pasid) {
+   amdgpu_pasid_free(avm->pasid);
+   amdgpu_vm_set_pasid(adev, avm, 0);
+   }
+
/* Convert VM into a compute VM */
-   ret = amdgpu_vm_make_compute(adev, avm, pasid);
+   ret = amdgpu_vm_make_compute(adev, avm);
if (ret)
return ret;
  
+	ret = amdgpu_vm_set_pasid(adev, avm, pasid);
+	if (ret)
+		return ret;
/* Initialize KFD part of the VM and process info */
ret = init_kfd_vm(avm, process_info, ef);
if (ret)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index cbb932f97355..66bf8b0c56bb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -1178,10 +1178,14 @@ int amdgpu_driver_open_kms(struct drm_device *dev, 
struct drm_file *file_priv)
pasid = 0;
}
  
-	r = amdgpu_vm_init(adev, &fpriv->vm, pasid);
+	r = amdgpu_vm_init(adev, &fpriv->vm);
if (r)
goto error_pasid;
  
+	r = amdgpu_vm_set_pasid(adev, &fpriv->vm, pasid);
+	if (r)
+		goto error_vm;
+
	fpriv->prt_va = amdgpu_vm_bo_add(adev, &fpriv->vm, NULL);
if (!fpriv->prt_va) {
r = -ENOMEM;
@@ -1209,8 +1213,10 @@ int amdgpu_driver_open_kms(struct drm_device *dev, 
struct drm_file *file_priv)
	amdgpu_vm_fini(adev, &fpriv->vm);
  
  error_pasid:

-   if (pasid)
+   if (pasid) {
amdgpu_pasid_free(pasid);
+   amdgpu_vm_set_pasid(adev, &fpriv->vm, 0);
+   }
  
  	kfree(fpriv);
  
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c

index fd92ff27931a..1dccede6387e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2899,14 +2899,13 @@ long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long 
timeout)
   *
   * @adev: amdgpu_device pointer
   * @vm: requested vm
- * @pasid: Process address space identifier
   *
   * Init @vm fields.
   *
   * Returns:
   * 0 for success, error for failure.
   */
-int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm, u32 pasid)
+int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
  {
struct amdgpu_bo *root_bo;
struct amdgpu_bo_vm *root;
@@ -2980,10 +2979,6 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct 
amdgpu_vm *vm, u32 pasid)
  
  	amdgpu_bo_unreserve(vm->root.bo);
  
-	r = amdgpu_vm_set_pasid(adev, vm, pasid);
-	if (r)
-		goto error_free_root;
-
INIT_KFIFO(vm->faults);
  
  	return 0;

@@ -3039,7 +3034,6 @@ static int amdgpu_vm_check_clean_reserved(struct 
amdgpu_device *adev,
   *
   * @adev: amdgpu_device pointer
   * @vm: requested vm
- * @pasid: pasid to use
   *
   * This only works on GFX VMs that don't have any BOs added and no
   * page tables allocated yet.
@@ -3047,7 +3041,6 @@ static int amdgpu_vm_check_clean_reserved(struct 
amdgpu_device *adev,
   * Changes the following VM parameters:
   * - use_cpu_for_update
   * - pte_supports_ats
- * - pasid (old PASID is released, because compute manages its own PASIDs)
   *
   * Reinitializes the page directory to reflect the changed ATS
   * setting.
@@ -3055,8 +3048,7 @@ static int amdgpu_vm_check_clean_reserved(struct 
amdgpu_device *adev,
   * Returns:
   * 0 for success, -errno for errors.
   */
-int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm,
-  u32 pasid)
+int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm)
  {
bool pte_support_ats = (adev->asic_type == CHIP_RAVEN);
int r;
@@ -3070,16 +3062,6 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, 
struct amdgpu_vm *vm,
if (r)
goto 

Re: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay (v4)

2021-06-30 Thread Luben Tuikov
On 2021-06-30 6:10 a.m., YuBiao Wang wrote:
> [Why]
> GPU timing counters are read via KIQ under sriov, which will introduce
> a delay.
>
> [How]
> It could be directly read by MMIO.
>
> v2: Add additional check to prevent carryover issue.
> v3: Only check for carryover for once to prevent performance issue.
> v4: Add comments of the rough frequency where carryover happens.
>
> Signed-off-by: YuBiao Wang 
> Acked-by: Horace Chen 
> ---
>  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 13 ++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> index ff7e9f49040e..9355494002a1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> @@ -7609,7 +7609,7 @@ static int gfx_v10_0_soft_reset(void *handle)
>  
>  static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
>  {
> - uint64_t clock;
> + uint64_t clock, clock_lo, clock_hi, hi_check;
>  
>   amdgpu_gfx_off_ctrl(adev, false);
>   mutex_lock(&adev->gfx.gpu_clock_mutex);
> @@ -7620,8 +7620,15 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct 
> amdgpu_device *adev)
>   ((uint64_t)RREG32_SOC15(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER_Vangogh) << 32ULL);
>   break;
>   default:
> - clock = (uint64_t)RREG32_SOC15(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_LOWER) |
> - ((uint64_t)RREG32_SOC15(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER) << 32ULL);
> + clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER);
> + clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_LOWER);
> + hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER);
> + /* Carryover happens every 4 Giga time cycles counts which is 
> roughly 42 secs */

I'd rather have put the clock frequency here, rather than some interpretation 
thereof.
This would make this maintainable in the future should the clock frequency 
change.
"4 Giga time cycles" isn't a standard expression.

Something like:
"The GFX clock frequency is ..., which sets 32-bit carry over with frequency 42 
seconds."

It'll also allow anyone to check the math.

Regards,
Luben

> + if (hi_check != clock_hi) {
> + clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_LOWER);
> + clock_hi = hi_check;
> + }
> + clock = (uint64_t)clock_lo | ((uint64_t)clock_hi << 32ULL);
>   break;
>   }
>   mutex_unlock(&adev->gfx.gpu_clock_mutex);



Re: [PATCH] drm/amdgpu: fix 64 bit divide in eeprom code

2021-06-30 Thread Luben Tuikov
Reviewed-by: Luben Tuikov 

On 2021-06-30 11:25 a.m., Alex Deucher wrote:
> pos is 64 bits.
>
> Fixes: 5e98eafa90ca19 ("drm/amdgpu: RAS EEPROM table is now in debugfs")
> Cc: luben.tui...@amd.com
> Reported-by: kernel test robot 
> Signed-off-by: Alex Deucher 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c | 14 +-
>  1 file changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
> index 677e379f5fb5..fc70620369e4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
> @@ -880,13 +880,17 @@ static ssize_t amdgpu_ras_debugfs_table_read(struct 
> file *f, char __user *buf,
>   if (*pos < data_len && size > 0) {
>   u8 dare[RAS_TABLE_RECORD_SIZE];
>   u8 data[rec_hdr_fmt_size + 1];
> + struct eeprom_table_record record;
> + int s, r;
> +
>   /* Find the starting record index
>*/
> - int s = (*pos - strlen(tbl_hdr_str) - tbl_hdr_fmt_size -
> -  strlen(rec_hdr_str)) / rec_hdr_fmt_size;
> - int r = (*pos - strlen(tbl_hdr_str) - tbl_hdr_fmt_size -
> -  strlen(rec_hdr_str)) % rec_hdr_fmt_size;
> - struct eeprom_table_record record;
> + s = *pos - strlen(tbl_hdr_str) - tbl_hdr_fmt_size -
> + strlen(rec_hdr_str);
> + s = s / rec_hdr_fmt_size;
> + r = *pos - strlen(tbl_hdr_str) - tbl_hdr_fmt_size -
> + strlen(rec_hdr_str);
> + r = r % rec_hdr_fmt_size;
>  
>   for ( ; size > 0 && s < control->ras_num_recs; s++) {
>   u32 ai = RAS_RI_TO_AI(control, s);



[PATCH] drm/amdgpu: fix 64 bit divide in eeprom code

2021-06-30 Thread Alex Deucher
pos is 64 bits.

Fixes: 5e98eafa90ca19 ("drm/amdgpu: RAS EEPROM table is now in debugfs")
Cc: luben.tui...@amd.com
Reported-by: kernel test robot 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
index 677e379f5fb5..fc70620369e4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
@@ -880,13 +880,17 @@ static ssize_t amdgpu_ras_debugfs_table_read(struct file 
*f, char __user *buf,
if (*pos < data_len && size > 0) {
u8 dare[RAS_TABLE_RECORD_SIZE];
u8 data[rec_hdr_fmt_size + 1];
+   struct eeprom_table_record record;
+   int s, r;
+
/* Find the starting record index
 */
-   int s = (*pos - strlen(tbl_hdr_str) - tbl_hdr_fmt_size -
-strlen(rec_hdr_str)) / rec_hdr_fmt_size;
-   int r = (*pos - strlen(tbl_hdr_str) - tbl_hdr_fmt_size -
-strlen(rec_hdr_str)) % rec_hdr_fmt_size;
-   struct eeprom_table_record record;
+   s = *pos - strlen(tbl_hdr_str) - tbl_hdr_fmt_size -
+   strlen(rec_hdr_str);
+   s = s / rec_hdr_fmt_size;
+   r = *pos - strlen(tbl_hdr_str) - tbl_hdr_fmt_size -
+   strlen(rec_hdr_str);
+   r = r % rec_hdr_fmt_size;
 
for ( ; size > 0 && s < control->ras_num_recs; s++) {
u32 ai = RAS_RI_TO_AI(control, s);
-- 
2.31.1



Re: [PATCH] drm/amd/display: Extend DMUB diagnostic logging to DCN3.1

2021-06-30 Thread Harry Wentland
On 2021-06-30 10:07 a.m., Nicholas Kazlauskas wrote:
> [Why & How]
> Extend existing support for DCN2.1 DMUB diagnostic logging to
> DCN3.1 so we can collect useful information if the DMUB hangs.
> 
> Signed-off-by: Nicholas Kazlauskas 

Reviewed-by: Harry Wentland 

Harry

> ---
>  .../gpu/drm/amd/display/dmub/src/dmub_dcn31.c | 60 +++
>  .../gpu/drm/amd/display/dmub/src/dmub_dcn31.h | 16 -
>  .../gpu/drm/amd/display/dmub/src/dmub_srv.c   |  5 +-
>  3 files changed, 76 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c 
> b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
> index 8c886ece71..973de34641 100644
> --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
> +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
> @@ -352,3 +352,63 @@ uint32_t dmub_dcn31_get_current_time(struct dmub_srv 
> *dmub)
>  {
>   return REG_READ(DMCUB_TIMER_CURRENT);
>  }
> +
> +void dmub_dcn31_get_diagnostic_data(struct dmub_srv *dmub, struct 
> dmub_diagnostic_data *diag_data)
> +{
> + uint32_t is_dmub_enabled, is_soft_reset, is_sec_reset;
> + uint32_t is_traceport_enabled, is_cw0_enabled, is_cw6_enabled;
> +
> + if (!dmub || !diag_data)
> + return;
> +
> + memset(diag_data, 0, sizeof(*diag_data));
> +
> + diag_data->dmcub_version = dmub->fw_version;
> +
> + diag_data->scratch[0] = REG_READ(DMCUB_SCRATCH0);
> + diag_data->scratch[1] = REG_READ(DMCUB_SCRATCH1);
> + diag_data->scratch[2] = REG_READ(DMCUB_SCRATCH2);
> + diag_data->scratch[3] = REG_READ(DMCUB_SCRATCH3);
> + diag_data->scratch[4] = REG_READ(DMCUB_SCRATCH4);
> + diag_data->scratch[5] = REG_READ(DMCUB_SCRATCH5);
> + diag_data->scratch[6] = REG_READ(DMCUB_SCRATCH6);
> + diag_data->scratch[7] = REG_READ(DMCUB_SCRATCH7);
> + diag_data->scratch[8] = REG_READ(DMCUB_SCRATCH8);
> + diag_data->scratch[9] = REG_READ(DMCUB_SCRATCH9);
> + diag_data->scratch[10] = REG_READ(DMCUB_SCRATCH10);
> + diag_data->scratch[11] = REG_READ(DMCUB_SCRATCH11);
> + diag_data->scratch[12] = REG_READ(DMCUB_SCRATCH12);
> + diag_data->scratch[13] = REG_READ(DMCUB_SCRATCH13);
> + diag_data->scratch[14] = REG_READ(DMCUB_SCRATCH14);
> + diag_data->scratch[15] = REG_READ(DMCUB_SCRATCH15);
> +
> + diag_data->undefined_address_fault_addr = 
> REG_READ(DMCUB_UNDEFINED_ADDRESS_FAULT_ADDR);
> + diag_data->inst_fetch_fault_addr = 
> REG_READ(DMCUB_INST_FETCH_FAULT_ADDR);
> + diag_data->data_write_fault_addr = 
> REG_READ(DMCUB_DATA_WRITE_FAULT_ADDR);
> +
> + diag_data->inbox1_rptr = REG_READ(DMCUB_INBOX1_RPTR);
> + diag_data->inbox1_wptr = REG_READ(DMCUB_INBOX1_WPTR);
> + diag_data->inbox1_size = REG_READ(DMCUB_INBOX1_SIZE);
> +
> + diag_data->inbox0_rptr = REG_READ(DMCUB_INBOX0_RPTR);
> + diag_data->inbox0_wptr = REG_READ(DMCUB_INBOX0_WPTR);
> + diag_data->inbox0_size = REG_READ(DMCUB_INBOX0_SIZE);
> +
> + REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_dmub_enabled);
> + diag_data->is_dmcub_enabled = is_dmub_enabled;
> +
> + REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &is_soft_reset);
> + diag_data->is_dmcub_soft_reset = is_soft_reset;
> +
> + REG_GET(DMCUB_SEC_CNTL, DMCUB_SEC_RESET_STATUS, &is_sec_reset);
> + diag_data->is_dmcub_secure_reset = is_sec_reset;
> +
> + REG_GET(DMCUB_CNTL, DMCUB_TRACEPORT_EN, &is_traceport_enabled);
> + diag_data->is_traceport_en  = is_traceport_enabled;
> +
> + REG_GET(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_ENABLE, &is_cw0_enabled);
> + diag_data->is_cw0_enabled = is_cw0_enabled;
> +
> + REG_GET(DMCUB_REGION3_CW6_TOP_ADDRESS, DMCUB_REGION3_CW6_ENABLE, &is_cw6_enabled);
> + diag_data->is_cw6_enabled = is_cw6_enabled;
> +}
> diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h 
> b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h
> index 2829c3e9a3..9456a6a2d5 100644
> --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h
> +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h
> @@ -36,6 +36,9 @@ struct dmub_srv;
>   DMUB_SR(DMCUB_CNTL) \
>   DMUB_SR(DMCUB_CNTL2) \
>   DMUB_SR(DMCUB_SEC_CNTL) \
> + DMUB_SR(DMCUB_INBOX0_SIZE) \
> + DMUB_SR(DMCUB_INBOX0_RPTR) \
> + DMUB_SR(DMCUB_INBOX0_WPTR) \
>   DMUB_SR(DMCUB_INBOX1_BASE_ADDRESS) \
>   DMUB_SR(DMCUB_INBOX1_SIZE) \
>   DMUB_SR(DMCUB_INBOX1_RPTR) \
> @@ -103,11 +106,15 @@ struct dmub_srv;
>   DMUB_SR(DMCUB_SCRATCH14) \
>   DMUB_SR(DMCUB_SCRATCH15) \
>   DMUB_SR(DMCUB_GPINT_DATAIN1) \
> + DMUB_SR(DMCUB_GPINT_DATAOUT) \
>   DMUB_SR(CC_DC_PIPE_DIS) \
>   DMUB_SR(MMHUBBUB_SOFT_RESET) \
>   DMUB_SR(DCN_VM_FB_LOCATION_BASE) \
>   DMUB_SR(DCN_VM_FB_OFFSET) \
> - DMUB_SR(DMCUB_TIMER_CURRENT)
> + DMUB_SR(DMCUB_TIMER_CURRENT) \
> + DMUB_SR(DMCUB_INST_FETCH_FAULT_ADDR) \
> + DMUB_SR(DMCUB_UNDEFINED_ADDRESS_FAULT_ADDR) \
> + DMUB_SR(DMCUB_DATA_WRITE_FAULT_ADDR)
>  
>  #define DMUB_DCN31_FIELDS() \
> 

[PATCH] drm/amd/display: Extend DMUB diagnostic logging to DCN3.1

2021-06-30 Thread Nicholas Kazlauskas
[Why & How]
Extend existing support for DCN2.1 DMUB diagnostic logging to
DCN3.1 so we can collect useful information if the DMUB hangs.

Signed-off-by: Nicholas Kazlauskas 
---
 .../gpu/drm/amd/display/dmub/src/dmub_dcn31.c | 60 +++
 .../gpu/drm/amd/display/dmub/src/dmub_dcn31.h | 16 -
 .../gpu/drm/amd/display/dmub/src/dmub_srv.c   |  5 +-
 3 files changed, 76 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c 
b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
index 8c886ece71..973de34641 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
@@ -352,3 +352,63 @@ uint32_t dmub_dcn31_get_current_time(struct dmub_srv *dmub)
 {
return REG_READ(DMCUB_TIMER_CURRENT);
 }
+
+void dmub_dcn31_get_diagnostic_data(struct dmub_srv *dmub, struct 
dmub_diagnostic_data *diag_data)
+{
+   uint32_t is_dmub_enabled, is_soft_reset, is_sec_reset;
+   uint32_t is_traceport_enabled, is_cw0_enabled, is_cw6_enabled;
+
+   if (!dmub || !diag_data)
+   return;
+
+   memset(diag_data, 0, sizeof(*diag_data));
+
+   diag_data->dmcub_version = dmub->fw_version;
+
+   diag_data->scratch[0] = REG_READ(DMCUB_SCRATCH0);
+   diag_data->scratch[1] = REG_READ(DMCUB_SCRATCH1);
+   diag_data->scratch[2] = REG_READ(DMCUB_SCRATCH2);
+   diag_data->scratch[3] = REG_READ(DMCUB_SCRATCH3);
+   diag_data->scratch[4] = REG_READ(DMCUB_SCRATCH4);
+   diag_data->scratch[5] = REG_READ(DMCUB_SCRATCH5);
+   diag_data->scratch[6] = REG_READ(DMCUB_SCRATCH6);
+   diag_data->scratch[7] = REG_READ(DMCUB_SCRATCH7);
+   diag_data->scratch[8] = REG_READ(DMCUB_SCRATCH8);
+   diag_data->scratch[9] = REG_READ(DMCUB_SCRATCH9);
+   diag_data->scratch[10] = REG_READ(DMCUB_SCRATCH10);
+   diag_data->scratch[11] = REG_READ(DMCUB_SCRATCH11);
+   diag_data->scratch[12] = REG_READ(DMCUB_SCRATCH12);
+   diag_data->scratch[13] = REG_READ(DMCUB_SCRATCH13);
+   diag_data->scratch[14] = REG_READ(DMCUB_SCRATCH14);
+   diag_data->scratch[15] = REG_READ(DMCUB_SCRATCH15);
+
+   diag_data->undefined_address_fault_addr = 
REG_READ(DMCUB_UNDEFINED_ADDRESS_FAULT_ADDR);
+   diag_data->inst_fetch_fault_addr = 
REG_READ(DMCUB_INST_FETCH_FAULT_ADDR);
+   diag_data->data_write_fault_addr = 
REG_READ(DMCUB_DATA_WRITE_FAULT_ADDR);
+
+   diag_data->inbox1_rptr = REG_READ(DMCUB_INBOX1_RPTR);
+   diag_data->inbox1_wptr = REG_READ(DMCUB_INBOX1_WPTR);
+   diag_data->inbox1_size = REG_READ(DMCUB_INBOX1_SIZE);
+
+   diag_data->inbox0_rptr = REG_READ(DMCUB_INBOX0_RPTR);
+   diag_data->inbox0_wptr = REG_READ(DMCUB_INBOX0_WPTR);
+   diag_data->inbox0_size = REG_READ(DMCUB_INBOX0_SIZE);
+
+   REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_dmub_enabled);
+   diag_data->is_dmcub_enabled = is_dmub_enabled;
+
+   REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &is_soft_reset);
+   diag_data->is_dmcub_soft_reset = is_soft_reset;
+
+   REG_GET(DMCUB_SEC_CNTL, DMCUB_SEC_RESET_STATUS, &is_sec_reset);
+   diag_data->is_dmcub_secure_reset = is_sec_reset;
+
+   REG_GET(DMCUB_CNTL, DMCUB_TRACEPORT_EN, &is_traceport_enabled);
+   diag_data->is_traceport_en  = is_traceport_enabled;
+
+   REG_GET(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_ENABLE, &is_cw0_enabled);
+   diag_data->is_cw0_enabled = is_cw0_enabled;
+
+   REG_GET(DMCUB_REGION3_CW6_TOP_ADDRESS, DMCUB_REGION3_CW6_ENABLE, &is_cw6_enabled);
+   diag_data->is_cw6_enabled = is_cw6_enabled;
+}
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h 
b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h
index 2829c3e9a3..9456a6a2d5 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.h
@@ -36,6 +36,9 @@ struct dmub_srv;
DMUB_SR(DMCUB_CNTL) \
DMUB_SR(DMCUB_CNTL2) \
DMUB_SR(DMCUB_SEC_CNTL) \
+   DMUB_SR(DMCUB_INBOX0_SIZE) \
+   DMUB_SR(DMCUB_INBOX0_RPTR) \
+   DMUB_SR(DMCUB_INBOX0_WPTR) \
DMUB_SR(DMCUB_INBOX1_BASE_ADDRESS) \
DMUB_SR(DMCUB_INBOX1_SIZE) \
DMUB_SR(DMCUB_INBOX1_RPTR) \
@@ -103,11 +106,15 @@ struct dmub_srv;
DMUB_SR(DMCUB_SCRATCH14) \
DMUB_SR(DMCUB_SCRATCH15) \
DMUB_SR(DMCUB_GPINT_DATAIN1) \
+   DMUB_SR(DMCUB_GPINT_DATAOUT) \
DMUB_SR(CC_DC_PIPE_DIS) \
DMUB_SR(MMHUBBUB_SOFT_RESET) \
DMUB_SR(DCN_VM_FB_LOCATION_BASE) \
DMUB_SR(DCN_VM_FB_OFFSET) \
-   DMUB_SR(DMCUB_TIMER_CURRENT)
+   DMUB_SR(DMCUB_TIMER_CURRENT) \
+   DMUB_SR(DMCUB_INST_FETCH_FAULT_ADDR) \
+   DMUB_SR(DMCUB_UNDEFINED_ADDRESS_FAULT_ADDR) \
+   DMUB_SR(DMCUB_DATA_WRITE_FAULT_ADDR)
 
 #define DMUB_DCN31_FIELDS() \
DMUB_SF(DMCUB_CNTL, DMCUB_ENABLE) \
@@ -115,6 +122,7 @@ struct dmub_srv;
DMUB_SF(DMCUB_CNTL2, DMCUB_SOFT_RESET) \
DMUB_SF(DMCUB_SEC_CNTL, DMCUB_SEC_RESET) \

Re: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay (v4)

2021-06-30 Thread Christian König

On 2021-06-30 13:17, Liu, Monk wrote:

[AMD Official Use Only]


And a preempt_enable(); here. This way the critical section is also not 
interrupted by a task switch.

Do you mean put a "preempt_disable()" here ?


No? We need to disable preemption before the critical section and enable 
it again when we are done.


Or is the code called under a spinlock or similar? In that case this 
would be superfluous.


Anyway it is rather unlikely that the task is not scheduled again for 
the next 42 seconds.


Christian.



Thanks

--
Monk Liu | Cloud-GPU Core team
--

-Original Message-
From: Christian König 
Sent: Wednesday, June 30, 2021 7:15 PM
To: Wang, YuBiao ; amd-gfx@lists.freedesktop.org
Cc: Grodzovsky, Andrey ; Xiao, Jack ; Xu, Feifei ; Chen, 
Horace ; Wang, Kevin(Yang) ; Tuikov, Luben ; Deucher, Alexander 
; Quan, Evan ; Koenig, Christian ; Liu, Monk 
; Zhang, Hawking 
Subject: Re: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce 
delay (v4)

On 2021-06-30 12:10, YuBiao Wang wrote:

[Why]
GPU timing counters are read via KIQ under sriov, which will introduce
a delay.

[How]
It could be directly read by MMIO.

v2: Add additional check to prevent carryover issue.
v3: Only check for carryover for once to prevent performance issue.
v4: Add comments of the rough frequency where carryover happens.

Signed-off-by: YuBiao Wang 
Acked-by: Horace Chen 
---
   drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 13 ++---
   1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index ff7e9f49040e..9355494002a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -7609,7 +7609,7 @@ static int gfx_v10_0_soft_reset(void *handle)
   
   static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)

   {
-   uint64_t clock;
+   uint64_t clock, clock_lo, clock_hi, hi_check;
   
   	amdgpu_gfx_off_ctrl(adev, false);

mutex_lock(&adev->gfx.gpu_clock_mutex);
@@ -7620,8 +7620,15 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct 
amdgpu_device *adev)
((uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER_Vangogh) << 32ULL);
break;
default:
-   clock = (uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER) |
-   ((uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER) 
<< 32ULL);

If you want to be extra sure you could add a preempt_disable(); here.


+   clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER);
+   clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER);
+   hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER);
+   /* Carryover happens every 4 Giga time cycles counts which is 
roughly 42 secs */
+   if (hi_check != clock_hi) {
+   clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER);
+   clock_hi = hi_check;
+   }

And a preempt_enable(); here. This way the critical section is also not 
interrupted by a task switch.

But either way the patch is Reviewed-by: Christian König 


Regards,
Christian.


+   clock = (uint64_t)clock_lo | ((uint64_t)clock_hi << 32ULL);
break;
}
mutex_unlock(&adev->gfx.gpu_clock_mutex);




RE: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay (v4)

2021-06-30 Thread Liu, Monk
[AMD Official Use Only]

>> And a preempt_enable(); here. This way the critical section is also not 
>> interrupted by a task switch.

Do you mean put a "preempt_disable()" here ? 

Thanks 

--
Monk Liu | Cloud-GPU Core team
--

-Original Message-
From: Christian König  
Sent: Wednesday, June 30, 2021 7:15 PM
To: Wang, YuBiao ; amd-gfx@lists.freedesktop.org
Cc: Grodzovsky, Andrey ; Xiao, Jack 
; Xu, Feifei ; Chen, Horace 
; Wang, Kevin(Yang) ; Tuikov, Luben 
; Deucher, Alexander ; Quan, 
Evan ; Koenig, Christian ; Liu, 
Monk ; Zhang, Hawking 
Subject: Re: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce 
delay (v4)

On 2021-06-30 12:10, YuBiao Wang wrote:
> [Why]
> GPU timing counters are read via KIQ under sriov, which will introduce 
> a delay.
>
> [How]
> It could be directly read by MMIO.
>
> v2: Add additional check to prevent carryover issue.
> v3: Only check for carryover for once to prevent performance issue.
> v4: Add comments of the rough frequency where carryover happens.
>
> Signed-off-by: YuBiao Wang 
> Acked-by: Horace Chen 
> ---
>   drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 13 ++---
>   1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> index ff7e9f49040e..9355494002a1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> @@ -7609,7 +7609,7 @@ static int gfx_v10_0_soft_reset(void *handle)
>   
>   static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
>   {
> - uint64_t clock;
> + uint64_t clock, clock_lo, clock_hi, hi_check;
>   
>   amdgpu_gfx_off_ctrl(adev, false);
>   mutex_lock(>gfx.gpu_clock_mutex);
> @@ -7620,8 +7620,15 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct 
> amdgpu_device *adev)
>   ((uint64_t)RREG32_SOC15(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER_Vangogh) << 32ULL);
>   break;
>   default:
> - clock = (uint64_t)RREG32_SOC15(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_LOWER) |
> - ((uint64_t)RREG32_SOC15(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER) << 32ULL);

If you want to be extra sure you could add a preempt_disable(); here.

> + clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER);
> + clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_LOWER);
> + hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER);
> + /* Carryover happens every 4 Giga time cycle counts, which is 
> roughly 42 secs */
> + if (hi_check != clock_hi) {
> + clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_LOWER);
> + clock_hi = hi_check;
> + }

And a preempt_enable(); here. This way the critical section is also not 
interrupted by a task switch.

But either way the patch is Reviewed-by: Christian König 


Regards,
Christian.

> + clock = (uint64_t)clock_lo | ((uint64_t)clock_hi << 32ULL);
>   break;
>   }
>   mutex_unlock(>gfx.gpu_clock_mutex);


RE: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay (v4)

2021-06-30 Thread Liu, Monk
[AMD Official Use Only]

reviewed-by: Monk Liu 
Thanks 

--
Monk Liu | Cloud-GPU Core team
--

-Original Message-
From: YuBiao Wang  
Sent: Wednesday, June 30, 2021 6:10 PM
To: amd-gfx@lists.freedesktop.org
Cc: Grodzovsky, Andrey ; Quan, Evan 
; Chen, Horace ; Tuikov, Luben 
; Koenig, Christian ; Deucher, 
Alexander ; Xiao, Jack ; Zhang, 
Hawking ; Liu, Monk ; Xu, Feifei 
; Wang, Kevin(Yang) ; Wang, YuBiao 

Subject: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay 
(v4)

[Why]
GPU timing counters are read via KIQ under sriov, which will introduce a delay.

[How]
It could be directly read by MMIO.

v2: Add additional check to prevent carryover issue.
v3: Only check for carryover once to prevent a performance issue.
v4: Add a comment on the rough frequency at which carryover happens.

Signed-off-by: YuBiao Wang 
Acked-by: Horace Chen 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index ff7e9f49040e..9355494002a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -7609,7 +7609,7 @@ static int gfx_v10_0_soft_reset(void *handle)
 
 static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)  {
-   uint64_t clock;
+   uint64_t clock, clock_lo, clock_hi, hi_check;
 
amdgpu_gfx_off_ctrl(adev, false);
mutex_lock(>gfx.gpu_clock_mutex);
@@ -7620,8 +7620,15 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct 
amdgpu_device *adev)
((uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER_Vangogh) << 32ULL);
break;
default:
-   clock = (uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER) |
-   ((uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER) << 32ULL);
+   clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER);
+   clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER);
+   hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER);
+   /* Carryover happens every 4 Giga time cycle counts, which is 
roughly 42 secs */
+   if (hi_check != clock_hi) {
+   clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER);
+   clock_hi = hi_check;
+   }
+   clock = (uint64_t)clock_lo | ((uint64_t)clock_hi << 32ULL);
break;
}
mutex_unlock(>gfx.gpu_clock_mutex);
--
2.25.1


[PATCH v5 15/17] drm/uAPI: Move "Broadcast RGB" property from driver specific to general context

2021-06-30 Thread Werner Sembach
Add "Broadcast RGB" to the general drm context so that more drivers besides
i915 and gma500 can implement it without duplicating code.

Userspace can use this property to tell the graphics driver to use full or
limited color range for a given connector, overriding the default
behaviour/automatic detection.

Possible options are:
- Automatic (default/current behaviour)
- Full
- Limited 16:235

In theory the driver should be able to automatically detect the monitor's
capabilities, but because of flawed standard implementations in monitors,
this might fail. In this case a manual override is required to avoid
washed-out colors or losing detail in very dark or bright scenes.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/drm_atomic_helper.c |  4 +++
 drivers/gpu/drm/drm_atomic_uapi.c   |  4 +++
 drivers/gpu/drm/drm_connector.c | 46 +
 include/drm/drm_connector.h | 16 ++
 4 files changed, 70 insertions(+)

diff --git a/drivers/gpu/drm/drm_atomic_helper.c 
b/drivers/gpu/drm/drm_atomic_helper.c
index 90d62f305257..0c89d32efbd0 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -691,6 +691,10 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
if (old_connector_state->preferred_color_format !=
new_connector_state->preferred_color_format)
new_crtc_state->connectors_changed = true;
+
+   if (old_connector_state->preferred_color_range !=
+   new_connector_state->preferred_color_range)
+   new_crtc_state->connectors_changed = true;
}
 
if (funcs->atomic_check)
diff --git a/drivers/gpu/drm/drm_atomic_uapi.c 
b/drivers/gpu/drm/drm_atomic_uapi.c
index c536f5e22016..c589bb1a8163 100644
--- a/drivers/gpu/drm/drm_atomic_uapi.c
+++ b/drivers/gpu/drm/drm_atomic_uapi.c
@@ -798,6 +798,8 @@ static int drm_atomic_connector_set_property(struct 
drm_connector *connector,
state->max_requested_bpc = val;
} else if (property == connector->preferred_color_format_property) {
state->preferred_color_format = val;
+   } else if (property == connector->preferred_color_range_property) {
+   state->preferred_color_range = val;
} else if (connector->funcs->atomic_set_property) {
return connector->funcs->atomic_set_property(connector,
state, property, val);
@@ -877,6 +879,8 @@ drm_atomic_connector_get_property(struct drm_connector 
*connector,
*val = state->max_requested_bpc;
} else if (property == connector->preferred_color_format_property) {
*val = state->preferred_color_format;
+   } else if (property == connector->preferred_color_range_property) {
+   *val = state->preferred_color_range;
} else if (connector->funcs->atomic_get_property) {
return connector->funcs->atomic_get_property(connector,
state, property, val);
diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index 67dd59b12258..20ae2e6d907b 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -905,6 +905,12 @@ static const struct drm_prop_enum_list 
drm_active_color_format_enum_list[] = {
{ DRM_COLOR_FORMAT_YCRCB420, "ycbcr420" },
 };
 
+static const struct drm_prop_enum_list drm_preferred_color_range_enum_list[] = 
{
+   { DRM_MODE_COLOR_RANGE_UNSET, "Automatic" },
+   { DRM_MODE_COLOR_RANGE_FULL, "Full" },
+   { DRM_MODE_COLOR_RANGE_LIMITED_16_235, "Limited 16:235" },
+};
+
 static const struct drm_prop_enum_list drm_active_color_range_enum_list[] = {
{ DRM_MODE_COLOR_RANGE_UNSET, "Not Applicable" },
{ DRM_MODE_COLOR_RANGE_FULL, "Full" },
@@ -1246,6 +1252,15 @@ static const struct drm_prop_enum_list dp_colorspaces[] 
= {
  * property. Possible values are "not applicable", "rgb", "ycbcr444",
  * "ycbcr422", and "ycbcr420".
  *
+ * Broadcast RGB:
+ * This property is used by userspace to change the used color range. This
+ * property affects the RGB color format as well as the Y'CbCr color
+ * formats, if the driver supports both full and limited Y'CbCr. Drivers
+ * should use drm_connector_attach_preferred_color_range_property() to
+ * create and attach the property to the connector during
+ * initialization. Possible values are "Automatic", "Full", and "Limited
+ * 16:235".
+ *
  * active color range:
  * This read-only property tells userspace the color range actually used by
  * the hardware display engine "on the cable" on a connector. The chosen
@@ -2331,6 +2346,37 @@ void 
drm_connector_set_active_color_format_property(struct drm_connector *connec
 }
 

[PATCH v5 16/17] drm/i915/display: Use the general "Broadcast RGB" implementation

2021-06-30 Thread Werner Sembach
Change from the i915-specific "Broadcast RGB" drm property implementation
to the general one.

This commit deletes all traces of the former "Broadcast RGB" implementation
and adds a new one using the new driver-agnostic functions and variables.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/i915/display/intel_atomic.c   |  8 --
 .../gpu/drm/i915/display/intel_connector.c| 28 ---
 .../gpu/drm/i915/display/intel_connector.h|  1 -
 .../drm/i915/display/intel_display_types.h|  8 --
 drivers/gpu/drm/i915/display/intel_dp.c   |  9 ++
 drivers/gpu/drm/i915/display/intel_dp_mst.c   |  6 +++-
 drivers/gpu/drm/i915/display/intel_hdmi.c |  8 ++
 drivers/gpu/drm/i915/display/intel_sdvo.c |  2 +-
 drivers/gpu/drm/i915/i915_drv.h   |  1 -
 9 files changed, 12 insertions(+), 59 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_atomic.c 
b/drivers/gpu/drm/i915/display/intel_atomic.c
index b4e7ac51aa31..f8d5a0e287b0 100644
--- a/drivers/gpu/drm/i915/display/intel_atomic.c
+++ b/drivers/gpu/drm/i915/display/intel_atomic.c
@@ -63,8 +63,6 @@ int intel_digital_connector_atomic_get_property(struct 
drm_connector *connector,
 
if (property == dev_priv->force_audio_property)
*val = intel_conn_state->force_audio;
-   else if (property == dev_priv->broadcast_rgb_property)
-   *val = intel_conn_state->broadcast_rgb;
else {
drm_dbg_atomic(_priv->drm,
   "Unknown property [PROP:%d:%s]\n",
@@ -99,11 +97,6 @@ int intel_digital_connector_atomic_set_property(struct 
drm_connector *connector,
return 0;
}
 
-   if (property == dev_priv->broadcast_rgb_property) {
-   intel_conn_state->broadcast_rgb = val;
-   return 0;
-   }
-
drm_dbg_atomic(_priv->drm, "Unknown property [PROP:%d:%s]\n",
   property->base.id, property->name);
return -EINVAL;
@@ -134,7 +127,6 @@ int intel_digital_connector_atomic_check(struct 
drm_connector *conn,
 * up in a modeset.
 */
if (new_conn_state->force_audio != old_conn_state->force_audio ||
-   new_conn_state->broadcast_rgb != old_conn_state->broadcast_rgb ||
new_conn_state->base.colorspace != old_conn_state->base.colorspace 
||
new_conn_state->base.picture_aspect_ratio != 
old_conn_state->base.picture_aspect_ratio ||
new_conn_state->base.content_type != 
old_conn_state->base.content_type ||
diff --git a/drivers/gpu/drm/i915/display/intel_connector.c 
b/drivers/gpu/drm/i915/display/intel_connector.c
index 9bed1ccecea0..89f0edf19182 100644
--- a/drivers/gpu/drm/i915/display/intel_connector.c
+++ b/drivers/gpu/drm/i915/display/intel_connector.c
@@ -241,34 +241,6 @@ intel_attach_force_audio_property(struct drm_connector 
*connector)
drm_object_attach_property(>base, prop, 0);
 }
 
-static const struct drm_prop_enum_list broadcast_rgb_names[] = {
-   { INTEL_BROADCAST_RGB_AUTO, "Automatic" },
-   { INTEL_BROADCAST_RGB_FULL, "Full" },
-   { INTEL_BROADCAST_RGB_LIMITED, "Limited 16:235" },
-};
-
-void
-intel_attach_broadcast_rgb_property(struct drm_connector *connector)
-{
-   struct drm_device *dev = connector->dev;
-   struct drm_i915_private *dev_priv = to_i915(dev);
-   struct drm_property *prop;
-
-   prop = dev_priv->broadcast_rgb_property;
-   if (prop == NULL) {
-   prop = drm_property_create_enum(dev, DRM_MODE_PROP_ENUM,
-  "Broadcast RGB",
-  broadcast_rgb_names,
-  ARRAY_SIZE(broadcast_rgb_names));
-   if (prop == NULL)
-   return;
-
-   dev_priv->broadcast_rgb_property = prop;
-   }
-
-   drm_object_attach_property(>base, prop, 0);
-}
-
 void
 intel_attach_aspect_ratio_property(struct drm_connector *connector)
 {
diff --git a/drivers/gpu/drm/i915/display/intel_connector.h 
b/drivers/gpu/drm/i915/display/intel_connector.h
index 661a37a3c6d8..f3058a035476 100644
--- a/drivers/gpu/drm/i915/display/intel_connector.h
+++ b/drivers/gpu/drm/i915/display/intel_connector.h
@@ -28,7 +28,6 @@ int intel_connector_update_modes(struct drm_connector 
*connector,
 struct edid *edid);
 int intel_ddc_get_modes(struct drm_connector *c, struct i2c_adapter *adapter);
 void intel_attach_force_audio_property(struct drm_connector *connector);
-void intel_attach_broadcast_rgb_property(struct drm_connector *connector);
 void intel_attach_aspect_ratio_property(struct drm_connector *connector);
 void intel_attach_hdmi_colorspace_property(struct drm_connector *connector);
 void intel_attach_dp_colorspace_property(struct drm_connector *connector);
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h 

[PATCH v5 17/17] drm/amd/display: Add handling for new "Broadcast RGB" property

2021-06-30 Thread Werner Sembach
This commit implements the "Broadcast RGB" drm property for the AMD GPU
driver.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 14 +++---
 .../amd/display/amdgpu_dm/amdgpu_dm_mst_types.c|  4 
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 02a5809d4993..80d5a11fb0c5 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5247,7 +5247,8 @@ get_aspect_ratio(const struct drm_display_mode *mode_in)
 }
 
 static enum dc_color_space
-get_output_color_space(const struct dc_crtc_timing *dc_crtc_timing)
+get_output_color_space(const struct dc_crtc_timing *dc_crtc_timing,
+  enum drm_mode_color_range preferred_color_range)
 {
enum dc_color_space color_space = COLOR_SPACE_SRGB;
 
@@ -5278,7 +5279,10 @@ get_output_color_space(const struct dc_crtc_timing 
*dc_crtc_timing)
}
break;
case PIXEL_ENCODING_RGB:
-   color_space = COLOR_SPACE_SRGB;
+   if (preferred_color_range == 
DRM_MODE_COLOR_RANGE_LIMITED_16_235)
+   color_space = COLOR_SPACE_SRGB_LIMITED;
+   else
+   color_space = COLOR_SPACE_SRGB;
break;
 
default:
@@ -5424,7 +5428,10 @@ static void fill_stream_properties_from_drm_display_mode(
 
timing_out->aspect_ratio = get_aspect_ratio(mode_in);
 
-   stream->output_color_space = get_output_color_space(timing_out);
+   stream->output_color_space = get_output_color_space(timing_out,
+   connector_state ?
+   
connector_state->preferred_color_range :
+   
DRM_MODE_COLOR_RANGE_UNSET);
 
stream->out_transfer_func->type = TF_TYPE_PREDEFINED;
stream->out_transfer_func->tf = TRANSFER_FUNCTION_SRGB;
@@ -7775,6 +7782,7 @@ void amdgpu_dm_connector_init_helper(struct 
amdgpu_display_manager *dm,
drm_connector_attach_active_bpc_property(>base, 8, 
16);

drm_connector_attach_preferred_color_format_property(>base);

drm_connector_attach_active_color_format_property(>base);
+   
drm_connector_attach_preferred_color_range_property(>base);

drm_connector_attach_active_color_range_property(>base);
}
 
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index 2563788ba95a..80e1389fd0ec 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -421,6 +421,10 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr 
*mgr,
if (connector->active_color_format_property)

drm_connector_attach_active_color_format_property(>base);
 
+   connector->preferred_color_range_property = 
master->base.preferred_color_range_property;
+   if (connector->preferred_color_range_property)
+   
drm_connector_attach_preferred_color_range_property(>base);
+
connector->active_color_range_property = 
master->base.active_color_range_property;
if (connector->active_color_range_property)

drm_connector_attach_active_color_range_property(>base);
-- 
2.25.1



[PATCH v5 07/17] drm/amd/display: Add handling for new "active color format" property

2021-06-30 Thread Werner Sembach
This commit implements the "active color format" drm property for the AMD
GPU driver.

Signed-off-by: Werner Sembach 
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 29 +--
 .../display/amdgpu_dm/amdgpu_dm_mst_types.c   |  4 +++
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index d984de82ae63..098f3d53e681 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -6710,6 +6710,24 @@ static int convert_dc_color_depth_into_bpc (enum 
dc_color_depth display_color_de
return 0;
 }
 
+static int convert_dc_pixel_encoding_into_drm_color_format(
+   enum dc_pixel_encoding display_pixel_encoding)
+{
+   switch (display_pixel_encoding) {
+   case PIXEL_ENCODING_RGB:
+   return DRM_COLOR_FORMAT_RGB444;
+   case PIXEL_ENCODING_YCBCR422:
+   return DRM_COLOR_FORMAT_YCRCB422;
+   case PIXEL_ENCODING_YCBCR444:
+   return DRM_COLOR_FORMAT_YCRCB444;
+   case PIXEL_ENCODING_YCBCR420:
+   return DRM_COLOR_FORMAT_YCRCB420;
+   default:
+   break;
+   }
+   return 0;
+}
+
 static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
  struct drm_crtc_state *crtc_state,
  struct drm_connector_state 
*conn_state)
@@ -7711,6 +7729,7 @@ void amdgpu_dm_connector_init_helper(struct 
amdgpu_display_manager *dm,
if (!aconnector->mst_port) {
drm_connector_attach_max_bpc_property(>base, 8, 16);
drm_connector_attach_active_bpc_property(>base, 8, 
16);
+   
drm_connector_attach_active_color_format_property(>base);
}
 
/* This defaults to the max in the range, but we want 8bpc for non-edp. 
*/
@@ -9090,14 +9109,20 @@ static void amdgpu_dm_atomic_commit_tail(struct 
drm_atomic_state *state)
dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
stream = dm_new_crtc_state->stream;
 
-   if (stream)
+   if (stream) {
drm_connector_set_active_bpc_property(connector,
stream->timing.flags.DSC ?

stream->timing.dsc_cfg.bits_per_pixel / 16 / 3 :
convert_dc_color_depth_into_bpc(

stream->timing.display_color_depth));
-   } else
+   
drm_connector_set_active_color_format_property(connector,
+   
convert_dc_pixel_encoding_into_drm_color_format(
+   
dm_new_crtc_state->stream->timing.pixel_encoding));
+   }
+   } else {
drm_connector_set_active_bpc_property(connector, 0);
+   
drm_connector_set_active_color_format_property(connector, 0);
+   }
}
 
/* Count number of newly disabled CRTCs for dropping PM refs later. */
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index 0cf38743ec47..13151d13aa73 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -413,6 +413,10 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr 
*mgr,
if (connector->active_bpc_property)
drm_connector_attach_active_bpc_property(>base, 8, 
16);
 
+   connector->active_color_format_property = 
master->base.active_color_format_property;
+   if (connector->active_color_format_property)
+   
drm_connector_attach_active_color_format_property(>base);
+
connector->vrr_capable_property = master->base.vrr_capable_property;
if (connector->vrr_capable_property)
drm_connector_attach_vrr_capable_property(connector);
-- 
2.25.1



[PATCH v5 10/17] drm/amd/display: Add handling for new "active color range" property

2021-06-30 Thread Werner Sembach
This commit implements the "active color range" drm property for the AMD
GPU driver.

Signed-off-by: Werner Sembach 
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 33 +++
 .../display/amdgpu_dm/amdgpu_dm_mst_types.c   |  4 +++
 2 files changed, 37 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 098f3d53e681..b4acedac1ac9 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -6728,6 +6728,33 @@ static int 
convert_dc_pixel_encoding_into_drm_color_format(
return 0;
 }
 
+static int convert_dc_color_space_into_drm_mode_color_range(enum 
dc_color_space color_space)
+{
+   if (color_space == COLOR_SPACE_SRGB ||
+   color_space == COLOR_SPACE_XR_RGB ||
+   color_space == COLOR_SPACE_MSREF_SCRGB ||
+   color_space == COLOR_SPACE_2020_RGB_FULLRANGE ||
+   color_space == COLOR_SPACE_ADOBERGB ||
+   color_space == COLOR_SPACE_DCIP3 ||
+   color_space == COLOR_SPACE_DOLBYVISION ||
+   color_space == COLOR_SPACE_YCBCR601 ||
+   color_space == COLOR_SPACE_XV_YCC_601 ||
+   color_space == COLOR_SPACE_YCBCR709 ||
+   color_space == COLOR_SPACE_XV_YCC_709 ||
+   color_space == COLOR_SPACE_2020_YCBCR ||
+   color_space == COLOR_SPACE_YCBCR709_BLACK ||
+   color_space == COLOR_SPACE_DISPLAYNATIVE ||
+   color_space == COLOR_SPACE_APPCTRL ||
+   color_space == COLOR_SPACE_CUSTOMPOINTS)
+   return DRM_MODE_COLOR_RANGE_FULL;
+   if (color_space == COLOR_SPACE_SRGB_LIMITED ||
+   color_space == COLOR_SPACE_2020_RGB_LIMITEDRANGE ||
+   color_space == COLOR_SPACE_YCBCR601_LIMITED ||
+   color_space == COLOR_SPACE_YCBCR709_LIMITED)
+   return DRM_MODE_COLOR_RANGE_LIMITED_16_235;
+   return DRM_MODE_COLOR_RANGE_UNSET;
+}
+
 static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
  struct drm_crtc_state *crtc_state,
  struct drm_connector_state 
*conn_state)
@@ -7730,6 +7757,7 @@ void amdgpu_dm_connector_init_helper(struct 
amdgpu_display_manager *dm,
drm_connector_attach_max_bpc_property(>base, 8, 16);
drm_connector_attach_active_bpc_property(>base, 8, 
16);

drm_connector_attach_active_color_format_property(>base);
+   
drm_connector_attach_active_color_range_property(>base);
}
 
/* This defaults to the max in the range, but we want 8bpc for non-edp. 
*/
@@ -9118,10 +9146,15 @@ static void amdgpu_dm_atomic_commit_tail(struct 
drm_atomic_state *state)

drm_connector_set_active_color_format_property(connector,

convert_dc_pixel_encoding_into_drm_color_format(

dm_new_crtc_state->stream->timing.pixel_encoding));
+   
drm_connector_set_active_color_range_property(connector,
+   
convert_dc_color_space_into_drm_mode_color_range(
+   
dm_new_crtc_state->stream->output_color_space));
}
} else {
drm_connector_set_active_bpc_property(connector, 0);

drm_connector_set_active_color_format_property(connector, 0);
+   drm_connector_set_active_color_range_property(connector,
+ 
DRM_MODE_COLOR_RANGE_UNSET);
}
}
 
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index 13151d13aa73..b5d57bbbdd20 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -417,6 +417,10 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr 
*mgr,
if (connector->active_color_format_property)

drm_connector_attach_active_color_format_property(>base);
 
+   connector->active_color_range_property = 
master->base.active_color_range_property;
+   if (connector->active_color_range_property)
+   
drm_connector_attach_active_color_range_property(>base);
+
connector->vrr_capable_property = master->base.vrr_capable_property;
if (connector->vrr_capable_property)
drm_connector_attach_vrr_capable_property(connector);
-- 
2.25.1



[PATCH v5 14/17] drm/i915/display: Add handling for new "preferred color format" property

2021-06-30 Thread Werner Sembach
This commit implements the "preferred color format" drm property for the
Intel GPU driver.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/i915/display/intel_dp.c | 7 ++-
 drivers/gpu/drm/i915/display/intel_dp_mst.c | 5 +
 drivers/gpu/drm/i915/display/intel_hdmi.c   | 6 ++
 3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c 
b/drivers/gpu/drm/i915/display/intel_dp.c
index fd33f753244d..29bb181ec4be 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -616,9 +616,12 @@ intel_dp_output_format(struct drm_connector *connector,
 {
struct intel_dp *intel_dp = 
intel_attached_dp(to_intel_connector(connector));
const struct drm_display_info *info = >display_info;
+   const struct drm_connector_state *connector_state = connector->state;
 
if (!connector->ycbcr_420_allowed ||
-   !drm_mode_is_420_only(info, mode))
+   !(drm_mode_is_420_only(info, mode) ||
+   (drm_mode_is_420_also(info, mode) && connector_state &&
+   connector_state->preferred_color_format == 
DRM_COLOR_FORMAT_YCRCB420)))
return INTEL_OUTPUT_FORMAT_RGB;
 
if (intel_dp->dfp.rgb_to_ycbcr &&
@@ -4692,10 +4695,12 @@ intel_dp_add_properties(struct intel_dp *intel_dp, 
struct drm_connector *connect
if (HAS_GMCH(dev_priv)) {
drm_connector_attach_max_bpc_property(connector, 6, 10);
drm_connector_attach_active_bpc_property(connector, 6, 10);
+   drm_connector_attach_preferred_color_format_property(connector);
drm_connector_attach_active_color_format_property(connector);
} else if (DISPLAY_VER(dev_priv) >= 5) {
drm_connector_attach_max_bpc_property(connector, 6, 12);
drm_connector_attach_active_bpc_property(connector, 6, 12);
+   drm_connector_attach_preferred_color_format_property(connector);
drm_connector_attach_active_color_format_property(connector);
}
 
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c 
b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index cb876175258f..67f0fb649876 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -856,6 +856,11 @@ static struct drm_connector 
*intel_dp_add_mst_connector(struct drm_dp_mst_topolo
if (connector->active_bpc_property)
drm_connector_attach_active_bpc_property(connector, 6, 12);
 
+   connector->preferred_color_format_property =
+   
intel_dp->attached_connector->base.preferred_color_format_property;
+   if (connector->preferred_color_format_property)
+   drm_connector_attach_preferred_color_format_property(connector);
+
connector->active_color_format_property =
intel_dp->attached_connector->base.active_color_format_property;
if (connector->active_color_format_property)
diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c 
b/drivers/gpu/drm/i915/display/intel_hdmi.c
index 3ee25e0cc3b9..a7b85cd13227 100644
--- a/drivers/gpu/drm/i915/display/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
@@ -2153,6 +2153,11 @@ static int intel_hdmi_compute_output_format(struct 
intel_encoder *encoder,
crtc_state->output_format = INTEL_OUTPUT_FORMAT_RGB;
}
 
+   if (connector->ycbcr_420_allowed &&
+   conn_state->preferred_color_format == DRM_COLOR_FORMAT_YCRCB420 &&
+   drm_mode_is_420_also(>display_info, adjusted_mode))
+   crtc_state->output_format = INTEL_OUTPUT_FORMAT_YCBCR420;
+
ret = intel_hdmi_compute_clock(encoder, crtc_state);
if (ret) {
if (crtc_state->output_format != INTEL_OUTPUT_FORMAT_YCBCR420 &&
@@ -2517,6 +2522,7 @@ intel_hdmi_add_properties(struct intel_hdmi *intel_hdmi, 
struct drm_connector *c
if (!HAS_GMCH(dev_priv)) {
drm_connector_attach_max_bpc_property(connector, 8, 12);
drm_connector_attach_active_bpc_property(connector, 8, 12);
+   drm_connector_attach_preferred_color_format_property(connector);
drm_connector_attach_active_color_format_property(connector);
}
 }
-- 
2.25.1



[PATCH v5 12/17] drm/uAPI: Add "preferred color format" drm property as setting for userspace

2021-06-30 Thread Werner Sembach
Add a new general drm property "preferred color format" which can be used
by userspace to tell the graphics driver which color format to use.

Possible options are:
- auto (default/current behaviour)
- rgb
- ycbcr444
- ycbcr422 (not supported by both amdgpu and i915)
- ycbcr420

In theory the auto option should choose the best available option for the
current setup, but because of bad internal conversion some monitors look
better with rgb and some with ycbcr444.

Also, because of badly shielded connectors and/or cables, it might be
preferable to use the less bandwidth-heavy ycbcr422 and ycbcr420 formats
for a signal that is less susceptible to interference.

In the future, automatic color calibration for screens might also depend on
this option being available.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/drm_atomic_helper.c |  4 +++
 drivers/gpu/drm/drm_atomic_uapi.c   |  4 +++
 drivers/gpu/drm/drm_connector.c | 50 -
 include/drm/drm_connector.h | 17 ++
 4 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_atomic_helper.c 
b/drivers/gpu/drm/drm_atomic_helper.c
index bc3487964fb5..90d62f305257 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -687,6 +687,10 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
if (old_connector_state->max_requested_bpc !=
new_connector_state->max_requested_bpc)
new_crtc_state->connectors_changed = true;
+
+   if (old_connector_state->preferred_color_format !=
+   new_connector_state->preferred_color_format)
+   new_crtc_state->connectors_changed = true;
}
 
if (funcs->atomic_check)
diff --git a/drivers/gpu/drm/drm_atomic_uapi.c 
b/drivers/gpu/drm/drm_atomic_uapi.c
index 438e9585b225..c536f5e22016 100644
--- a/drivers/gpu/drm/drm_atomic_uapi.c
+++ b/drivers/gpu/drm/drm_atomic_uapi.c
@@ -796,6 +796,8 @@ static int drm_atomic_connector_set_property(struct 
drm_connector *connector,
   fence_ptr);
} else if (property == connector->max_bpc_property) {
state->max_requested_bpc = val;
+   } else if (property == connector->preferred_color_format_property) {
+   state->preferred_color_format = val;
} else if (connector->funcs->atomic_set_property) {
return connector->funcs->atomic_set_property(connector,
state, property, val);
@@ -873,6 +875,8 @@ drm_atomic_connector_get_property(struct drm_connector 
*connector,
*val = 0;
} else if (property == connector->max_bpc_property) {
*val = state->max_requested_bpc;
+   } else if (property == connector->preferred_color_format_property) {
+   *val = state->preferred_color_format;
} else if (connector->funcs->atomic_get_property) {
return connector->funcs->atomic_get_property(connector,
state, property, val);
diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index ebfdd17a7f59..67dd59b12258 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -889,6 +889,14 @@ static const struct drm_prop_enum_list 
drm_dp_subconnector_enum_list[] = {
{ DRM_MODE_SUBCONNECTOR_Native,  "Native"}, /* DP */
 };
 
+static const struct drm_prop_enum_list drm_preferred_color_format_enum_list[] 
= {
+   { 0, "auto" },
+   { DRM_COLOR_FORMAT_RGB444, "rgb" },
+   { DRM_COLOR_FORMAT_YCRCB444, "ycbcr444" },
+   { DRM_COLOR_FORMAT_YCRCB422, "ycbcr422" },
+   { DRM_COLOR_FORMAT_YCRCB420, "ycbcr420" },
+};
+
 static const struct drm_prop_enum_list drm_active_color_format_enum_list[] = {
{ 0, "not applicable" },
{ DRM_COLOR_FORMAT_RGB444, "rgb" },
@@ -1220,11 +1228,20 @@ static const struct drm_prop_enum_list dp_colorspaces[] 
= {
  * Drivers shall use drm_connector_attach_active_bpc_property() to install
  * this property. A value of 0 means "not applicable".
  *
+ * preferred color format:
+ * This property is used by userspace to change the used color format. When
+ * used the driver will use the selected format if valid for the hardware,
+ * sink, and current resolution and refresh rate combination. Drivers to
+ * use the function drm_connector_attach_preferred_color_format_property()
+ * to create and attach the property to the connector during
+ * initialization. Possible values are "auto", "rgb", "ycbcr444",
+ * "ycbcr422", and "ycbcr420".
+ *
  * active color format:
  * This read-only property tells userspace the color format actually used
  * by the hardware display engine "on the cable" on a connector. The chosen
  *

[PATCH v5 13/17] drm/amd/display: Add handling for new "preferred color format" property

2021-06-30 Thread Werner Sembach
This commit implements the "preferred color format" drm property for the
AMD GPU driver.

Signed-off-by: Werner Sembach 
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 30 +++
 .../display/amdgpu_dm/amdgpu_dm_mst_types.c   |  4 +++
 2 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index b4acedac1ac9..02a5809d4993 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5346,15 +5346,32 @@ static void 
fill_stream_properties_from_drm_display_mode(
timing_out->h_border_right = 0;
timing_out->v_border_top = 0;
timing_out->v_border_bottom = 0;
-   /* TODO: un-hardcode */
-   if (drm_mode_is_420_only(info, mode_in)
-   || (drm_mode_is_420_also(info, mode_in) && 
aconnector->force_yuv420_output))
+
+   if (connector_state
+   && (connector_state->preferred_color_format == 
DRM_COLOR_FORMAT_YCRCB420
+   || aconnector->force_yuv420_output) && 
drm_mode_is_420(info, mode_in))
timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420;
-   else if ((connector->display_info.color_formats & 
DRM_COLOR_FORMAT_YCRCB444)
-   && stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
+   else if (connector_state
+   && connector_state->preferred_color_format == 
DRM_COLOR_FORMAT_YCRCB444
+   && connector->display_info.color_formats & 
DRM_COLOR_FORMAT_YCRCB444)
timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
-   else
+   else if (connector_state
+   && connector_state->preferred_color_format == 
DRM_COLOR_FORMAT_RGB444
+   && !drm_mode_is_420_only(info, mode_in))
timing_out->pixel_encoding = PIXEL_ENCODING_RGB;
+   else
+   /*
+* connector_state->preferred_color_format not possible
+* || connector_state->preferred_color_format == 0 (auto)
+* || connector_state->preferred_color_format == 
DRM_COLOR_FORMAT_YCRCB422
+*/
+   if (drm_mode_is_420_only(info, mode_in))
+   timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420;
+   else if ((connector->display_info.color_formats & 
DRM_COLOR_FORMAT_YCRCB444)
+   && stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
+   timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444;
+   else
+   timing_out->pixel_encoding = PIXEL_ENCODING_RGB;
 
timing_out->timing_3d_format = TIMING_3D_FORMAT_NONE;
timing_out->display_color_depth = convert_color_depth_from_display_info(
@@ -7756,6 +7773,7 @@ void amdgpu_dm_connector_init_helper(struct 
amdgpu_display_manager *dm,
if (!aconnector->mst_port) {
drm_connector_attach_max_bpc_property(&aconnector->base, 8, 16);
drm_connector_attach_active_bpc_property(&aconnector->base, 8, 16);
+   drm_connector_attach_preferred_color_format_property(&aconnector->base);
drm_connector_attach_active_color_format_property(&aconnector->base);
drm_connector_attach_active_color_range_property(&aconnector->base);
}
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index b5d57bbbdd20..2563788ba95a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -413,6 +413,10 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr 
*mgr,
if (connector->active_bpc_property)
drm_connector_attach_active_bpc_property(&aconnector->base, 8, 16);
 
+   connector->preferred_color_format_property = master->base.preferred_color_format_property;
+   if (connector->preferred_color_format_property)
+   drm_connector_attach_preferred_color_format_property(&aconnector->base);
+
connector->active_color_format_property = master->base.active_color_format_property;
if (connector->active_color_format_property)
drm_connector_attach_active_color_format_property(&aconnector->base);
-- 
2.25.1
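The selection precedence implemented above in fill_stream_properties_from_drm_display_mode() can be modelled as a standalone function. This is only a sketch: the enums, the helper name pick_pixel_encoding, and the boolean capability flags are illustrative stand-ins rather than driver code, and the force_yuv420_output debug path is folded into the 4:2:0 preference.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins; not the kernel's enums. */
enum pref { PREF_AUTO, PREF_RGB, PREF_YCBCR444, PREF_YCBCR420 };
enum encoding { ENC_RGB, ENC_YCBCR444, ENC_YCBCR420 };

/*
 * Mirrors the precedence sketched in the patch: honour the user's
 * preference when the mode/sink allows it, otherwise fall back to the
 * driver defaults (4:2:0-only modes first, then YCbCr 4:4:4 on HDMI).
 */
static enum encoding pick_pixel_encoding(enum pref preferred,
                                         bool mode_is_420_only,
                                         bool mode_is_420,
                                         bool sink_has_ycbcr444,
                                         bool signal_is_hdmi)
{
    if (preferred == PREF_YCBCR420 && mode_is_420)
        return ENC_YCBCR420;
    if (preferred == PREF_YCBCR444 && sink_has_ycbcr444)
        return ENC_YCBCR444;
    if (preferred == PREF_RGB && !mode_is_420_only)
        return ENC_RGB;
    /* auto, ycbcr422, or preference not possible: default behaviour */
    if (mode_is_420_only)
        return ENC_YCBCR420;
    if (sink_has_ycbcr444 && signal_is_hdmi)
        return ENC_YCBCR444;
    return ENC_RGB;
}
```

Note how an impossible preference (e.g. RGB on a 4:2:0-only mode) silently falls through to the automatic path instead of failing the commit.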

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v5 06/17] drm/uAPI: Add "active color format" drm property as feedback for userspace

2021-06-30 Thread Werner Sembach
Add a new general drm property "active color format" which can be used by
graphic drivers to report the used color format back to userspace.

There was no way to check which color format was actually used on a given
monitor. To predict this reliably, one must know the exact capabilities of
the monitor, the GPU, and the connection used, as well as the default
behaviour of the driver in question (e.g. amdgpu prefers YCbCr 4:4:4 while
i915 prefers RGB). This property eliminates the guesswork on this point.

In the future, automatic color calibration for screens might also depend on
this information being available.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/drm_connector.c | 63 +
 include/drm/drm_connector.h |  9 +
 2 files changed, 72 insertions(+)

diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index 6461d00e8e49..075bdc08d5c3 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -889,6 +889,14 @@ static const struct drm_prop_enum_list 
drm_dp_subconnector_enum_list[] = {
{ DRM_MODE_SUBCONNECTOR_Native,  "Native"}, /* DP */
 };
 
+static const struct drm_prop_enum_list drm_active_color_format_enum_list[] = {
+   { 0, "not applicable" },
+   { DRM_COLOR_FORMAT_RGB444, "rgb" },
+   { DRM_COLOR_FORMAT_YCRCB444, "ycbcr444" },
+   { DRM_COLOR_FORMAT_YCRCB422, "ycbcr422" },
+   { DRM_COLOR_FORMAT_YCRCB420, "ycbcr420" },
+};
+
 DRM_ENUM_NAME_FN(drm_get_dp_subconnector_name,
 drm_dp_subconnector_enum_list)
 
@@ -1206,6 +1214,15 @@ static const struct drm_prop_enum_list dp_colorspaces[] 
= {
  * Drivers shall use drm_connector_attach_active_bpc_property() to install
  * this property. A value of 0 means "not applicable".
  *
+ * active color format:
+ * This read-only property tells userspace the color format actually used
+ * by the hardware display engine "on the cable" on a connector. The chosen
+ * value depends on hardware capabilities, both display engine and
+ * connected monitor. Drivers shall use
+ * drm_connector_attach_active_color_format_property() to install this
+ * property. Possible values are "not applicable", "rgb", "ycbcr444",
+ * "ycbcr422", and "ycbcr420".
+ *
  * Connectors also have one standardized atomic property:
  *
  * CRTC_ID:
@@ -2205,6 +,52 @@ void drm_connector_set_active_bpc_property(struct 
drm_connector *connector, int
 }
 EXPORT_SYMBOL(drm_connector_set_active_bpc_property);
 
+/**
+ * drm_connector_attach_active_color_format_property - attach "active color 
format" property
+ * @connector: connector to attach active color format property on.
+ *
+ * This is used to check the applied color format on a connector.
+ *
+ * Returns:
+ * Zero on success, negative errno on failure.
+ */
+int drm_connector_attach_active_color_format_property(struct drm_connector 
*connector)
+{
+   struct drm_device *dev = connector->dev;
+   struct drm_property *prop;
+
+   if (!connector->active_color_format_property) {
+   prop = drm_property_create_enum(dev, DRM_MODE_PROP_IMMUTABLE, 
"active color format",
+   
drm_active_color_format_enum_list,
+   
ARRAY_SIZE(drm_active_color_format_enum_list));
+   if (!prop)
+   return -ENOMEM;
+
+   connector->active_color_format_property = prop;
+   }
+
+   drm_object_attach_property(&connector->base, prop, 0);
+
+   return 0;
+}
+EXPORT_SYMBOL(drm_connector_attach_active_color_format_property);
+
+/**
+ * drm_connector_set_active_color_format_property - sets the active color 
format property for a
+ * connector
+ * @connector: drm connector
+ * @active_color_format: color format for the connector currently active "on 
the cable"
+ *
+ * Should be used by atomic drivers to update the active color format over a 
connector.
+ */
+void drm_connector_set_active_color_format_property(struct drm_connector 
*connector,
+   u32 active_color_format)
+{
+   drm_object_property_set_value(&connector->base, connector->active_color_format_property,
+ active_color_format);
+}
+EXPORT_SYMBOL(drm_connector_set_active_color_format_property);
+
 /**
  * drm_connector_attach_hdr_output_metadata_property - attach 
"HDR_OUTPUT_METADA" property
  * @connector: connector to attach the property on.
diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
index eee86de62a5f..8a5197f14e87 100644
--- a/include/drm/drm_connector.h
+++ b/include/drm/drm_connector.h
@@ -1386,6 +1386,12 @@ struct drm_connector {
 */
struct drm_property *active_bpc_property;
 
+   /**
+* @active_color_format_property: Default connector property for the
+* active color format to be driven out of the connector.
+*/
+   

[PATCH v5 11/17] drm/i915/display: Add handling for new "active color range" property

2021-06-30 Thread Werner Sembach
This commit implements the "active color range" drm property for the Intel
GPU driver.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/i915/display/intel_display.c | 7 +++
 drivers/gpu/drm/i915/display/intel_dp.c  | 1 +
 drivers/gpu/drm/i915/display/intel_dp_mst.c  | 5 +
 drivers/gpu/drm/i915/display/intel_hdmi.c| 1 +
 4 files changed, 14 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index be38f7148285..b0bcb42a97fc 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -10940,9 +10940,16 @@ static int intel_atomic_commit(struct drm_device *dev,

drm_connector_set_active_color_format_property(connector,

convert_intel_output_format_into_drm_color_format(
new_crtc_state->output_format));
+   drm_connector_set_active_color_range_property(connector,
+   new_crtc_state->limited_color_range ||
+   new_crtc_state->output_format != 
INTEL_OUTPUT_FORMAT_RGB ?
+   DRM_MODE_COLOR_RANGE_LIMITED_16_235 :
+   DRM_MODE_COLOR_RANGE_FULL);
} else {
drm_connector_set_active_bpc_property(connector, 0);

drm_connector_set_active_color_format_property(connector, 0);
+   drm_connector_set_active_color_range_property(connector,
+ 
DRM_MODE_COLOR_RANGE_UNSET);
}
}
 
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c 
b/drivers/gpu/drm/i915/display/intel_dp.c
index 6b85bcdeb238..fd33f753244d 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -4688,6 +4688,7 @@ intel_dp_add_properties(struct intel_dp *intel_dp, struct 
drm_connector *connect
intel_attach_force_audio_property(connector);
 
intel_attach_broadcast_rgb_property(connector);
+   drm_connector_attach_active_color_range_property(connector);
if (HAS_GMCH(dev_priv)) {
drm_connector_attach_max_bpc_property(connector, 6, 10);
drm_connector_attach_active_bpc_property(connector, 6, 10);
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c 
b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index 3e4237df3360..cb876175258f 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -861,6 +861,11 @@ static struct drm_connector 
*intel_dp_add_mst_connector(struct drm_dp_mst_topolo
if (connector->active_color_format_property)
drm_connector_attach_active_color_format_property(connector);
 
+   connector->active_color_range_property =
+   intel_dp->attached_connector->base.active_color_range_property;
+   if (connector->active_color_range_property)
+   drm_connector_attach_active_color_range_property(connector);
+
return connector;
 
 err:
diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c 
b/drivers/gpu/drm/i915/display/intel_hdmi.c
index 367aba57b55f..3ee25e0cc3b9 100644
--- a/drivers/gpu/drm/i915/display/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
@@ -2505,6 +2505,7 @@ intel_hdmi_add_properties(struct intel_hdmi *intel_hdmi, 
struct drm_connector *c
 
intel_attach_force_audio_property(connector);
intel_attach_broadcast_rgb_property(connector);
+   drm_connector_attach_active_color_range_property(connector);
intel_attach_aspect_ratio_property(connector);
 
intel_attach_hdmi_colorspace_property(connector);
-- 
2.25.1
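DRM_MODE_COLOR_RANGE_LIMITED_16_235, as reported by this patch, refers to the broadcast-legal quantization range, where 8-bit code 16 is black and 235 is nominal peak white. The arithmetic behind the two range values can be sketched as follows (the helper name is made up for illustration; this is not kernel code):

```c
#include <assert.h>

/* Map an 8-bit full-range code (0-255) onto the limited "16:235" range,
 * i.e. the scaling a sink expecting limited quantization applies to luma. */
static int full_to_limited_8bit(int full)
{
    /* scale 0..255 onto 0..219, then offset by 16; integer rounding */
    return 16 + (full * 219 + 127) / 255;
}
```

The property only reports which of the two ranges is active on the cable; the scaling itself happens in the display hardware.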



[PATCH v5 09/17] drm/uAPI: Add "active color range" drm property as feedback for userspace

2021-06-30 Thread Werner Sembach
Add a new general drm property "active color range" which can be used by
graphic drivers to report the used color range back to userspace.

There was no way to check which color range was actually used on a given
monitor. To predict this reliably, one must know the exact capabilities of
the monitor and the default behaviour of the driver in question. This
property eliminates the guesswork on this point.

In the future, automatic color calibration for screens might also depend on
this information being available.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/drm_connector.c | 61 +
 include/drm/drm_connector.h | 27 +++
 2 files changed, 88 insertions(+)

diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index 075bdc08d5c3..ebfdd17a7f59 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -897,6 +897,12 @@ static const struct drm_prop_enum_list 
drm_active_color_format_enum_list[] = {
{ DRM_COLOR_FORMAT_YCRCB420, "ycbcr420" },
 };
 
+static const struct drm_prop_enum_list drm_active_color_range_enum_list[] = {
+   { DRM_MODE_COLOR_RANGE_UNSET, "Not Applicable" },
+   { DRM_MODE_COLOR_RANGE_FULL, "Full" },
+   { DRM_MODE_COLOR_RANGE_LIMITED_16_235, "Limited 16:235" },
+};
+
 DRM_ENUM_NAME_FN(drm_get_dp_subconnector_name,
 drm_dp_subconnector_enum_list)
 
@@ -1223,6 +1229,15 @@ static const struct drm_prop_enum_list dp_colorspaces[] 
= {
  * property. Possible values are "not applicable", "rgb", "ycbcr444",
  * "ycbcr422", and "ycbcr420".
  *
+ * active color range:
+ * This read-only property tells userspace the color range actually used by
+ * the hardware display engine "on the cable" on a connector. The chosen
+ * value depends on hardware capabilities of the monitor and the used color
+ * format. Drivers shall use
+ * drm_connector_attach_active_color_range_property() to install this
+ * property. Possible values are "Not Applicable", "Full", and "Limited
+ * 16:235".
+ *
  * Connectors also have one standardized atomic property:
  *
  * CRTC_ID:
@@ -2268,6 +2283,52 @@ void 
drm_connector_set_active_color_format_property(struct drm_connector *connec
 }
 EXPORT_SYMBOL(drm_connector_set_active_color_format_property);
 
+/**
+ * drm_connector_attach_active_color_range_property - attach "active color 
range" property
+ * @connector: connector to attach active color range property on.
+ *
+ * This is used to check the applied color range on a connector.
+ *
+ * Returns:
+ * Zero on success, negative errno on failure.
+ */
+int drm_connector_attach_active_color_range_property(struct drm_connector 
*connector)
+{
+   struct drm_device *dev = connector->dev;
+   struct drm_property *prop;
+
+   if (!connector->active_color_range_property) {
+   prop = drm_property_create_enum(dev, DRM_MODE_PROP_IMMUTABLE, 
"active color range",
+   
drm_active_color_range_enum_list,
+   
ARRAY_SIZE(drm_active_color_range_enum_list));
+   if (!prop)
+   return -ENOMEM;
+
+   connector->active_color_range_property = prop;
+   }
+
+   drm_object_attach_property(&connector->base, prop, DRM_MODE_COLOR_RANGE_UNSET);
+
+   return 0;
+}
+EXPORT_SYMBOL(drm_connector_attach_active_color_range_property);
+
+/**
+ * drm_connector_set_active_color_range_property - sets the active color range 
property for a
+ * connector
+ * @connector: drm connector
+ * @active_color_range: color range for the connector currently active "on the 
cable"
+ *
+ * Should be used by atomic drivers to update the active color range over a 
connector.
+ */
+void drm_connector_set_active_color_range_property(struct drm_connector 
*connector,
+  enum drm_mode_color_range 
active_color_range)
+{
+   drm_object_property_set_value(&connector->base, connector->active_color_range_property,
+ active_color_range);
+}
+EXPORT_SYMBOL(drm_connector_set_active_color_range_property);
+
 /**
  * drm_connector_attach_hdr_output_metadata_property - attach 
"HDR_OUTPUT_METADA" property
  * @connector: connector to attach the property on.
diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
index 8a5197f14e87..5ef4bb270f71 100644
--- a/include/drm/drm_connector.h
+++ b/include/drm/drm_connector.h
@@ -648,6 +648,24 @@ struct drm_tv_connector_state {
unsigned int hue;
 };
 
+/**
+ * enum drm_mode_color_range - color_range info for _connector
+ *
+ * This enum is used to represent full or limited color range on the display
+ * connector signal.
+ *
+ * @DRM_MODE_COLOR_RANGE_UNSET: Color range is 
unspecified/default.
+ * @DRM_MODE_COLOR_RANGE_FULL:  Color range is full range, 0-255 for
+ * 

[PATCH v5 01/17] drm/amd/display: Remove unnecessary SIGNAL_TYPE_HDMI_TYPE_A check

2021-06-30 Thread Werner Sembach
Remove unnecessary SIGNAL_TYPE_HDMI_TYPE_A check that was performed in the
drm_mode_is_420_only() case, but not in the drm_mode_is_420_also() &&
force_yuv420_output case.

Without further knowledge of whether YCbCr 4:2:0 is supported outside of
HDMI, there is no reason to use RGB when the display reports
drm_mode_is_420_only(), even on a non-HDMI connection.

This patch also moves both checks into the same if-case, which eliminates
an extra else-if branch.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 10f878910e55..e081dd3ffb5f 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -5348,10 +5348,7 @@ static void fill_stream_properties_from_drm_display_mode(
timing_out->v_border_bottom = 0;
/* TODO: un-hardcode */
if (drm_mode_is_420_only(info, mode_in)
-   && stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
-   timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420;
-   else if (drm_mode_is_420_also(info, mode_in)
-   && aconnector->force_yuv420_output)
+   || (drm_mode_is_420_also(info, mode_in) && 
aconnector->force_yuv420_output))
timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420;
else if ((connector->display_info.color_formats & 
DRM_COLOR_FORMAT_YCRCB444)
&& stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
-- 
2.25.1



[PATCH v5 03/17] drm/uAPI: Add "active bpc" as feedback channel for "max bpc" drm property

2021-06-30 Thread Werner Sembach
Add a new general drm property "active bpc" which can be used by graphic
drivers to report the applied bit depth per pixel color back to userspace.

While "max bpc" can be used to change the color depth, there was no way to
check which one was actually used. While in theory the driver chooses the
best/highest color depth within the max bpc setting, a user might not be
fully aware of what their hardware is or isn't capable of. This is meant as
a quick way to double-check the setup.

In the future, automatic color calibration for screens might also depend on
this information being available.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/drm_connector.c | 53 +
 include/drm/drm_connector.h |  8 +
 2 files changed, 61 insertions(+)

diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index da39e7ff6965..6461d00e8e49 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -1197,6 +1197,15 @@ static const struct drm_prop_enum_list dp_colorspaces[] 
= {
  * drm_connector_attach_max_bpc_property() to create and attach the
  * property to the connector during initialization.
  *
+ * active bpc:
+ * This read-only range property tells userspace the pixel color bit depth
+ * actually used by the hardware display engine "on the cable" on a
+ * connector. This means after display stream compression and dithering
+ * done by the GPU. The chosen value depends on hardware capabilities, both
+ * display engine and connected monitor, and the "max bpc" property.
+ * Drivers shall use drm_connector_attach_active_bpc_property() to install
+ * this property. A value of 0 means "not applicable".
+ *
  * Connectors also have one standardized atomic property:
  *
  * CRTC_ID:
@@ -2152,6 +2161,50 @@ int drm_connector_attach_max_bpc_property(struct 
drm_connector *connector,
 }
 EXPORT_SYMBOL(drm_connector_attach_max_bpc_property);
 
+/**
+ * drm_connector_attach_active_bpc_property - attach "active bpc" property
+ * @connector: connector to attach active bpc property on.
+ * @min: The minimum bit depth supported by the connector.
+ * @max: The maximum bit depth supported by the connector.
+ *
+ * This is used to check the applied bit depth on a connector.
+ *
+ * Returns:
+ * Zero on success, negative errno on failure.
+ */
+int drm_connector_attach_active_bpc_property(struct drm_connector *connector, 
int min, int max)
+{
+   struct drm_device *dev = connector->dev;
+   struct drm_property *prop;
+
+   if (!connector->active_bpc_property) {
+   prop = drm_property_create_range(dev, DRM_MODE_PROP_IMMUTABLE, 
"active bpc",
+min, max);
+   if (!prop)
+   return -ENOMEM;
+
+   connector->active_bpc_property = prop;
+   }
+
+   drm_object_attach_property(&connector->base, prop, 0);
+
+   return 0;
+}
+EXPORT_SYMBOL(drm_connector_attach_active_bpc_property);
+
+/**
+ * drm_connector_set_active_bpc_property - sets the active bits per color 
property for a connector
+ * @connector: drm connector
+ * @active_bpc: bits per color for the connector currently active "on the 
cable"
+ *
+ * Should be used by atomic drivers to update the active bits per color over a 
connector.
+ */
+void drm_connector_set_active_bpc_property(struct drm_connector *connector, 
int active_bpc)
+{
+   drm_object_property_set_value(&connector->base, connector->active_bpc_property, active_bpc);
+}
+EXPORT_SYMBOL(drm_connector_set_active_bpc_property);
+
 /**
  * drm_connector_attach_hdr_output_metadata_property - attach 
"HDR_OUTPUT_METADA" property
  * @connector: connector to attach the property on.
diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
index 714d1a01c065..eee86de62a5f 100644
--- a/include/drm/drm_connector.h
+++ b/include/drm/drm_connector.h
@@ -1380,6 +1380,12 @@ struct drm_connector {
 */
struct drm_property *max_bpc_property;
 
+   /**
+* @active_bpc_property: Default connector property for the active bpc
+* to be driven out of the connector.
+*/
+   struct drm_property *active_bpc_property;
+
 #define DRM_CONNECTOR_POLL_HPD (1 << 0)
 #define DRM_CONNECTOR_POLL_CONNECT (1 << 1)
 #define DRM_CONNECTOR_POLL_DISCONNECT (1 << 2)
@@ -1702,6 +1708,8 @@ int drm_connector_set_panel_orientation_with_quirk(
int width, int height);
 int drm_connector_attach_max_bpc_property(struct drm_connector *connector,
  int min, int max);
+int drm_connector_attach_active_bpc_property(struct drm_connector *connector, 
int min, int max);
+void drm_connector_set_active_bpc_property(struct drm_connector *connector, 
int active_bpc);
 
 /**
  * struct drm_tile_group - Tile group metadata
-- 
2.25.1


[PATCH v5 08/17] drm/i915/display: Add handling for new "active color format" property

2021-06-30 Thread Werner Sembach
This commit implements the "active color format" drm property for the Intel
GPU driver.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/i915/display/intel_display.c | 22 +++-
 drivers/gpu/drm/i915/display/intel_dp.c  |  2 ++
 drivers/gpu/drm/i915/display/intel_dp_mst.c  |  5 +
 drivers/gpu/drm/i915/display/intel_hdmi.c|  1 +
 4 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index 1b63d1404d06..be38f7148285 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -10609,6 +10609,21 @@ static void 
intel_atomic_prepare_plane_clear_colors(struct intel_atomic_state *s
}
 }
 
+static int convert_intel_output_format_into_drm_color_format(enum 
intel_output_format output_format)
+{
+   switch (output_format) {
+   case INTEL_OUTPUT_FORMAT_RGB:
+   return DRM_COLOR_FORMAT_RGB444;
+   case INTEL_OUTPUT_FORMAT_YCBCR420:
+   return DRM_COLOR_FORMAT_YCRCB420;
+   case INTEL_OUTPUT_FORMAT_YCBCR444:
+   return DRM_COLOR_FORMAT_YCRCB444;
+   default:
+   break;
+   }
+   return 0;
+}
+
 static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 {
struct drm_device *dev = state->base.dev;
@@ -10922,8 +10937,13 @@ static int intel_atomic_commit(struct drm_device *dev,
new_crtc_state->dsc.compression_enable ?
new_crtc_state->dsc.compressed_bpp / 3 :
new_crtc_state->pipe_bpp / 3);
-   } else
+   
drm_connector_set_active_color_format_property(connector,
+   
convert_intel_output_format_into_drm_color_format(
+   new_crtc_state->output_format));
+   } else {
drm_connector_set_active_bpc_property(connector, 0);
+   
drm_connector_set_active_color_format_property(connector, 0);
+   }
}
 
drm_atomic_state_get(&state->base);
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c 
b/drivers/gpu/drm/i915/display/intel_dp.c
index 815bc313b954..6b85bcdeb238 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -4691,9 +4691,11 @@ intel_dp_add_properties(struct intel_dp *intel_dp, 
struct drm_connector *connect
if (HAS_GMCH(dev_priv)) {
drm_connector_attach_max_bpc_property(connector, 6, 10);
drm_connector_attach_active_bpc_property(connector, 6, 10);
+   drm_connector_attach_active_color_format_property(connector);
} else if (DISPLAY_VER(dev_priv) >= 5) {
drm_connector_attach_max_bpc_property(connector, 6, 12);
drm_connector_attach_active_bpc_property(connector, 6, 12);
+   drm_connector_attach_active_color_format_property(connector);
}
 
/* Register HDMI colorspace for case of lspcon */
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c 
b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index 16bfc59570a5..3e4237df3360 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -856,6 +856,11 @@ static struct drm_connector 
*intel_dp_add_mst_connector(struct drm_dp_mst_topolo
if (connector->active_bpc_property)
drm_connector_attach_active_bpc_property(connector, 6, 12);
 
+   connector->active_color_format_property =
+   intel_dp->attached_connector->base.active_color_format_property;
+   if (connector->active_color_format_property)
+   drm_connector_attach_active_color_format_property(connector);
+
return connector;
 
 err:
diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c 
b/drivers/gpu/drm/i915/display/intel_hdmi.c
index 9160e21ac9d6..367aba57b55f 100644
--- a/drivers/gpu/drm/i915/display/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
@@ -2516,6 +2516,7 @@ intel_hdmi_add_properties(struct intel_hdmi *intel_hdmi, 
struct drm_connector *c
if (!HAS_GMCH(dev_priv)) {
drm_connector_attach_max_bpc_property(connector, 8, 12);
drm_connector_attach_active_bpc_property(connector, 8, 12);
+   drm_connector_attach_active_color_format_property(connector);
}
 }
 
-- 
2.25.1
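Whatever value the driver reports via drm_connector_set_active_color_format_property(), userspace sees one of the names registered for the enum property. A minimal model of that value-to-name lookup (the macro values mirror the DRM_COLOR_FORMAT_* bits, but the table and helper here are illustrative, not libdrm code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Values mirror the DRM_COLOR_FORMAT_* bits carried by the property. */
#define COLOR_FORMAT_RGB444   (1 << 0)
#define COLOR_FORMAT_YCBCR444 (1 << 1)
#define COLOR_FORMAT_YCBCR422 (1 << 2)
#define COLOR_FORMAT_YCBCR420 (1 << 3)

struct prop_enum { unsigned int value; const char *name; };

/* Same pairs as drm_active_color_format_enum_list in patch 06. */
static const struct prop_enum active_color_format_list[] = {
    { 0, "not applicable" },
    { COLOR_FORMAT_RGB444, "rgb" },
    { COLOR_FORMAT_YCBCR444, "ycbcr444" },
    { COLOR_FORMAT_YCBCR422, "ycbcr422" },
    { COLOR_FORMAT_YCBCR420, "ycbcr420" },
};

/* Return the enum name for a property value, or NULL if unregistered. */
static const char *color_format_name(unsigned int value)
{
    for (size_t i = 0; i < sizeof(active_color_format_list) /
                           sizeof(active_color_format_list[0]); i++)
        if (active_color_format_list[i].value == value)
            return active_color_format_list[i].name;
    return NULL;
}
```

Because the property is an enum, only the exact registered values are valid; a combination of bits has no name.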



[PATCH v5 05/17] drm/i915/display: Add handling for new "active bpc" property

2021-06-30 Thread Werner Sembach
This commit implements the "active bpc" drm property for the Intel GPU
driver.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/i915/display/intel_display.c | 19 +++
 drivers/gpu/drm/i915/display/intel_dp.c  |  7 +--
 drivers/gpu/drm/i915/display/intel_dp_mst.c  |  5 +
 drivers/gpu/drm/i915/display/intel_hdmi.c|  4 +++-
 4 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index 6be1b31af07b..1b63d1404d06 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -10839,6 +10839,9 @@ static int intel_atomic_commit(struct drm_device *dev,
 {
struct intel_atomic_state *state = to_intel_atomic_state(_state);
struct drm_i915_private *dev_priv = to_i915(dev);
+   struct drm_connector *connector;
+   struct drm_connector_state *new_conn_state;
+   int i;
int ret = 0;
 
state->wakeref = intel_runtime_pm_get(&dev_priv->runtime_pm);
@@ -10907,6 +10910,22 @@ static int intel_atomic_commit(struct drm_device *dev,
intel_shared_dpll_swap_state(state);
intel_atomic_track_fbs(state);
 
+   /* Extract information from crtc to communicate it to userspace as 
connector properties */
+   for_each_new_connector_in_state(&state->base, connector, new_conn_state, i) {
+   struct intel_crtc *crtc = to_intel_crtc(new_conn_state->crtc);
+
+   if (crtc) {
+   struct intel_crtc_state *new_crtc_state =
+   intel_atomic_get_new_crtc_state(state, crtc);
+
+   drm_connector_set_active_bpc_property(connector,
+   new_crtc_state->dsc.compression_enable ?
+   new_crtc_state->dsc.compressed_bpp / 3 :
+   new_crtc_state->pipe_bpp / 3);
+   } else
+   drm_connector_set_active_bpc_property(connector, 0);
+   }
+
drm_atomic_state_get(&state->base);
INIT_WORK(&state->base.commit_work, intel_atomic_commit_work);
 
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c 
b/drivers/gpu/drm/i915/display/intel_dp.c
index 6cc03b9e4321..815bc313b954 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -4688,10 +4688,13 @@ intel_dp_add_properties(struct intel_dp *intel_dp, 
struct drm_connector *connect
intel_attach_force_audio_property(connector);
 
intel_attach_broadcast_rgb_property(connector);
-   if (HAS_GMCH(dev_priv))
+   if (HAS_GMCH(dev_priv)) {
drm_connector_attach_max_bpc_property(connector, 6, 10);
-   else if (DISPLAY_VER(dev_priv) >= 5)
+   drm_connector_attach_active_bpc_property(connector, 6, 10);
+   } else if (DISPLAY_VER(dev_priv) >= 5) {
drm_connector_attach_max_bpc_property(connector, 6, 12);
+   drm_connector_attach_active_bpc_property(connector, 6, 12);
+   }
 
/* Register HDMI colorspace for case of lspcon */
if (intel_bios_is_lspcon_present(dev_priv, port)) {
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c 
b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index b170e272bdee..16bfc59570a5 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -851,6 +851,11 @@ static struct drm_connector 
*intel_dp_add_mst_connector(struct drm_dp_mst_topolo
if (connector->max_bpc_property)
drm_connector_attach_max_bpc_property(connector, 6, 12);
 
+   connector->active_bpc_property =
+   intel_dp->attached_connector->base.active_bpc_property;
+   if (connector->active_bpc_property)
+   drm_connector_attach_active_bpc_property(connector, 6, 12);
+
return connector;
 
 err:
diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c 
b/drivers/gpu/drm/i915/display/intel_hdmi.c
index 7e51c98c475e..9160e21ac9d6 100644
--- a/drivers/gpu/drm/i915/display/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
@@ -2513,8 +2513,10 @@ intel_hdmi_add_properties(struct intel_hdmi *intel_hdmi, 
struct drm_connector *c
if (DISPLAY_VER(dev_priv) >= 10)
drm_connector_attach_hdr_output_metadata_property(connector);
 
-   if (!HAS_GMCH(dev_priv))
+   if (!HAS_GMCH(dev_priv)) {
drm_connector_attach_max_bpc_property(connector, 8, 12);
+   drm_connector_attach_active_bpc_property(connector, 8, 12);
+   }
 }
 
 /*
-- 
2.25.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v5 02/17] drm/amd/display: Add missing cases convert_dc_color_depth_into_bpc

2021-06-30 Thread Werner Sembach
convert_dc_color_depth_into_bpc(), which converts the enum dc_color_depth to
an integer, was missing the cases for COLOR_DEPTH_999 and
COLOR_DEPTH_111111.

Signed-off-by: Werner Sembach 
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index e081dd3ffb5f..f4abb5f215d1 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -6700,6 +6700,10 @@ static int convert_dc_color_depth_into_bpc (enum 
dc_color_depth display_color_de
return 14;
case COLOR_DEPTH_161616:
return 16;
+   case COLOR_DEPTH_999:
+   return 9;
+   case COLOR_DEPTH_111111:
+   return 11;
default:
break;
}
-- 
2.25.1

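For readers unfamiliar with the DC enum, the completed mapping can be sketched as a standalone function. The enum values below are illustrative stand-ins modeled on DC's `enum dc_color_depth`, not the kernel definitions:

```c
#include <assert.h>

/* Illustrative stand-in for DC's enum dc_color_depth. */
enum dc_color_depth {
    COLOR_DEPTH_666,
    COLOR_DEPTH_888,
    COLOR_DEPTH_999,
    COLOR_DEPTH_101010,
    COLOR_DEPTH_111111,
    COLOR_DEPTH_121212,
    COLOR_DEPTH_141414,
    COLOR_DEPTH_161616,
};

/* Convert a color-depth enum to bits per channel; 0 means unknown. */
int color_depth_to_bpc(enum dc_color_depth depth)
{
    switch (depth) {
    case COLOR_DEPTH_666:    return 6;
    case COLOR_DEPTH_888:    return 8;
    case COLOR_DEPTH_999:    return 9;   /* case added by this patch */
    case COLOR_DEPTH_101010: return 10;
    case COLOR_DEPTH_111111: return 11;  /* case added by this patch */
    case COLOR_DEPTH_121212: return 12;
    case COLOR_DEPTH_141414: return 14;
    case COLOR_DEPTH_161616: return 16;
    }
    return 0;
}
```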


[PATCH v5 00/17] New uAPI drm properties for color management

2021-06-30 Thread Werner Sembach
Implementation of https://lkml.org/lkml/2021/5/12/764, now feature complete
albeit not fully tested.

I have now corrected the DSC behavior, but still have no way to test it.

Exact dithering behavior remains a mystery, so in case dithering is active it's
not 100% clear what "active bpc" means, or where the "max bpc" limit is applied.

I have no DP MST splitter at hand. I tried my best to not break anything,
but it would be very helpful if someone who has one could test it.

Things on my TODO list:
- add "min bpc" property
- rewrite "preferred color format" to "force color format"
- make "Broadcast RGB" only affect RGB on AMD too
- remove unreachable enums of "active/preferred/force color format"


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


[PATCH v5 04/17] drm/amd/display: Add handling for new "active bpc" property

2021-06-30 Thread Werner Sembach
This commit implements the "active bpc" drm property for the AMD GPU
driver.

Signed-off-by: Werner Sembach 
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 24 ++-
 .../display/amdgpu_dm/amdgpu_dm_mst_types.c   |  4 
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index f4abb5f215d1..d984de82ae63 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -7708,8 +7708,10 @@ void amdgpu_dm_connector_init_helper(struct 
amdgpu_display_manager *dm,
adev->mode_info.underscan_vborder_property,
0);
 
-   if (!aconnector->mst_port)
+   if (!aconnector->mst_port) {
drm_connector_attach_max_bpc_property(&aconnector->base, 8, 16);
+   drm_connector_attach_active_bpc_property(&aconnector->base, 8, 16);
+   }
 
/* This defaults to the max in the range, but we want 8bpc for non-edp. 
*/
aconnector->base.state->max_bpc = (connector_type == 
DRM_MODE_CONNECTOR_eDP) ? 16 : 8;
@@ -9078,6 +9080,26 @@ static void amdgpu_dm_atomic_commit_tail(struct 
drm_atomic_state *state)
mutex_unlock(&dm->dc_lock);
}
 
+   /* Extract information from crtc to communicate it to userspace as 
connector properties */
+   for_each_new_connector_in_state(state, connector, new_con_state, i) {
+   struct drm_crtc *crtc = new_con_state->crtc;
+   struct dc_stream_state *stream;
+
+   if (crtc) {
+   new_crtc_state = drm_atomic_get_new_crtc_state(state, 
crtc);
+   dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+   stream = dm_new_crtc_state->stream;
+
+   if (stream)
+   drm_connector_set_active_bpc_property(connector,
+   stream->timing.flags.DSC ?
+   
stream->timing.dsc_cfg.bits_per_pixel / 16 / 3 :
+   convert_dc_color_depth_into_bpc(
+   
stream->timing.display_color_depth));
+   } else
+   drm_connector_set_active_bpc_property(connector, 0);
+   }
+
/* Count number of newly disabled CRTCs for dropping PM refs later. */
for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state,
  new_crtc_state, i) {
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index 5568d4e518e6..0cf38743ec47 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -409,6 +409,10 @@ dm_dp_add_mst_connector(struct drm_dp_mst_topology_mgr 
*mgr,
if (connector->max_bpc_property)
drm_connector_attach_max_bpc_property(connector, 8, 16);
 
+   connector->active_bpc_property = master->base.active_bpc_property;
+   if (connector->active_bpc_property)
+   drm_connector_attach_active_bpc_property(&aconnector->base, 8, 16);
+
connector->vrr_capable_property = master->base.vrr_capable_property;
if (connector->vrr_capable_property)
drm_connector_attach_vrr_capable_property(connector);
-- 
2.25.1

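The DSC branch above computes the per-channel depth as `bits_per_pixel / 16 / 3`: DC stores the compressed rate in 1/16-bpp fixed-point units (an assumption worth double-checking against `dc_dsc_config`), and dividing by 3 converts bits per pixel to bits per channel for three-channel RGB. A minimal userspace sketch of that arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the "active bpc" derivation used in the patch above.
 * Assumption: dsc_bits_per_pixel is fixed point in 1/16-bpp units,
 * as dsc_cfg.bits_per_pixel appears to be in DC.
 */
int active_bpc(int dsc_enabled, uint32_t dsc_bits_per_pixel, int pipe_bpp)
{
    if (dsc_enabled)
        return dsc_bits_per_pixel / 16 / 3; /* fixed point -> bpp -> bpc */
    return pipe_bpp / 3;                    /* 3 channels per pixel */
}
```

For example, a 12-bpp compressed stream is stored as 192 (12 × 16) and reports 4 "active bpc", while an uncompressed 30-bpp pipe reports 10.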


amdgpu raven2 hang on gst-plugins-base testsuite

2021-06-30 Thread lanodan
Hello,

 Running the gst-plugins-base testsuite makes my Lenovo T459 hang entirely, to
the point where even SSH doesn't work.
 I also get quite random glitches while using mpv, especially with HTTP streams
(NFS is fine) and while browsing the web (webkit-gtk, firefox, chromium, …), but
in this case the driver manages to reset after a bit.

APU: AMD Ryzen 5 3500U Pro w/ Radeon Vega Mobile Gfx
Systems:
- Gentoo Linux (linux 5.10.41 + mesa 21.1.3)
- Alpine Linux edge (linux-lts 5.10.46)
Relevant lspci output:
06:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] 
Picasso (rev d2)
06:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] 
Raven/Raven2/Fenghuang HDMI/DP Audio Controller
06:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h 
(Models 10h-1fh) Platform Security Processor
06:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Raven USB 3.1
06:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Raven USB 3.1
06:00.5 Multimedia controller: Advanced Micro Devices, Inc. [AMD] 
Raven/Raven2/FireFlight/Renoir Audio Processor
06:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 
10h-1fh) HD Audio Controller


 I had issues with mesa starting on 20.1.10, as seen in 
https://gitlab.freedesktop.org/mesa/mesa/-/issues/4100
but even on that version the gstreamer testsuite makes the display crash;
the whole system doesn't hang anymore, though I couldn't recover it.
I can send dmesg and other system logs to developers on request.

btw my desktop (AMD Ryzen 7 3700X + AMD Radeon RX5600XT) has a similar setup
and the gst-plugins-base testsuite just works fine.

Best regards,
Haelwenn (lanodan) Monnier


Re: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay (v4)

2021-06-30 Thread Jay Cornwall
On Wed, Jun 30, 2021, at 05:10, YuBiao Wang wrote:
> [Why]
> GPU timing counters are read via KIQ under sriov, which will introduce
> a delay.
> 
> [How]
> It could be directly read by MMIO.
> 
> v2: Add additional check to prevent carryover issue.
> v3: Only check for carryover for once to prevent performance issue.
> v4: Add comments of the rough frequency where carryover happens.
> 
> Signed-off-by: YuBiao Wang 
> Acked-by: Horace Chen 
> ---
>  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 13 ++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> index ff7e9f49040e..9355494002a1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> @@ -7609,7 +7609,7 @@ static int gfx_v10_0_soft_reset(void *handle)
>  
>  static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
>  {
> - uint64_t clock;
> + uint64_t clock, clock_lo, clock_hi, hi_check;
>  
>   amdgpu_gfx_off_ctrl(adev, false);

This clock can be read with gfxoff enabled.

>   mutex_lock(&adev->gfx.gpu_clock_mutex);

Is the mutex relevant with this clock? It doesn't snapshot like RLC.


[PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay (v4)

2021-06-30 Thread YuBiao Wang
[Why]
GPU timing counters are read via KIQ under sriov, which will introduce
a delay.

[How]
It could be directly read by MMIO.

v2: Add additional check to prevent carryover issue.
v3: Only check for carryover for once to prevent performance issue.
v4: Add comments of the rough frequency where carryover happens.

Signed-off-by: YuBiao Wang 
Acked-by: Horace Chen 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index ff7e9f49040e..9355494002a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -7609,7 +7609,7 @@ static int gfx_v10_0_soft_reset(void *handle)
 
 static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
 {
-   uint64_t clock;
+   uint64_t clock, clock_lo, clock_hi, hi_check;
 
amdgpu_gfx_off_ctrl(adev, false);
mutex_lock(&adev->gfx.gpu_clock_mutex);
@@ -7620,8 +7620,15 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct 
amdgpu_device *adev)
((uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER_Vangogh) << 32ULL);
break;
default:
-   clock = (uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER) |
-   ((uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER) << 32ULL);
+   clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER);
+   clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER);
+   hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER);
+   /* Carryover happens every 4 Giga time cycles counts which is 
roughly 42 secs */
+   if (hi_check != clock_hi) {
+   clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER);
+   clock_hi = hi_check;
+   }
+   clock = (uint64_t)clock_lo | ((uint64_t)clock_hi << 32ULL);
break;
}
mutex_unlock(&adev->gfx.gpu_clock_mutex);
-- 
2.25.1

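The guard in this patch is the classic high/low/high pattern for reading a 64-bit counter exposed as two 32-bit registers: if the upper half changed between the two reads, the lower half wrapped in between (at the roughly 100 MHz implied by the comment, 2^32 counts take about 42 s), so the low word must be re-read. A self-contained sketch with the registers mocked as plain functions:

```c
#include <assert.h>
#include <stdint.h>

/* Mock "register file" standing in for GOLDEN_TSC_COUNT_{LOWER,UPPER}. */
uint64_t mock_counter;
static uint32_t read_lo(void) { return (uint32_t)mock_counter; }
static uint32_t read_hi(void) { return (uint32_t)(mock_counter >> 32); }

/*
 * Read a 64-bit counter from two 32-bit halves without tearing:
 * if the high half rolled over between the two reads, the low half
 * was sampled across a wrap and must be read again.
 */
uint64_t read_counter64(void)
{
    uint32_t hi = read_hi();
    uint32_t lo = read_lo();
    uint32_t hi_check = read_hi();

    if (hi_check != hi) {   /* low word wrapped between reads */
        lo = read_lo();
        hi = hi_check;
    }
    return ((uint64_t)hi << 32) | lo;
}
```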


Re: [PATCH v3] drm/radeon: Fix NULL dereference when updating memory stats

2021-06-30 Thread Christian König

Am 24.06.21 um 06:51 schrieb Mikel Rychliski:

radeon_ttm_bo_destroy() is attempting to access the resource object to
update memory counters. However, the resource object is already freed when
ttm calls this function via the destroy callback. This causes an oops when
a bo is freed:

BUG: kernel NULL pointer dereference, address: 0010
RIP: 0010:radeon_ttm_bo_destroy+0x2c/0x100 [radeon]
Call Trace:
 radeon_bo_unref+0x1a/0x30 [radeon]
 radeon_gem_object_free+0x33/0x50 [radeon]
 drm_gem_object_release_handle+0x69/0x70 [drm]
 drm_gem_handle_delete+0x62/0xa0 [drm]
 ? drm_mode_destroy_dumb+0x40/0x40 [drm]
 drm_ioctl_kernel+0xb2/0xf0 [drm]
 drm_ioctl+0x30a/0x3c0 [drm]
 ? drm_mode_destroy_dumb+0x40/0x40 [drm]
 radeon_drm_ioctl+0x49/0x80 [radeon]
 __x64_sys_ioctl+0x8e/0xd0

Avoid the issue by updating the counters in the delete_mem_notify callback
instead. Also, fix memory statistic updating in radeon_bo_move() to
identify the source type correctly. The source type needs to be saved
before the move, because the moved from object may be altered by the move.

Fixes: bfa3357ef9ab ("drm/ttm: allocate resource object instead of embedding it 
v2")
Signed-off-by: Mikel Rychliski 


So, back from vacation. I've reviewed and pushed the patch to drm-misc-next.

Thanks for the help,
Christian.


---
  drivers/gpu/drm/radeon/radeon_object.c | 29 -
  drivers/gpu/drm/radeon/radeon_object.h |  2 +-
  drivers/gpu/drm/radeon/radeon_ttm.c| 13 ++---
  3 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_object.c 
b/drivers/gpu/drm/radeon/radeon_object.c
index bfaaa3c969a3..56ede9d63b12 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -49,23 +49,23 @@ static void radeon_bo_clear_surface_reg(struct radeon_bo 
*bo);
   * function are calling it.
   */
  
-static void radeon_update_memory_usage(struct radeon_bo *bo,

-  unsigned mem_type, int sign)
+static void radeon_update_memory_usage(struct ttm_buffer_object *bo,
+  unsigned int mem_type, int sign)
  {
-   struct radeon_device *rdev = bo->rdev;
+   struct radeon_device *rdev = radeon_get_rdev(bo->bdev);
  
  	switch (mem_type) {

case TTM_PL_TT:
if (sign > 0)
-   atomic64_add(bo->tbo.base.size, &rdev->gtt_usage);
+   atomic64_add(bo->base.size, &rdev->gtt_usage);
else
-   atomic64_sub(bo->tbo.base.size, &rdev->gtt_usage);
+   atomic64_sub(bo->base.size, &rdev->gtt_usage);
break;
case TTM_PL_VRAM:
if (sign > 0)
-   atomic64_add(bo->tbo.base.size, &rdev->vram_usage);
+   atomic64_add(bo->base.size, &rdev->vram_usage);
else
-   atomic64_sub(bo->tbo.base.size, &rdev->vram_usage);
+   atomic64_sub(bo->base.size, &rdev->vram_usage);
break;
}
  }
@@ -76,8 +76,6 @@ static void radeon_ttm_bo_destroy(struct ttm_buffer_object 
*tbo)
  
  	bo = container_of(tbo, struct radeon_bo, tbo);
  
-	radeon_update_memory_usage(bo, bo->tbo.resource->mem_type, -1);

-
mutex_lock(&bo->rdev->gem.mutex);
list_del_init(&bo->list);
mutex_unlock(&bo->rdev->gem.mutex);
@@ -727,24 +725,21 @@ int radeon_bo_check_tiling(struct radeon_bo *bo, bool 
has_moved,
  }
  
  void radeon_bo_move_notify(struct ttm_buffer_object *bo,

-  bool evict,
+  unsigned int old_type,
   struct ttm_resource *new_mem)
  {
struct radeon_bo *rbo;
  
+	radeon_update_memory_usage(bo, old_type, -1);

+   if (new_mem)
+   radeon_update_memory_usage(bo, new_mem->mem_type, 1);
+
if (!radeon_ttm_bo_is_radeon_bo(bo))
return;
  
  	rbo = container_of(bo, struct radeon_bo, tbo);

radeon_bo_check_tiling(rbo, 0, 1);
radeon_vm_bo_invalidate(rbo->rdev, rbo);
-
-   /* update statistics */
-   if (!new_mem)
-   return;
-
-   radeon_update_memory_usage(rbo, bo->resource->mem_type, -1);
-   radeon_update_memory_usage(rbo, new_mem->mem_type, 1);
  }
  
  vm_fault_t radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo)

diff --git a/drivers/gpu/drm/radeon/radeon_object.h 
b/drivers/gpu/drm/radeon/radeon_object.h
index 1739c6a142cd..1afc7992ef91 100644
--- a/drivers/gpu/drm/radeon/radeon_object.h
+++ b/drivers/gpu/drm/radeon/radeon_object.h
@@ -161,7 +161,7 @@ extern void radeon_bo_get_tiling_flags(struct radeon_bo *bo,
  extern int radeon_bo_check_tiling(struct radeon_bo *bo, bool has_moved,
bool force_drop);
  extern void radeon_bo_move_notify(struct ttm_buffer_object *bo,
-
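The accounting scheme this patch fixes boils down to: on every move, subtract the buffer size from the counter of the placement it left (using the type saved before the move, since the move may alter the object) and add it to the counter of the placement it entered. A minimal userspace sketch with C11 atomics:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

enum mem_type { MEM_SYSTEM, MEM_GTT, MEM_VRAM, MEM_TYPE_COUNT };

/* Stand-ins for rdev->gtt_usage / rdev->vram_usage. */
atomic_uint_least64_t mem_usage[MEM_TYPE_COUNT];

/* Account a new allocation in its initial placement. */
void account_alloc(uint64_t size, int type)
{
    atomic_fetch_add(&mem_usage[type], size);
}

/*
 * Account a move: the old placement (captured before the move)
 * loses the buffer, the new placement gains it.  A new_type of -1
 * means the buffer was freed rather than moved.
 */
void account_move(uint64_t size, int old_type, int new_type)
{
    atomic_fetch_sub(&mem_usage[old_type], size);
    if (new_type >= 0)
        atomic_fetch_add(&mem_usage[new_type], size);
}
```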

Re: AMDGPU error: "[drm:amdgpu_dm_atomic_commit_tail [amdgpu]] *ERROR* Waiting for fences timed out!"

2021-06-30 Thread Ketsui
>I could be wrong. I can't remember what marketing names map to what
>asics. I can tell if you can get your dmesg output.
Here it is.


dmesg
Description: Binary data


Re: [PATCH V2] treewide: Add missing semicolons to __assign_str uses

2021-06-30 Thread Steven Rostedt
On Wed, 30 Jun 2021 04:28:39 -0700
Joe Perches  wrote:

> On Sat, 2021-06-12 at 08:42 -0700, Joe Perches wrote:
> > The __assign_str macro has an unusual ending semicolon but the vast
> > majority of uses of the macro already have semicolon termination.  
> 
> ping?
> 

I wasn't sure I was the one to take this. I can, as I can run tests on
it as well. I have some last minute fixes sent to me on something else,
and I can apply this along with them.

-- Steve


Re: [PATCH V2] treewide: Add missing semicolons to __assign_str uses

2021-06-30 Thread Joe Perches
On Sat, 2021-06-12 at 08:42 -0700, Joe Perches wrote:
> The __assign_str macro has an unusual ending semicolon but the vast
> majority of uses of the macro already have semicolon termination.

ping?




Re: [PATCH 1/2] drm/amdgpu: use xarray for storing pasid in vm

2021-06-30 Thread Das, Nirmoy


On 6/29/2021 7:40 PM, Christian König wrote:



Am 29.06.21 um 17:19 schrieb Nirmoy Das:

Replace idr with xarray as we actually need hash functionality.
Cleanup code related to vm pasid by adding helper function.

Signed-off-by: Nirmoy Das 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 149 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |   6 +-
  2 files changed, 73 insertions(+), 82 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c

index 63975bda8e76..fd92ff27931a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -87,6 +87,46 @@ struct amdgpu_prt_cb {
  struct dma_fence_cb cb;
  };
  +/**
+ * amdgpu_vm_set_pasid - manage pasid and vm ptr mapping
+ *
+ * @adev: amdgpu_device pointer
+ * @vm: amdgpu_vm pointer
+ * @pasid: requested pasid


Better write "the pasid the VM is using on this GPU".


+ *
+ * Each pasid associats with a vm pointer.


That is misleading. KFD most likely wants to associate the same pasid
with multiple VMs on different GPUs.


Better write "Set the pasid this VM is using on this GPU, can also be 
used to remove the pasid by passing in zero.".



OK, thanks for fixing it, I will update.




  This function can be used to
+ * create a new pasid,vm association or to remove an existing one. 
To remove an

+ * existing pasid,vm association, pass 0 as @pasid.
+ */
+int amdgpu_vm_set_pasid(struct amdgpu_device *adev, struct amdgpu_vm 
*vm,

+    unsigned long pasid)


"unsigned long pasid"? The pasid is either 16 or 20 bits IIRC.



I kept it as xarray's index. I will change it to u32 then.


Thanks,

Nirmoy



Regards,
Christian.


+{
+    int r;
+
+    if (vm->pasid == pasid)
+    return 0;
+
+    if (vm->pasid) {
+    r = xa_err(xa_erase_irq(&adev->vm_manager.pasids, vm->pasid));
+    if (r < 0)
+    return r;
+
+    vm->pasid = 0;
+    }
+
+    if (pasid) {
+    r = xa_err(xa_store_irq(&adev->vm_manager.pasids, pasid, vm,
+    GFP_KERNEL));
+    if (r < 0)
+    return r;
+
+    vm->pasid = pasid;
+    }
+
+
+    return 0;
+}
+
  /*
   * vm eviction_lock can be taken in MMU notifiers. Make sure no 
reclaim-FS

   * happens while holding this lock anywhere to prevent deadlocks when
@@ -2940,18 +2980,9 @@ int amdgpu_vm_init(struct amdgpu_device *adev, 
struct amdgpu_vm *vm, u32 pasid)

    amdgpu_bo_unreserve(vm->root.bo);
  -    if (pasid) {
-    unsigned long flags;
-
-    spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
-    r = idr_alloc(&adev->vm_manager.pasid_idr, vm, pasid, pasid + 1,
-  GFP_ATOMIC);
- spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
-    if (r < 0)
-    goto error_free_root;
-
-    vm->pasid = pasid;
-    }
+    r = amdgpu_vm_set_pasid(adev, vm, pasid);
+    if (r)
+    goto error_free_root;
    INIT_KFIFO(vm->faults);
  @@ -3039,18 +3070,15 @@ int amdgpu_vm_make_compute(struct 
amdgpu_device *adev, struct amdgpu_vm *vm,

  if (r)
  goto unreserve_bo;
  -    if (pasid) {
-    unsigned long flags;
-
-    spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
-    r = idr_alloc(&adev->vm_manager.pasid_idr, vm, pasid, pasid + 1,
-  GFP_ATOMIC);
- spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
+    /* Free the original amdgpu allocated pasid,
+ * will be replaced with kfd allocated pasid.
+ */
+    if (vm->pasid)
+    amdgpu_pasid_free(vm->pasid);
  -    if (r == -ENOSPC)
-    goto unreserve_bo;
-    r = 0;
-    }
+    r = amdgpu_vm_set_pasid(adev, vm, pasid);
+    if (r)
+    goto unreserve_bo;
    /* Check if PD needs to be reinitialized and do it before
   * changing any other state, in case it fails.
@@ -3061,7 +3089,7 @@ int amdgpu_vm_make_compute(struct amdgpu_device 
*adev, struct amdgpu_vm *vm,

 to_amdgpu_bo_vm(vm->root.bo),
 false);
  if (r)
-    goto free_idr;
+    goto free_pasid_entry;
  }
    /* Update VM state */
@@ -3078,7 +3106,7 @@ int amdgpu_vm_make_compute(struct amdgpu_device 
*adev, struct amdgpu_vm *vm,

  r = amdgpu_bo_sync_wait(vm->root.bo,
  AMDGPU_FENCE_OWNER_UNDEFINED, true);
  if (r)
-    goto free_idr;
+    goto free_pasid_entry;
    vm->update_funcs = _vm_cpu_funcs;
  } else {
@@ -3088,36 +3116,13 @@ int amdgpu_vm_make_compute(struct 
amdgpu_device *adev, struct amdgpu_vm *vm,

  vm->last_update = NULL;
  vm->is_compute_context = true;
  -    if (vm->pasid) {
-    unsigned long flags;
-
-    spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
-    idr_remove(&adev->vm_manager.pasid_idr, vm->pasid);
- spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
-
-    /* Free the original amdgpu allocated pasid
- * Will be replaced with kfd allocated pasid
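The semantics of the helper under review are simple: a VM holds at most one pasid, setting a new one first drops the old mapping, and passing 0 removes the association entirely. A minimal userspace sketch of those semantics, with a fixed-size table standing in for the kernel's xarray:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PASID 1024

struct vm { unsigned int pasid; };

/* Stand-in for adev->vm_manager.pasids (an xarray in the kernel). */
struct vm *pasid_table[MAX_PASID];

/* Set (or, with pasid == 0, clear) the pasid -> vm association. */
int vm_set_pasid(struct vm *vm, unsigned int pasid)
{
    if (vm->pasid == pasid)
        return 0;
    if (pasid >= MAX_PASID)
        return -1;

    if (vm->pasid) {              /* drop the old association first */
        pasid_table[vm->pasid] = NULL;
        vm->pasid = 0;
    }
    if (pasid) {                  /* install the new one */
        pasid_table[pasid] = vm;
        vm->pasid = pasid;
    }
    return 0;
}
```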

Re: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay (v4)

2021-06-30 Thread Christian König

Am 30.06.21 um 12:10 schrieb YuBiao Wang:

[Why]
GPU timing counters are read via KIQ under sriov, which will introduce
a delay.

[How]
It could be directly read by MMIO.

v2: Add additional check to prevent carryover issue.
v3: Only check for carryover for once to prevent performance issue.
v4: Add comments of the rough frequency where carryover happens.

Signed-off-by: YuBiao Wang 
Acked-by: Horace Chen 
---
  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 13 ++---
  1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index ff7e9f49040e..9355494002a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -7609,7 +7609,7 @@ static int gfx_v10_0_soft_reset(void *handle)
  
  static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)

  {
-   uint64_t clock;
+   uint64_t clock, clock_lo, clock_hi, hi_check;
  
  	amdgpu_gfx_off_ctrl(adev, false);

mutex_lock(&adev->gfx.gpu_clock_mutex);
@@ -7620,8 +7620,15 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct 
amdgpu_device *adev)
((uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER_Vangogh) << 32ULL);
break;
default:
-   clock = (uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER) |
-   ((uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER) 
<< 32ULL);


If you want to be extra sure you could add a preempt_disable(); here.


+   clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER);
+   clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER);
+   hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER);
+   /* Carryover happens every 4 Giga time cycles counts which is 
roughly 42 secs */
+   if (hi_check != clock_hi) {
+   clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER);
+   clock_hi = hi_check;
+   }


And a preempt_enable(); here. This way the critical section is also not 
interrupted by a task switch.


But either way the patch is Reviewed-by: Christian König 



Regards,
Christian.


+   clock = (uint64_t)clock_lo | ((uint64_t)clock_hi << 32ULL);
break;
}
mutex_unlock(&adev->gfx.gpu_clock_mutex);




Re: [PATCH 6/8] drm/amdgpu: add helper function to query psp runtime db entry (v2)

2021-06-30 Thread Michel Dänzer
On 2021-06-11 9:05 a.m., Hawking Zhang wrote:
> PSP will dump various boot up information into a
> portion of local frame buffer, called runtime database.
> The helper function is used for driver to query those
> shared information.
> 
> v2: init ret and check !ret to exit loop as soon as
> found the entry
> 
> Signed-off-by: Hawking Zhang 
> Reviewed-by: John Clements 
> Reviewed-by: Lijo Lazar 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 68 +
>  1 file changed, 68 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
> index ab4e89186186..dc786c91ec9d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
> @@ -164,6 +164,74 @@ static int psp_memory_training_init(struct psp_context 
> *psp)
>   return ret;
>  }
>  
> +/*
> + * Helper funciton to query psp runtime database entry
> + *
> + * @adev: amdgpu_device pointer
> + * @entry_type: the type of psp runtime database entry
> + * @db_entry: runtime database entry pointer
> + *
> + * Return false if runtime database doesn't exit or entry is invalid
> + * or true if the specific database entry is found, and copy to @db_entry
> + */
> +static bool psp_get_runtime_db_entry(struct amdgpu_device *adev,
> +  enum psp_runtime_entry_type entry_type,
> +  void *db_entry)
> +{
> + uint64_t db_header_pos, db_dir_pos;
> + struct psp_runtime_data_header db_header = {0};
> + struct psp_runtime_data_directory db_dir = {0};
> + bool ret = false;
> + int i;
> +
> + db_header_pos = adev->gmc.mc_vram_size - PSP_RUNTIME_DB_OFFSET;
> + db_dir_pos = db_header_pos + sizeof(struct psp_runtime_data_header);
> +
> + /* read runtime db header from vram */
> + amdgpu_device_vram_access(adev, db_header_pos, (uint32_t *)&db_header,
> + sizeof(struct psp_runtime_data_header), false);
> +
> + if (db_header.cookie != PSP_RUNTIME_DB_COOKIE_ID) {
> + /* runtime db doesn't exist, exit */
> + dev_warn(adev->dev, "PSP runtime database doesn't exist\n");
> + return false;
> + }

I just noticed this message in the output of

 dmesg -l emerg,alert,crit,err,warn

I wonder if these messages really need to be printed by default at all, let 
alone at warning level. Do they indicate an issue which needs to be addressed?


> + /* read runtime database entry from vram */
> + amdgpu_device_vram_access(adev, db_dir_pos, (uint32_t *)&db_dir,
> + sizeof(struct psp_runtime_data_directory), false);
> +
> + if (db_dir.entry_count >= PSP_RUNTIME_DB_DIAG_ENTRY_MAX_COUNT) {
> + /* invalid db entry count, exit */
> + dev_warn(adev->dev, "Invalid PSP runtime database entry 
> count\n");
> + return false;
> + }


-- 
Earthling Michel Dänzer   |   https://redhat.com
Libre software enthusiast | Mesa and X developer
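The query flow being discussed reads a header from a fixed offset, validates a magic cookie, sanity-checks the entry count, and then walks directory entries looking for the requested type. A hedged sketch against an in-memory blob — the struct layout and cookie value here are illustrative, not the real PSP format:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define DB_COOKIE 0x504D5544u /* illustrative magic, not the real PSP value */
#define DB_MAX_ENTRIES 16

struct db_header { uint32_t cookie; uint32_t entry_count; };
struct db_entry  { uint32_t type; uint32_t payload; };

/* Find an entry of the given type; returns 1 and copies it out on success. */
int db_find_entry(const uint8_t *db, size_t db_size,
                  uint32_t type, struct db_entry *out)
{
    struct db_header hdr;

    if (db_size < sizeof(hdr))
        return 0;
    memcpy(&hdr, db, sizeof(hdr)); /* alignment-safe read */

    if (hdr.cookie != DB_COOKIE)           /* database doesn't exist */
        return 0;
    if (hdr.entry_count > DB_MAX_ENTRIES)  /* invalid entry count */
        return 0;
    if (db_size < sizeof(hdr) + (size_t)hdr.entry_count * sizeof(struct db_entry))
        return 0;                          /* truncated directory */

    for (uint32_t i = 0; i < hdr.entry_count; i++) {
        struct db_entry e;

        memcpy(&e, db + sizeof(hdr) + i * sizeof(e), sizeof(e));
        if (e.type == type) {
            *out = e;
            return 1;
        }
    }
    return 0;
}
```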


RE: [PATCH] drm/amdgpu: Make noretry the default on yellow carp

2021-06-30 Thread Huang, Ray
[AMD Official Use Only]

Reviewed-by: Huang Rui 

-Original Message-
From: Liu, Aaron  
Sent: Wednesday, June 30, 2021 1:57 PM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander ; Huang, Ray 
; Gong, Curry ; Kuehling, Felix 
; Liu, Aaron ; Huang, Ray 

Subject: [PATCH] drm/amdgpu: Make noretry the default on yellow carp

From: chen gong 

Performing kfd page fault tests on the yellow carp platform will fail.

From the scan data after the failure, it can be found that a nack=0x1 (retry fault) is returned, but we did not enable the interrupts for retry faults in the code.

So we need to set noretry = 1 like the above ASICs.

Signed-off-by: chen gong 
Reviewed-by: Aaron Liu 
Acked-by: Huang Rui 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 1525d957e114..d4e9704dec62 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -591,6 +591,7 @@ void amdgpu_gmc_noretry_set(struct amdgpu_device *adev)
case CHIP_ARCTURUS:
case CHIP_ALDEBARAN:
case CHIP_BEIGE_GOBY:
+   case CHIP_YELLOW_CARP:
/*
 * noretry = 0 will cause kfd page fault tests fail
 * for some ASICs, so set default to 1 for these ASICs.
--
2.25.1


[PATCH] drm/amdgpu: Make noretry the default on yellow carp

2021-06-30 Thread Aaron Liu
From: chen gong 

Performing kfd page fault tests on the yellow carp platform will fail.

From the scan data after the failure, it can be found that a
nack=0x1 (retry fault) is returned, but we did not enable the interrupts
for retry faults in the code.

So we need to set noretry = 1 like the above ASICs.

Signed-off-by: chen gong 
Reviewed-by: Aaron Liu 
Acked-by: Huang Rui 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 1525d957e114..d4e9704dec62 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -591,6 +591,7 @@ void amdgpu_gmc_noretry_set(struct amdgpu_device *adev)
case CHIP_ARCTURUS:
case CHIP_ALDEBARAN:
case CHIP_BEIGE_GOBY:
+   case CHIP_YELLOW_CARP:
/*
 * noretry = 0 will cause kfd page fault tests fail
 * for some ASICs, so set default to 1 for these ASICs.
-- 
2.25.1



Re: [PATCH v4 03/17] drm/uAPI: Add "active bpc" as feedback channel for "max bpc" drm property

2021-06-30 Thread Werner Sembach



Am 30.06.21 um 10:21 schrieb Pekka Paalanen:

On Tue, 29 Jun 2021 13:02:05 +0200
Werner Sembach  wrote:


Am 28.06.21 um 19:03 schrieb Werner Sembach:

Am 18.06.21 um 11:11 schrieb Werner Sembach:

Add a new general drm property "active bpc" which can be used by graphic
drivers to report the applied bit depth per pixel back to userspace.

While "max bpc" can be used to change the color depth, there was no way to
check which one actually got used. While in theory the driver chooses the
best/highest color depth within the max bpc setting a user might not be
fully aware what his hardware is or isn't capable off. This is meant as a
quick way to double check the setup.

In the future, automatic color calibration for screens might also depend on
this information being available.

Signed-off-by: Werner Sembach 
---
  drivers/gpu/drm/drm_connector.c | 51 +
  include/drm/drm_connector.h |  8 ++
  2 files changed, 59 insertions(+)

diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index da39e7ff6965..943f6b61053b 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -1197,6 +1197,14 @@ static const struct drm_prop_enum_list dp_colorspaces[] 
= {
   *drm_connector_attach_max_bpc_property() to create and attach the
   *property to the connector during initialization.
   *
+ * active bpc:
+ * This read-only range property tells userspace the pixel color bit depth
+ * actually used by the hardware display engine on "the cable" on a
+ * connector. The chosen value depends on hardware capabilities, both
+ * display engine and connected monitor, and the "max bpc" property.
+ * Drivers shall use drm_connector_attach_active_bpc_property() to install
+ * this property.
+ *

Regarding "on the cable" and dithering: As far as I can tell, what the 
dithering option does, is setting a hardware
register here:

- 
https://elixir.bootlin.com/linux/v5.13/source/drivers/gpu/drm/i915/display/intel_display.c#L4534

- 
https://elixir.bootlin.com/linux/v5.13/source/drivers/gpu/drm/i915/display/intel_display.c#L4571

So dithering seems to be calculated by fixed purpose hardware/firmware outside 
of the driver?

The Intel driver does not seem to set a target bpc/bpp for this hardware so I 
guess it defaults to 6 or 8 bpc?

Never mind it does. This switch-case does affect the dithering output:
https://elixir.bootlin.com/linux/v5.13/source/drivers/gpu/drm/i915/display/intel_display.c#L4537

Hi,

I obviously do not know the intel driver or hardware at all, but
to me that just looks like translating from bits per pixel to bits per
channel in RGB mapping?
No, if I understand the documentation correctly: Writing bit depth here 
with dithering enabled sets the dithering target bpc.



As found in this documentation p.548:
https://01.org/sites/default/files/documentation/intel-gfx-prm-osrc-lkf-vol02c-commandreference-registers-part2.pdf

So max bpc and active bpc are affecting/affected by the bpc after dithering.

By definition, if the cable carries N bpc, then dithering does not
change that. The cable still carries N bpc, but due to spatial or
temporal dithering, the *observed* color resolution may or may not be
higher than the cable bpc.
Yes, and max bpc and active bpc tell the cable bpc, which is not the 
*observed* bpc.


Of course, if the cable bpc is 8, and dithering targets 6 bpc, then 2
LSB on the cable are always zero, right?
I would assume that in this case only 6 bpc are actually sent? Isn't the 
whole point of dithering that you can't send, for example, 8 bpc?


Maybe one would want to do that if the monitor has a 6 bit panel and it
simply ignored the 2 LSB, and the cable cannot go down to 6 bpc.


Is there dithering actually doing this? aka is my assumption above wrong?

AMD code that confused me before is hinting that you might be right: 
https://elixir.bootlin.com/linux/v5.13/source/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c#L826


there is a set_clamp depth and a separate DCP_SPATIAL_DITHER_DEPTH_30BPP



So, what does "max bpc" mean right now?

It seems like dither on/off is insufficient information, one would also
need to control the dithering target bpc. I suppose the driver has a
policy on how it chooses the target bpc, but what is that policy? Is
the dither target bpc the cable bpc or the sink bpc?

Needless to say, I'm quite confused.


... We need someone who knows what dithering on intel and amd gpu 
actually means.


But I don't want this to become a blocker for this patchset, because if 
there is no dithering, which seems to be the norm, the active bpc 
property is already really useful as it is. So add a note to the docs 
that the value might be invalid when dithering is active for now?





Thanks,
pq


Similar things happen on amd. Here the output dither depth seems to be written 
to a fixed value however:

- 

Re: [PATCH 1/2] drm/amd/pm: update the gpu metrics data retrieving for Sienna Cichlid

2021-06-30 Thread Lazar, Lijo




On 6/30/2021 10:01 AM, Quan, Evan wrote:

[AMD Official Use Only]




-Original Message-
From: Lazar, Lijo 
Sent: Wednesday, June 30, 2021 11:52 AM
To: Quan, Evan ; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander 
Subject: Re: [PATCH 1/2] drm/amd/pm: update the gpu metrics data
retrieving for Sienna Cichlid



On 6/30/2021 8:56 AM, Quan, Evan wrote:

[AMD Official Use Only]




-Original Message-
From: Lazar, Lijo 
Sent: Tuesday, June 29, 2021 9:38 PM
To: Quan, Evan ; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander 
Subject: Re: [PATCH 1/2] drm/amd/pm: update the gpu metrics data
retrieving for Sienna Cichlid



On 6/25/2021 1:42 PM, Evan Quan wrote:

Due to the structure layout change: "uint32_t ThrottlerStatus" -> "
uint8_t  ThrottlingPercentage[THROTTLER_COUNT]".

Change-Id: Id5c148b0584d972ae73fb9d7347a312944cec13d
Signed-off-by: Evan Quan 
---
.../pm/inc/smu11_driver_if_sienna_cichlid.h   |  63 -
.../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   | 234

---

---

2 files changed, 222 insertions(+), 75 deletions(-)

diff --git
a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
b/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
index 61c87c39be80..0b916a1933df 100644
--- a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
+++ b/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
@@ -211,6 +211,7 @@ typedef enum {
#define THROTTLER_FIT_BIT  17
#define THROTTLER_PPM_BIT  18
#define THROTTLER_APCC_BIT 19
+#define THROTTLER_COUNT20

// FW DState Features Control Bits
// FW DState Features Control Bits
@@ -1406,7 +1407,67 @@ typedef struct {
} SmuMetrics_t;

typedef struct {
-  SmuMetrics_t SmuMetrics;
+  uint32_t CurrClock[PPCLK_COUNT];
+
+  uint16_t AverageGfxclkFrequencyPreDs;
+  uint16_t AverageGfxclkFrequencyPostDs;
+  uint16_t AverageFclkFrequencyPreDs;
+  uint16_t AverageFclkFrequencyPostDs;
+  uint16_t AverageUclkFrequencyPreDs  ;
+  uint16_t AverageUclkFrequencyPostDs  ;

+
+
+  uint16_t AverageGfxActivity;
+  uint16_t AverageUclkActivity   ;
+  uint8_t  CurrSocVoltageOffset  ;
+  uint8_t  CurrGfxVoltageOffset  ;
+  uint8_t  CurrMemVidOffset  ;
+  uint8_t  Padding8;
+  uint16_t AverageSocketPower;
+  uint16_t TemperatureEdge   ;
+  uint16_t TemperatureHotspot;
+  uint16_t TemperatureMem;
+  uint16_t TemperatureVrGfx  ;
+  uint16_t TemperatureVrMem0 ;
+  uint16_t TemperatureVrMem1 ;
+  uint16_t TemperatureVrSoc  ;
+  uint16_t TemperatureLiquid0;
+  uint16_t TemperatureLiquid1;
+  uint16_t TemperaturePlx;
+  uint16_t Padding16 ;
+  uint32_t AccCnt;
+  uint8_t  ThrottlingPercentage[THROTTLER_COUNT];
+
+
+  uint8_t  LinkDpmLevel;
+  uint8_t  CurrFanPwm;
+  uint16_t CurrFanSpeed;
+
+  //BACO metrics, PMFW-1721
+  //metrics for D3hot entry/exit and driver ARM msgs
+  uint8_t D3HotEntryCountPerMode[D3HOT_SEQUENCE_COUNT];
+  uint8_t D3HotExitCountPerMode[D3HOT_SEQUENCE_COUNT];
+  uint8_t ArmMsgReceivedCountPerMode[D3HOT_SEQUENCE_COUNT];
+
+  //PMFW-4362
+  uint32_t EnergyAccumulator;
+  uint16_t AverageVclk0Frequency  ;
+  uint16_t AverageDclk0Frequency  ;
+  uint16_t AverageVclk1Frequency  ;
+  uint16_t AverageDclk1Frequency  ;
+  uint16_t VcnActivityPercentage  ; //place holder, David N. to provide full sequence

+  uint8_t  PcieRate   ;
+  uint8_t  PcieWidth  ;
+  uint16_t AverageGfxclkFrequencyTarget;
+  uint16_t Padding16_2;
+
+} SmuMetrics_V2_t;
+
+typedef struct {
+  union {
+SmuMetrics_t SmuMetrics;
+SmuMetrics_V2_t SmuMetrics_V2;
+  };
  uint32_t Spare[1];

  // Padding - ignore
diff --git

a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c

b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
index 0c3407025eb2..f882c6756bf0 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
@@ -80,6 +80,13 @@
(*member) = (smu->smu_table.driver_pptable + offsetof(PPTable_t, field));\
} while(0)

+#define GET_METRICS_MEMBER(field, member) do { \
+   if ((smu->adev->asic_type == CHIP_SIENNA_CICHLID) && (smu-
smc_fw_version >= 0x3A4300)) \
+   (*member) = ((void *)(&(((SmuMetricsExternal_t *)(smu-
smu_table.metrics_table))->SmuMetrics_V2)) +

offsetof(SmuMetrics_V2_t,

field)); \

+   else \
+   (*member) = ((void *)(&(((SmuMetricsExternal_t *)(smu->smu_table.metrics_table))->SmuMetrics)) + offsetof(SmuMetrics_t, field)); \
+} while(0)
+
static int get_table_size(struct smu_context *smu)
{
	if (smu->adev->asic_type == CHIP_BEIGE_GOBY)
@@ -489,13 +496,33 @@ static int sienna_cichlid_tables_init(struct smu_context *smu)
	return -ENOMEM;
}

+static uint32_t sienna_cichlid_get_throttler_status_locked(struct
+smu_context *smu) {
+   struct smu_table_context 

Re: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay

2021-06-30 Thread Christian König

Am 29.06.21 um 12:01 schrieb YuBiao Wang:

[Why]
GPU timing counters are read via KIQ under sriov, which will introduce
a delay.

[How]
It could be directly read by MMIO.

v2: Add additional check to prevent carryover issue.
v3: Only check for carryover for once to prevent performance issue.

Signed-off-by: YuBiao Wang 
Acked-by: Horace Chen 
---
  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 12 ++--
  1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index ff7e9f49040e..82a5b7ab8dc5 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -7610,6 +7610,7 @@ static int gfx_v10_0_soft_reset(void *handle)
  static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
  {
uint64_t clock;
+   uint64_t clock_lo, clock_hi, hi_check;


You could put that on one line.

  
  	amdgpu_gfx_off_ctrl(adev, false);

	mutex_lock(&adev->gfx.gpu_clock_mutex);
@@ -7620,8 +7621,15 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct 
amdgpu_device *adev)
((uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER_Vangogh) << 32ULL);
break;
default:
-   clock = (uint64_t)RREG32_SOC15(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER) |
-   ((uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER) 
<< 32ULL);
+   clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER);
+   clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER);
+   hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_UPPER);
+   // If carryover happens, update lower count again.


That is obvious and doesn't need a comment, but what you should comment 
is how unlikely a carry over is.


E.g. something like carry over only happens every x clock cycles which 
are roughly ~y days/weeks/month etc...


And as Monk noted as well please use kernel style comments.

Regards,
Christian.


+   if (hi_check != clock_hi) {
+   clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
mmGOLDEN_TSC_COUNT_LOWER);
+   clock_hi = hi_check;
+   }
+   clock = (uint64_t)clock_lo | ((uint64_t)clock_hi << 32ULL);
break;
}
	mutex_unlock(&adev->gfx.gpu_clock_mutex);


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH v4 12/17] drm/uAPI: Add "preferred color format" drm property as setting for userspace

2021-06-30 Thread Werner Sembach

On 30.06.21 at 10:41, Pekka Paalanen wrote:


On Tue, 29 Jun 2021 13:39:18 +0200
Werner Sembach  wrote:


Am 29.06.21 um 13:17 schrieb Pekka Paalanen:

On Tue, 29 Jun 2021 08:12:54 +
Simon Ser  wrote:
  

On Tuesday, June 22nd, 2021 at 09:15, Pekka Paalanen  
wrote:
  

yes, I think this makes sense, even if it is a property that one can't
tell for sure what it does beforehand.

Using a pair of properties, preference and active, to ask for something
and then check what actually worked is good for reducing the
combinatorial explosion caused by needing to "atomic TEST_ONLY commit"
test different KMS configurations. Userspace has a better chance of
finding a configuration that is possible.

OTOH, this has the problem that in UI one cannot tell the user in
advance which options are truly possible. Given that KMS properties are
rarely completely independent, and in this case known to depend on
several other KMS properties, I think it is good enough to know after
the fact.

If a driver does not use what userspace prefers, there is no way to
understand why, or what else to change to make it happen. That problem
exists anyway, because TEST_ONLY commits do not give useful feedback
but only a yes/no.

By submitting incremental atomic reqs with TEST_ONLY (i.e. only changing one
property at a time), user-space can discover which property makes the atomic
commit fail.

That works if the properties are independent of each other. Color
range, color format, bpc and more may all be interconnected,
allowing only certain combinations to work.

If all these properties have "auto" setting too, then it would be
possible to probe each property individually, but that still does not
tell which combinations are valid.

If you probe towards a certain configuration by setting the properties
one by one, then depending on the order you pick the properties, you
may come to a different conclusion on which property breaks the
configuration.

My mind crossed another point that must be considered: When plugging in
a monitor, a list of possible resolution+framerate combinations is
created for xrandr and other userspace (I guess by atomic checks? but
I don't know).

Hi,

I would not think so, but I hope to be corrected if I'm wrong.

My belief is that the driver collects a list of modes from EDID, some
standard modes, and maybe some other hardcoded modes, and then
validates each entry against all the known limitations like vertical
and horizontal frequency limits, discarding modes that do not fit.

Not all limitations are known during that phase, which is why KMS
property "link-status" exists. When userspace actually programs a mode
(not a TEST_ONLY commit), the link training may fail. The kernel prunes
the mode from the list and sets the link status property to signal
failure, and sends a hotplug uevent. Userspace needs to re-check the
mode list and try again.

That is a generic escape hatch for when TEST_ONLY commit succeeds, but
in reality the hardware cannot do it, you just cannot know until you
actually try for real. It causes end user visible flicker if it happens
on an already running connector, but since it usually happens when
turning a connector on to begin with, there is no flicker to be seen,
just a small delay in finding a mode that works.


During this, drm
properties are already considered, which is no problem atm because as
far as I can tell there is currently no drm property that would make
a certain resolution+framerate combination unreachable that would be
possible with everything on default.

I would not expect KMS properties to be considered at all. It would
reject modes that are actually possible if some KMS properties were
changed. So at least going forward, current KMS property values cannot
factor in.


At least the debugfs variable "force_yuv420_output" did change the 
available modes here: 
https://elixir.bootlin.com/linux/v5.13/source/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c#L5165 
before my patch 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=68eb3ae3c63708f823aeeb63bb15197c727bd9bf


Forcing a color format via a DRM property in this function would 
reintroduce the problem.


And I think i915 driver works similar in this regard.




However, for example, forcing YCbCr420 encoding would limit the
available resolutions (my screen, for example, only supports YCbCr420
at 4k@60 and @50Hz and at no other resolution or frequency; native is
2560x1440@144Hz).

So would a "force color format" that does not get reset on
replugging/re-enabling a monitor break the output, for example, of a
not-updated xrandr, unaware of this new property?

Yes, not because the mode list would be missing the mode, but because
actually setting the mode would fail.
Well, as described above, I think the mode would actually be missing, 
which is also an unexpected behavior from a user perspective.


RandR in particular is problematic, because it does not actually
understand any KMS properties, it 

Re: [PATCH v4 12/17] drm/uAPI: Add "preferred color format" drm property as setting for userspace

2021-06-30 Thread Pekka Paalanen
On Tue, 29 Jun 2021 13:39:18 +0200
Werner Sembach  wrote:

> Am 29.06.21 um 13:17 schrieb Pekka Paalanen:
> > On Tue, 29 Jun 2021 08:12:54 +
> > Simon Ser  wrote:
> >  
> >> On Tuesday, June 22nd, 2021 at 09:15, Pekka Paalanen  
> >> wrote:
> >>  
> >>> yes, I think this makes sense, even if it is a property that one can't
> >>> tell for sure what it does before hand.
> >>>
> >>> Using a pair of properties, preference and active, to ask for something
> >>> and then check what actually worked is good for reducing the
> >>> combinatorial explosion caused by needing to "atomic TEST_ONLY commit"
> >>> test different KMS configurations. Userspace has a better chance of
> >>> finding a configuration that is possible.
> >>>
> >>> OTOH, this has the problem than in UI one cannot tell the user in
> >>> advance which options are truly possible. Given that KMS properties are
> >>> rarely completely independent, and in this case known to depend on
> >>> several other KMS properties, I think it is good enough to know after
> >>> the fact.
> >>>
> >>> If a driver does not use what userspace prefers, there is no way to
> >>> understand why, or what else to change to make it happen. That problem
> >>> exists anyway, because TEST_ONLY commits do not give useful feedback
> >>> but only a yes/no.
> >> By submitting incremental atomic reqs with TEST_ONLY (i.e. only changing 
> >> one
> >> property at a time), user-space can discover which property makes the 
> >> atomic
> >> commit fail.  
> > That works if the properties are independent of each other. Color
> > range, color format, bpc and more may all be interconnected,
> > allowing only certain combinations to work.
> >
> > If all these properties have "auto" setting too, then it would be
> > possible to probe each property individually, but that still does not
> > tell which combinations are valid.
> >
> > If you probe towards a certain configuration by setting the properties
> > one by one, then depending on the order you pick the properties, you
> > may come to a different conclusion on which property breaks the
> > configuration.  
> 
> My mind crossed another point that must be considered: When plugin in
> a Monitor a list of possible Resolutions+Framerate combinations is
> created for xrandr and other userspace (I guess by atomic checks? but
> I don't know).

Hi,

I would not think so, but I hope to be corrected if I'm wrong.

My belief is that the driver collects a list of modes from EDID, some
standard modes, and maybe some other hardcoded modes, and then
validates each entry against all the known limitations like vertical
and horizontal frequency limits, discarding modes that do not fit.

Not all limitations are known during that phase, which is why KMS
property "link-status" exists. When userspace actually programs a mode
(not a TEST_ONLY commit), the link training may fail. The kernel prunes
the mode from the list and sets the link status property to signal
failure, and sends a hotplug uevent. Userspace needs to re-check the
mode list and try again.

That is a generic escape hatch for when TEST_ONLY commit succeeds, but
in reality the hardware cannot do it, you just cannot know until you
actually try for real. It causes end user visible flicker if it happens
on an already running connector, but since it usually happens when
turning a connector on to begin with, there is no flicker to be seen,
just a small delay in finding a mode that works.

> During this drm
> properties are already considered, which is no problem atm because as
> far as i can tell there is currently no drm property that would make
> a certain Resolutions+Framerate combination unreachable that would be
> possible with everything on default.

I would not expect KMS properties to be considered at all. It would
reject modes that are actually possible if some KMS properties were
changed. So at least going forward, current KMS property values cannot
factor in.

> However for example forcing YCbCr420 encoding would limit the
> available resolutions (my screen for example only supports YCbCr420
> on 4k@60 and @50Hz and on no other resolution or frequency (native is
> 2560x1440@144Hz).
> 
> So would a "force color format" that does not get resetted on
> repluging/reenabling a monitor break the output, for example, of an
> not updated xrandr, unaware of this new property?

Yes, not because the mode list would be missing the mode, but because
actually setting the mode would fail.

RandR in particular is problematic, because it does not actually
understand any KMS properties, it is merely a relay. So anything
that *uses* RandR protocol or xrandr command would also need to be
patched to understand the new properties.

The kernel automatically resetting *some* properties in *some*
occasions seems really fragile and complicated to me, which is why I'm
a lot more keen to see a "reset everything to sensible defaults"
generic mechanism added to KMS.


Thanks,
pq



RE: [PATCH 1/2] drm/amd/pm: update the gpu metrics data retrieving for Sienna Cichlid

2021-06-30 Thread Quan, Evan
[AMD Official Use Only]



> -Original Message-
> From: Lazar, Lijo 
> Sent: Wednesday, June 30, 2021 11:52 AM
> To: Quan, Evan ; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander 
> Subject: Re: [PATCH 1/2] drm/amd/pm: update the gpu metrics data
> retrieving for Sienna Cichlid
> 
> 
> 
> On 6/30/2021 8:56 AM, Quan, Evan wrote:
> > [AMD Official Use Only]
> >
> >
> >
> >> -Original Message-
> >> From: Lazar, Lijo 
> >> Sent: Tuesday, June 29, 2021 9:38 PM
> >> To: Quan, Evan ; amd-gfx@lists.freedesktop.org
> >> Cc: Deucher, Alexander 
> >> Subject: Re: [PATCH 1/2] drm/amd/pm: update the gpu metrics data
> >> retrieving for Sienna Cichlid
> >>
> >>
> >>
> >> On 6/25/2021 1:42 PM, Evan Quan wrote:
> >>> Due to the structure layout change: "uint32_t ThrottlerStatus" -> "
> >>> uint8_t  ThrottlingPercentage[THROTTLER_COUNT]".
> >>>
> >>> Change-Id: Id5c148b0584d972ae73fb9d7347a312944cec13d
> >>> Signed-off-by: Evan Quan 
> >>> ---
> >>>.../pm/inc/smu11_driver_if_sienna_cichlid.h   |  63 -
> >>>.../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   | 234
> ---
> >> ---
> >>>2 files changed, 222 insertions(+), 75 deletions(-)
> >>>
> >>> diff --git
> >>> a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> >>> b/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> >>> index 61c87c39be80..0b916a1933df 100644
> >>> --- a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> >>> +++ b/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> >>> @@ -211,6 +211,7 @@ typedef enum {
> >>>#define THROTTLER_FIT_BIT  17
> >>>#define THROTTLER_PPM_BIT  18
> >>>#define THROTTLER_APCC_BIT 19
> >>> +#define THROTTLER_COUNT20
> >>>
> >>>// FW DState Features Control Bits
> >>>// FW DState Features Control Bits
> >>> @@ -1406,7 +1407,67 @@ typedef struct {
> >>>} SmuMetrics_t;
> >>>
> >>>typedef struct {
> >>> -  SmuMetrics_t SmuMetrics;
> >>> +  uint32_t CurrClock[PPCLK_COUNT];
> >>> +
> >>> +  uint16_t AverageGfxclkFrequencyPreDs;  uint16_t
> >>> + AverageGfxclkFrequencyPostDs;  uint16_t
> AverageFclkFrequencyPreDs;
> >>> + uint16_t AverageFclkFrequencyPostDs;  uint16_t
> >>> + AverageUclkFrequencyPreDs  ;  uint16_t
> AverageUclkFrequencyPostDs  ;
> >>> +
> >>> +
> >>> +  uint16_t AverageGfxActivity;
> >>> +  uint16_t AverageUclkActivity   ;
> >>> +  uint8_t  CurrSocVoltageOffset  ;
> >>> +  uint8_t  CurrGfxVoltageOffset  ;
> >>> +  uint8_t  CurrMemVidOffset  ;
> >>> +  uint8_t  Padding8;
> >>> +  uint16_t AverageSocketPower;
> >>> +  uint16_t TemperatureEdge   ;
> >>> +  uint16_t TemperatureHotspot;
> >>> +  uint16_t TemperatureMem;
> >>> +  uint16_t TemperatureVrGfx  ;
> >>> +  uint16_t TemperatureVrMem0 ;
> >>> +  uint16_t TemperatureVrMem1 ;
> >>> +  uint16_t TemperatureVrSoc  ;
> >>> +  uint16_t TemperatureLiquid0;
> >>> +  uint16_t TemperatureLiquid1;
> >>> +  uint16_t TemperaturePlx;
> >>> +  uint16_t Padding16 ;
> >>> +  uint32_t AccCnt;
> >>> +  uint8_t  ThrottlingPercentage[THROTTLER_COUNT];
> >>> +
> >>> +
> >>> +  uint8_t  LinkDpmLevel;
> >>> +  uint8_t  CurrFanPwm;
> >>> +  uint16_t CurrFanSpeed;
> >>> +
> >>> +  //BACO metrics, PMFW-1721
> >>> +  //metrics for D3hot entry/exit and driver ARM msgs  uint8_t
> >>> + D3HotEntryCountPerMode[D3HOT_SEQUENCE_COUNT];
> >>> +  uint8_t D3HotExitCountPerMode[D3HOT_SEQUENCE_COUNT];
> >>> +  uint8_t ArmMsgReceivedCountPerMode[D3HOT_SEQUENCE_COUNT];
> >>> +
> >>> +  //PMFW-4362
> >>> +  uint32_t EnergyAccumulator;
> >>> +  uint16_t AverageVclk0Frequency  ;
> >>> +  uint16_t AverageDclk0Frequency  ;
> >>> +  uint16_t AverageVclk1Frequency  ;
> >>> +  uint16_t AverageDclk1Frequency  ;
> >>> +  uint16_t VcnActivityPercentage  ; //place holder, David N. to provide
> full
> >> sequence
> >>> +  uint8_t  PcieRate   ;
> >>> +  uint8_t  PcieWidth  ;
> >>> +  uint16_t AverageGfxclkFrequencyTarget;  uint16_t Padding16_2;
> >>> +
> >>> +} SmuMetrics_V2_t;
> >>> +
> >>> +typedef struct {
> >>> +  union {
> >>> +SmuMetrics_t SmuMetrics;
> >>> +SmuMetrics_V2_t SmuMetrics_V2;
> >>> +  };
> >>>  uint32_t Spare[1];
> >>>
> >>>  // Padding - ignore
> >>> diff --git
> a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> >>> b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> >>> index 0c3407025eb2..f882c6756bf0 100644
> >>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> >>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> >>> @@ -80,6 +80,13 @@
> >>>   (*member) = (smu->smu_table.driver_pptable +
> >> offsetof(PPTable_t, field));\
> >>>} while(0)
> >>>
> >>> +#define GET_METRICS_MEMBER(field, member) do { \
> >>> + if ((smu->adev->asic_type == CHIP_SIENNA_CICHLID) && (smu-
> >>> smc_fw_version >= 0x3A4300)) \
> >>> + (*member) = ((void 

Re: [PATCH v4 03/17] drm/uAPI: Add "active bpc" as feedback channel for "max bpc" drm property

2021-06-30 Thread Pekka Paalanen
On Tue, 29 Jun 2021 13:02:05 +0200
Werner Sembach  wrote:

> Am 28.06.21 um 19:03 schrieb Werner Sembach:
> > Am 18.06.21 um 11:11 schrieb Werner Sembach:  
> >> Add a new general drm property "active bpc" which can be used by graphic
> >> drivers to report the applied bit depth per pixel back to userspace.
> >>
> >> While "max bpc" can be used to change the color depth, there was no way to
> >> check which one actually got used. While in theory the driver chooses the
> >> best/highest color depth within the max bpc setting a user might not be
> >> fully aware what his hardware is or isn't capable off. This is meant as a
> >> quick way to double check the setup.
> >>
> >> In the future, automatic color calibration for screens might also depend on
> >> this information being available.
> >>
> >> Signed-off-by: Werner Sembach 
> >> ---
> >>  drivers/gpu/drm/drm_connector.c | 51 +
> >>  include/drm/drm_connector.h |  8 ++
> >>  2 files changed, 59 insertions(+)
> >>
> >> diff --git a/drivers/gpu/drm/drm_connector.c 
> >> b/drivers/gpu/drm/drm_connector.c
> >> index da39e7ff6965..943f6b61053b 100644
> >> --- a/drivers/gpu/drm/drm_connector.c
> >> +++ b/drivers/gpu/drm/drm_connector.c
> >> @@ -1197,6 +1197,14 @@ static const struct drm_prop_enum_list 
> >> dp_colorspaces[] = {
> >>   *drm_connector_attach_max_bpc_property() to create and attach the
> >>   *property to the connector during initialization.
> >>   *
> >> + * active bpc:
> >> + *This read-only range property tells userspace the pixel color 
> >> bit depth
> >> + *actually used by the hardware display engine on "the cable" on a
> >> + *connector. The chosen value depends on hardware capabilities, 
> >> both
> >> + *display engine and connected monitor, and the "max bpc" 
> >> property.
> >> + *Drivers shall use drm_connector_attach_active_bpc_property() to 
> >> install
> >> + *this property.
> >> + *  
> > Regarding "on the cable" and dithering: As far as I can tell, what the 
> > dithering option does, is setting a hardware
> > register here:
> >
> > - 
> > https://elixir.bootlin.com/linux/v5.13/source/drivers/gpu/drm/i915/display/intel_display.c#L4534
> >
> > - 
> > https://elixir.bootlin.com/linux/v5.13/source/drivers/gpu/drm/i915/display/intel_display.c#L4571
> >
> > So dithering seems to be calculated by fixed purpose hardware/firmware 
> > outside of the driver?
> >
> > The Intel driver does not seem to set a target bpc/bpp for this hardware so 
> > I guess it defaults to 6 or 8 bpc?  
> 
> Never mind it does. This switch-case does affect the dithering output:
> https://elixir.bootlin.com/linux/v5.13/source/drivers/gpu/drm/i915/display/intel_display.c#L4537

Hi,

I obviously do not know the intel driver or hardware at all, but
to me that just looks like translating from bits per pixel to bits per
channel in RGB mapping?

> As found in this documentation p.548:
> https://01.org/sites/default/files/documentation/intel-gfx-prm-osrc-lkf-vol02c-commandreference-registers-part2.pdf
> 
> So max bpc and active bpc are affecting/affected by the bpc after dithering.

By definition, if the cable carries N bpc, then dithering does not
change that. The cable still carries N bpc, but due to spatial or
temporal dithering, the *observed* color resolution may or may not be
higher than the cable bpc.

Of course, if the cable bpc is 8, and dithering targets 6 bpc, then 2
LSB on the cable are always zero, right?

Maybe one would want to do that if the monitor has a 6 bit panel and it
simply ignored the 2 LSB, and the cable cannot go down to 6 bpc.

So, what does "max bpc" mean right now?

It seems like dither on/off is insufficient information, one would also
need to control the dithering target bpc. I suppose the driver has a
policy on how it chooses the target bpc, but what is that policy? Is
the dither target bpc the cable bpc or the sink bpc?

Needless to say, I'm quite confused.


Thanks,
pq

> >
> > Similar things happen on amd. Here the output dither depth seems to be 
> > written to a fixed value however:
> >
> > - 
> > https://elixir.bootlin.com/linux/v5.13/source/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c#L828
> >
> > - 
> > https://elixir.bootlin.com/linux/v5.13/source/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c#L769
> >
> > Does anyone know about a resource where I can read up on the used registers 
> > and what this hardware actually does?  
> Searching now for a similar register reference for AMD GPUs.
> >
> > My proposal for now: "max bpc" affects what happens before dither, so I 
> > would keep "active bpc" the same and add another
> > drm property "dither active: true/false". No additional property to control 
> > dither, as amdgpu does have one already
> > (which isn't always active?) and Intel driver does only seem prepared for 
> > dithering at 6bpc (albeit I don't know why to
> > dither at 6bpc and what depth to dither 

Re: [PATCH 01/11] drm/amdkfd: device pgmap owner at the svm migrate init

2021-06-30 Thread Felix Kuehling
Other than the updated patch description for patch 1, the series
(patches 1-10) is

Reviewed-by: Felix Kuehling 

Patch 11 was already reviewed and revised separately.

Thanks,
  Felix


On 2021-06-30 at 12:00 a.m., Felix Kuehling wrote:
> Am 2021-06-29 um 2:01 p.m. schrieb Alex Sierra:
>> pgmap owner member at the svm migrate init could be referenced
>> to either adev or hive, depending on device topology.
> Please update the commit description before submitting the change:
>
> GPUs in the same XGMI hive have direct access to all members' VRAM. When
> mapping memory to a GPU, we don't need hmm_range_fault to fault
> device-private pages in the same hive back to the host. Identifying the
> page owner as the hive, rather than the individual GPU, accomplishes this.
>
>
>> Signed-off-by: Alex Sierra 
>> Reviewed-by: Felix Kuehling 
>> ---
>>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 6 +++---
>>  drivers/gpu/drm/amd/amdkfd/kfd_svm.h | 3 +++
>>  2 files changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c 
>> b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> index 45b5349283af..8ce71c8142aa 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> @@ -427,7 +427,7 @@ svm_migrate_vma_to_vram(struct amdgpu_device *adev, 
>> struct svm_range *prange,
>>  migrate.start = start;
>>  migrate.end = end;
>>  migrate.flags = MIGRATE_VMA_SELECT_SYSTEM;
>> -migrate.pgmap_owner = adev;
>> +migrate.pgmap_owner = SVM_ADEV_PGMAP_OWNER(adev);
>>  
>>  size = 2 * sizeof(*migrate.src) + sizeof(uint64_t) + sizeof(dma_addr_t);
>>  size *= npages;
>> @@ -649,7 +649,7 @@ svm_migrate_vma_to_ram(struct amdgpu_device *adev, 
>> struct svm_range *prange,
>>  migrate.start = start;
>>  migrate.end = end;
>>  migrate.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
>> -migrate.pgmap_owner = adev;
>> +migrate.pgmap_owner = SVM_ADEV_PGMAP_OWNER(adev);
>>  
>>  size = 2 * sizeof(*migrate.src) + sizeof(uint64_t) + sizeof(dma_addr_t);
>>  size *= npages;
>> @@ -921,7 +921,7 @@ int svm_migrate_init(struct amdgpu_device *adev)
>>  pgmap->range.start = res->start;
>>  pgmap->range.end = res->end;
>>  pgmap->ops = _migrate_pgmap_ops;
>> -pgmap->owner = adev;
>> +pgmap->owner = SVM_ADEV_PGMAP_OWNER(adev);
>>  pgmap->flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
>>  r = devm_memremap_pages(adev->dev, pgmap);
>>  if (IS_ERR(r)) {
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.h 
>> b/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
>> index a9af03994d1a..1f88bdfdbcc2 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
>> @@ -35,6 +35,9 @@
>>  #include "amdgpu.h"
>>  #include "kfd_priv.h"
>>  
>> +#define SVM_ADEV_PGMAP_OWNER(adev)\
>> +((adev)->hive ? (void *)(adev)->hive : (void *)(adev))
>> +
>>  struct svm_range_bo {
>>  struct amdgpu_bo*bo;
>>  struct kref kref;
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: [PATCH 01/11] drm/amdkfd: device pgmap owner at the svm migrate init

2021-06-30 Thread Felix Kuehling
Am 2021-06-29 um 2:01 p.m. schrieb Alex Sierra:
> pgmap owner member at the svm migrate init could be referenced
> to either adev or hive, depending on device topology.

Please update the commit description before submitting the change:

GPUs in the same XGMI hive have direct access to all members' VRAM. When
mapping memory to a GPU, we don't need hmm_range_fault to fault
device-private pages in the same hive back to the host. Identifying the
page owner as the hive, rather than the individual GPU, accomplishes this.


>
> Signed-off-by: Alex Sierra 
> Reviewed-by: Felix Kuehling 
> ---
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 6 +++---
>  drivers/gpu/drm/amd/amdkfd/kfd_svm.h | 3 +++
>  2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c 
> b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index 45b5349283af..8ce71c8142aa 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -427,7 +427,7 @@ svm_migrate_vma_to_vram(struct amdgpu_device *adev, 
> struct svm_range *prange,
>   migrate.start = start;
>   migrate.end = end;
>   migrate.flags = MIGRATE_VMA_SELECT_SYSTEM;
> - migrate.pgmap_owner = adev;
> + migrate.pgmap_owner = SVM_ADEV_PGMAP_OWNER(adev);
>  
>   size = 2 * sizeof(*migrate.src) + sizeof(uint64_t) + sizeof(dma_addr_t);
>   size *= npages;
> @@ -649,7 +649,7 @@ svm_migrate_vma_to_ram(struct amdgpu_device *adev, struct 
> svm_range *prange,
>   migrate.start = start;
>   migrate.end = end;
>   migrate.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
> - migrate.pgmap_owner = adev;
> + migrate.pgmap_owner = SVM_ADEV_PGMAP_OWNER(adev);
>  
>   size = 2 * sizeof(*migrate.src) + sizeof(uint64_t) + sizeof(dma_addr_t);
>   size *= npages;
> @@ -921,7 +921,7 @@ int svm_migrate_init(struct amdgpu_device *adev)
>   pgmap->range.start = res->start;
>   pgmap->range.end = res->end;
>   pgmap->ops = _migrate_pgmap_ops;
> - pgmap->owner = adev;
> + pgmap->owner = SVM_ADEV_PGMAP_OWNER(adev);
>   pgmap->flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
>   r = devm_memremap_pages(adev->dev, pgmap);
>   if (IS_ERR(r)) {
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.h 
> b/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
> index a9af03994d1a..1f88bdfdbcc2 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.h
> @@ -35,6 +35,9 @@
>  #include "amdgpu.h"
>  #include "kfd_priv.h"
>  
> +#define SVM_ADEV_PGMAP_OWNER(adev)\
> + ((adev)->hive ? (void *)(adev)->hive : (void *)(adev))
> +
>  struct svm_range_bo {
>   struct amdgpu_bo*bo;
>   struct kref kref;
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
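To illustrate the hive-vs-adev owner selection discussed in this thread, here is a minimal, self-contained C sketch. The `struct xgmi_hive` / `struct gpu_dev` types and the `PGMAP_OWNER` macro below are simplified stand-ins for the real amdgpu structures, not driver code:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for amdgpu_device / amdgpu_hive_info; the real
 * structures live in the driver, this only models the owner selection. */
struct xgmi_hive { int hive_id; };
struct gpu_dev   { struct xgmi_hive *hive; };

/* Same idea as SVM_ADEV_PGMAP_OWNER in kfd_svm.h: GPUs in an XGMI hive
 * share one owner token; a standalone GPU is its own owner. */
#define PGMAP_OWNER(dev) \
	((dev)->hive ? (void *)(dev)->hive : (void *)(dev))

/* Mirrors the hmm_range_fault decision: device-private pages whose pgmap
 * owner matches the faulting device's owner need not be migrated back to
 * system memory, because the peer's VRAM is directly accessible. */
static int can_stay_device_private(struct gpu_dev *faulting,
				   struct gpu_dev *page_home)
{
	return PGMAP_OWNER(faulting) == PGMAP_OWNER(page_home);
}
```

With two GPUs in one hive the owner tokens compare equal, so no fault-back migration is needed; a GPU outside the hive compares unequal and its pages are migrated.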


Re: [PATCH 1/2] drm/amd/pm: update the gpu metrics data retrieving for Sienna Cichlid

2021-06-30 Thread Lazar, Lijo




On 6/30/2021 8:56 AM, Quan, Evan wrote:

[AMD Official Use Only]




-Original Message-
From: Lazar, Lijo 
Sent: Tuesday, June 29, 2021 9:38 PM
To: Quan, Evan ; amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander 
Subject: Re: [PATCH 1/2] drm/amd/pm: update the gpu metrics data
retrieving for Sienna Cichlid



On 6/25/2021 1:42 PM, Evan Quan wrote:

Due to the structure layout change: "uint32_t ThrottlerStatus" -> "
uint8_t  ThrottlingPercentage[THROTTLER_COUNT]".

Change-Id: Id5c148b0584d972ae73fb9d7347a312944cec13d
Signed-off-by: Evan Quan 
---
   .../pm/inc/smu11_driver_if_sienna_cichlid.h   |  63 -
   .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   | 234 ---

---

   2 files changed, 222 insertions(+), 75 deletions(-)

diff --git
a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
b/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
index 61c87c39be80..0b916a1933df 100644
--- a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
+++ b/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
@@ -211,6 +211,7 @@ typedef enum {
   #define THROTTLER_FIT_BIT  17
   #define THROTTLER_PPM_BIT  18
   #define THROTTLER_APCC_BIT 19
+#define THROTTLER_COUNT20

   // FW DState Features Control Bits
   // FW DState Features Control Bits
@@ -1406,7 +1407,67 @@ typedef struct {
   } SmuMetrics_t;

   typedef struct {
-  SmuMetrics_t SmuMetrics;
+  uint32_t CurrClock[PPCLK_COUNT];
+
+  uint16_t AverageGfxclkFrequencyPreDs;
+  uint16_t AverageGfxclkFrequencyPostDs;
+  uint16_t AverageFclkFrequencyPreDs;
+  uint16_t AverageFclkFrequencyPostDs;
+  uint16_t AverageUclkFrequencyPreDs  ;
+  uint16_t AverageUclkFrequencyPostDs  ;
+
+
+  uint16_t AverageGfxActivity;
+  uint16_t AverageUclkActivity   ;
+  uint8_t  CurrSocVoltageOffset  ;
+  uint8_t  CurrGfxVoltageOffset  ;
+  uint8_t  CurrMemVidOffset  ;
+  uint8_t  Padding8;
+  uint16_t AverageSocketPower;
+  uint16_t TemperatureEdge   ;
+  uint16_t TemperatureHotspot;
+  uint16_t TemperatureMem;
+  uint16_t TemperatureVrGfx  ;
+  uint16_t TemperatureVrMem0 ;
+  uint16_t TemperatureVrMem1 ;
+  uint16_t TemperatureVrSoc  ;
+  uint16_t TemperatureLiquid0;
+  uint16_t TemperatureLiquid1;
+  uint16_t TemperaturePlx;
+  uint16_t Padding16 ;
+  uint32_t AccCnt;
+  uint8_t  ThrottlingPercentage[THROTTLER_COUNT];
+
+
+  uint8_t  LinkDpmLevel;
+  uint8_t  CurrFanPwm;
+  uint16_t CurrFanSpeed;
+
+  //BACO metrics, PMFW-1721
+  //metrics for D3hot entry/exit and driver ARM msgs
+  uint8_t D3HotEntryCountPerMode[D3HOT_SEQUENCE_COUNT];
+  uint8_t D3HotExitCountPerMode[D3HOT_SEQUENCE_COUNT];
+  uint8_t ArmMsgReceivedCountPerMode[D3HOT_SEQUENCE_COUNT];
+
+  //PMFW-4362
+  uint32_t EnergyAccumulator;
+  uint16_t AverageVclk0Frequency  ;
+  uint16_t AverageDclk0Frequency  ;
+  uint16_t AverageVclk1Frequency  ;
+  uint16_t AverageDclk1Frequency  ;
+  uint16_t VcnActivityPercentage  ; //place holder, David N. to provide full sequence

+  uint8_t  PcieRate   ;
+  uint8_t  PcieWidth  ;
+  uint16_t AverageGfxclkFrequencyTarget;
+  uint16_t Padding16_2;
+
+} SmuMetrics_V2_t;
+
+typedef struct {
+  union {
+    SmuMetrics_t SmuMetrics;
+    SmuMetrics_V2_t SmuMetrics_V2;
+  };
 uint32_t Spare[1];

 // Padding - ignore
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
index 0c3407025eb2..f882c6756bf0 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
@@ -80,6 +80,13 @@
	(*member) = (smu->smu_table.driver_pptable + offsetof(PPTable_t, field));\
   } while(0)

+#define GET_METRICS_MEMBER(field, member) do { \
+   if ((smu->adev->asic_type == CHIP_SIENNA_CICHLID) && (smu->smc_fw_version >= 0x3A4300)) \
+   (*member) = ((void *)(&(((SmuMetricsExternal_t *)(smu->smu_table.metrics_table))->SmuMetrics_V2)) + offsetof(SmuMetrics_V2_t, field)); \
+   else \
+   (*member) = ((void *)(&(((SmuMetricsExternal_t *)(smu->smu_table.metrics_table))->SmuMetrics)) + offsetof(SmuMetrics_t, field)); \
+} while(0)
   static int get_table_size(struct smu_context *smu)
   {
	if (smu->adev->asic_type == CHIP_BEIGE_GOBY)
@@ -489,13 +496,33 @@ static int sienna_cichlid_tables_init(struct smu_context *smu)
	return -ENOMEM;
   }

+static uint32_t sienna_cichlid_get_throttler_status_locked(struct smu_context *smu)
+{
+   struct smu_table_context *smu_table = &smu->smu_table;
+   SmuMetricsExternal_t *metrics_ext =
+   (SmuMetricsExternal_t *)(smu_table->metrics_table);
+   uint32_t throttler_status = 0;
+   int i;
+
+   if ((smu->adev->asic_type == CHIP_SIENNA_CICHLID) && (smu->smc_fw_version >= 0x3A4300)) {
+   for (i = 0; i < THROTTLER_COUNT; 

[PATCH] drm/amdgpu: Put MODE register in wave debug info

2021-06-30 Thread Joseph Greathouse
Add the MODE register into the per-wave debug information.
This register holds state such as FP rounding and denorm
modes, which exceptions are enabled, and active clamping
modes.

Signed-off-by: Joseph Greathouse 
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 1 +
 drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c  | 1 +
 drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c  | 1 +
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c  | 1 +
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c  | 1 +
 5 files changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index ff7e9f49040e..abcd418e51f1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -4517,6 +4517,7 @@ static void gfx_v10_0_read_wave_data(struct amdgpu_device 
*adev, uint32_t simd,
dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_IB_STS2);
dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_IB_DBG1);
dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_M0);
+   dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_MODE);
 }
 
 static void gfx_v10_0_read_wave_sgprs(struct amdgpu_device *adev, uint32_t 
simd,
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
index 3a8d52a54873..6a8dadea40f9 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
@@ -3027,6 +3027,7 @@ static void gfx_v6_0_read_wave_data(struct amdgpu_device 
*adev, uint32_t simd, u
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_TMA_HI);
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, 
ixSQ_WAVE_IB_DBG0);
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_M0);
+   dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_MODE);
 }
 
 static void gfx_v6_0_read_wave_sgprs(struct amdgpu_device *adev, uint32_t simd,
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
index 685212c3ddae..37b4a3db6360 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
@@ -4198,6 +4198,7 @@ static void gfx_v7_0_read_wave_data(struct amdgpu_device 
*adev, uint32_t simd, u
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_TMA_HI);
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, 
ixSQ_WAVE_IB_DBG0);
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_M0);
+   dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_MODE);
 }
 
 static void gfx_v7_0_read_wave_sgprs(struct amdgpu_device *adev, uint32_t simd,
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index c26e06059466..e0302c23e9a7 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -5279,6 +5279,7 @@ static void gfx_v8_0_read_wave_data(struct amdgpu_device 
*adev, uint32_t simd, u
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_TMA_HI);
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, 
ixSQ_WAVE_IB_DBG0);
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_M0);
+   dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_MODE);
 }
 
 static void gfx_v8_0_read_wave_sgprs(struct amdgpu_device *adev, uint32_t simd,
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 044076ec1d03..d42363fcf068 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -2080,6 +2080,7 @@ static void gfx_v9_0_read_wave_data(struct amdgpu_device 
*adev, uint32_t simd, u
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_IB_STS);
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, 
ixSQ_WAVE_IB_DBG0);
dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_M0);
+   dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_MODE);
 }
 
 static void gfx_v9_0_read_wave_sgprs(struct amdgpu_device *adev, uint32_t simd,
-- 
2.20.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 1/2] drm/amd/pm: update the gpu metrics data retrieving for Sienna Cichlid

2021-06-30 Thread Quan, Evan
[AMD Official Use Only]



> -Original Message-
> From: Lazar, Lijo 
> Sent: Tuesday, June 29, 2021 9:38 PM
> To: Quan, Evan ; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander 
> Subject: Re: [PATCH 1/2] drm/amd/pm: update the gpu metrics data
> retrieving for Sienna Cichlid
> 
> 
> 
> On 6/25/2021 1:42 PM, Evan Quan wrote:
> > Due to the structure layout change: "uint32_t ThrottlerStatus" -> "
> > uint8_t  ThrottlingPercentage[THROTTLER_COUNT]".
> >
> > Change-Id: Id5c148b0584d972ae73fb9d7347a312944cec13d
> > Signed-off-by: Evan Quan 
> > ---
> >   .../pm/inc/smu11_driver_if_sienna_cichlid.h   |  63 -
> >   .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   | 234 ---
> ---
> >   2 files changed, 222 insertions(+), 75 deletions(-)
> >
> > diff --git
> > a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> > b/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> > index 61c87c39be80..0b916a1933df 100644
> > --- a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> > +++ b/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> > @@ -211,6 +211,7 @@ typedef enum {
> >   #define THROTTLER_FIT_BIT  17
> >   #define THROTTLER_PPM_BIT  18
> >   #define THROTTLER_APCC_BIT 19
> > +#define THROTTLER_COUNT20
> >
> >   // FW DState Features Control Bits
> >   // FW DState Features Control Bits
> > @@ -1406,7 +1407,67 @@ typedef struct {
> >   } SmuMetrics_t;
> >
> >   typedef struct {
> > -  SmuMetrics_t SmuMetrics;
> > +  uint32_t CurrClock[PPCLK_COUNT];
> > +
> > +  uint16_t AverageGfxclkFrequencyPreDs;
> > +  uint16_t AverageGfxclkFrequencyPostDs;
> > +  uint16_t AverageFclkFrequencyPreDs;
> > +  uint16_t AverageFclkFrequencyPostDs;
> > +  uint16_t AverageUclkFrequencyPreDs  ;
> > +  uint16_t AverageUclkFrequencyPostDs  ;
> > +
> > +
> > +  uint16_t AverageGfxActivity;
> > +  uint16_t AverageUclkActivity   ;
> > +  uint8_t  CurrSocVoltageOffset  ;
> > +  uint8_t  CurrGfxVoltageOffset  ;
> > +  uint8_t  CurrMemVidOffset  ;
> > +  uint8_t  Padding8;
> > +  uint16_t AverageSocketPower;
> > +  uint16_t TemperatureEdge   ;
> > +  uint16_t TemperatureHotspot;
> > +  uint16_t TemperatureMem;
> > +  uint16_t TemperatureVrGfx  ;
> > +  uint16_t TemperatureVrMem0 ;
> > +  uint16_t TemperatureVrMem1 ;
> > +  uint16_t TemperatureVrSoc  ;
> > +  uint16_t TemperatureLiquid0;
> > +  uint16_t TemperatureLiquid1;
> > +  uint16_t TemperaturePlx;
> > +  uint16_t Padding16 ;
> > +  uint32_t AccCnt;
> > +  uint8_t  ThrottlingPercentage[THROTTLER_COUNT];
> > +
> > +
> > +  uint8_t  LinkDpmLevel;
> > +  uint8_t  CurrFanPwm;
> > +  uint16_t CurrFanSpeed;
> > +
> > +  //BACO metrics, PMFW-1721
> > +  //metrics for D3hot entry/exit and driver ARM msgs
> > +  uint8_t D3HotEntryCountPerMode[D3HOT_SEQUENCE_COUNT];
> > +  uint8_t D3HotExitCountPerMode[D3HOT_SEQUENCE_COUNT];
> > +  uint8_t ArmMsgReceivedCountPerMode[D3HOT_SEQUENCE_COUNT];
> > +
> > +  //PMFW-4362
> > +  uint32_t EnergyAccumulator;
> > +  uint16_t AverageVclk0Frequency  ;
> > +  uint16_t AverageDclk0Frequency  ;
> > +  uint16_t AverageVclk1Frequency  ;
> > +  uint16_t AverageDclk1Frequency  ;
> > +  uint16_t VcnActivityPercentage  ; //place holder, David N. to provide full sequence
> > +  uint8_t  PcieRate   ;
> > +  uint8_t  PcieWidth  ;
> > +  uint16_t AverageGfxclkFrequencyTarget;
> > +  uint16_t Padding16_2;
> > +
> > +} SmuMetrics_V2_t;
> > +
> > +typedef struct {
> > +  union {
> > +    SmuMetrics_t SmuMetrics;
> > +    SmuMetrics_V2_t SmuMetrics_V2;
> > +  };
> > uint32_t Spare[1];
> >
> > // Padding - ignore
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> > index 0c3407025eb2..f882c6756bf0 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> > @@ -80,6 +80,13 @@
> > 	(*member) = (smu->smu_table.driver_pptable + offsetof(PPTable_t, field));\
> >   } while(0)
> >
> > +#define GET_METRICS_MEMBER(field, member) do { \
> > +   if ((smu->adev->asic_type == CHIP_SIENNA_CICHLID) && (smu->smc_fw_version >= 0x3A4300)) \
> > +   (*member) = ((void *)(&(((SmuMetricsExternal_t *)(smu->smu_table.metrics_table))->SmuMetrics_V2)) + offsetof(SmuMetrics_V2_t, field)); \
> > +   else \
> > +   (*member) = ((void *)(&(((SmuMetricsExternal_t *)(smu->smu_table.metrics_table))->SmuMetrics)) + offsetof(SmuMetrics_t, field)); \
> > +} while(0)
> > +
> >   static int get_table_size(struct smu_context *smu)
> >   {
> > 	if (smu->adev->asic_type == CHIP_BEIGE_GOBY)
> > @@ -489,13 +496,33 @@ static int sienna_cichlid_tables_init(struct smu_context *smu)
> > return -ENOMEM;
> >   }
> >
> > +static uint32_t sienna_cichlid_get_throttler_status_locked(struct
> > 
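The GET_METRICS_MEMBER macro under review resolves a field's address against one of two metrics layouts, selected by firmware version. A minimal user-space sketch of that offsetof-over-union pattern follows; the cut-down `MetricsV1` / `MetricsV2` layouts are hypothetical stand-ins, not the real SmuMetrics structs:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Cut-down hypothetical layouts; the real SmuMetrics_t / SmuMetrics_V2_t
 * carry many more fields, at different offsets in each revision. */
typedef struct {
	uint32_t ThrottlerStatus;          /* old: packed status bits */
	uint16_t AverageSocketPower;
} MetricsV1;

typedef struct {
	uint8_t  ThrottlingPercentage[20]; /* new: per-throttler percentage */
	uint16_t AverageSocketPower;
} MetricsV2;

typedef struct {
	union {
		MetricsV1 v1;
		MetricsV2 v2;
	};
} MetricsExternal;

/* Same pattern as GET_METRICS_MEMBER: pick the layout by firmware
 * version and add the field's offsetof, so callers never hardcode
 * which struct revision is live in the shared table. */
static void *metrics_member(MetricsExternal *m, int fw_is_v2,
			    size_t off_v1, size_t off_v2)
{
	return fw_is_v2 ? (void *)((char *)&m->v2 + off_v2)
			: (void *)((char *)&m->v1 + off_v1);
}
```

Callers pass `offsetof(MetricsV1, field)` and `offsetof(MetricsV2, field)` and get back a pointer into whichever layout the firmware actually wrote, which is the design choice the macro trades a little type safety for.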

Re: [PATCH] drm/amdgpu/display: drop unused variable

2021-06-30 Thread Christian König

Am 29.06.21 um 23:02 schrieb Alex Deucher:

Remove unused variable.

Fixes: 00858131205f69 ("Revert "drm/amd/display: Fix overlay validation by considering 
cursors"")
Signed-off-by: Alex Deucher 


Reviewed-by: Christian König 


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index f14b5468b7ee..e034017daa1a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -10221,7 +10221,7 @@ static int validate_overlay(struct drm_atomic_state 
*state)
  {
int i;
struct drm_plane *plane;
-   struct drm_plane_state *old_plane_state, *new_plane_state;
+   struct drm_plane_state *new_plane_state;
struct drm_plane_state *primary_state, *overlay_state = NULL;
  
  	/* Check if primary plane is contained inside overlay */


___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay

2021-06-30 Thread Wang, YuBiao
[AMD Official Use Only]

Hello Christian,

Thank you for your comment.
Others have suggested that it isn't necessary to keep checking such a low 
probability issue and we may let it fail if we continuously hit this condition. 
So I've sent a v3 patch which change the while loop into a one-time if 
condition. Could you review that version? I believe you could find it in 
another review request email.

Thank you.
Yubiao Wang

-Original Message-
From: Koenig, Christian  
Sent: Tuesday, June 29, 2021 7:16 PM
To: Wang, YuBiao ; amd-gfx@lists.freedesktop.org
Cc: Grodzovsky, Andrey ; Quan, Evan 
; Chen, Horace ; Tuikov, Luben 
; Deucher, Alexander ; Xiao, 
Jack ; Zhang, Hawking ; Liu, Monk 
; Xu, Feifei ; Wang, Kevin(Yang) 

Subject: Re: [PATCH 1/1] drm/amdgpu: Read clock counter via MMIO to reduce delay

Am 29.06.21 um 11:47 schrieb YuBiao Wang:
> [Why]
> GPU timing counters are read via KIQ under sriov, which will introduce 
> a delay.
>
> [How]
> They can be read directly via MMIO instead.
>
> v2: Add additional check to prevent carryover issue.
>
> Signed-off-by: YuBiao Wang 
> ---
>   drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 13 +++--
>   1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c 
> b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> index ff7e9f49040e..191b9e3ee3ea 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> @@ -7610,6 +7610,7 @@ static int gfx_v10_0_soft_reset(void *handle)
>   static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
>   {
>   uint64_t clock;
> + uint64_t clock_lo, clock_hi, hi_check;
>   
>   amdgpu_gfx_off_ctrl(adev, false);
>   mutex_lock(&adev->gfx.gpu_clock_mutex);
> @@ -7620,8 +7621,16 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct 
> amdgpu_device *adev)
>   ((uint64_t)RREG32_SOC15(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER_Vangogh) << 32ULL);
>   break;
>   default:
> - clock = (uint64_t)RREG32_SOC15(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_LOWER) |
> - ((uint64_t)RREG32_SOC15(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER) << 32ULL);
> + clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER);
> + clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_LOWER);
> + hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER);
> + // If carry happens, continuously read until no carry happens
> + while (hi_check != clock_hi) {
> + clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_LOWER);
> + clock_hi = hi_check;
> + hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, 
> mmGOLDEN_TSC_COUNT_UPPER);
> + }

This could be refined into:

do {
     clock_hi = READ_...
     clock_lo = READ_
} while (unlikely(clock_hi != READ_))

Apart from that looks like a good idea to me.

Regards,
Christian.

> + clock = (uint64_t)clock_lo | ((uint64_t)clock_hi << 32ULL);
>   break;
>   }
>   mutex_unlock(&adev->gfx.gpu_clock_mutex);
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
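Christian's do/while suggestion generalizes to any 64-bit counter exposed as two 32-bit registers: re-read the high half until it is stable across the low-half read, so a low-word rollover can never tear the result. A self-contained sketch follows; the simulated, ticking counter stands in for the GOLDEN_TSC_COUNT MMIO reads and is illustrative only:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated 64-bit counter standing in for GOLDEN_TSC_COUNT_{LOWER,UPPER};
 * each register read advances it by `tick` to model the counter racing
 * against the CPU between the two MMIO accesses. */
static uint64_t sim_counter;
static uint64_t tick;

static uint32_t read_lower(void)
{
	sim_counter += tick;
	return (uint32_t)sim_counter;
}

static uint32_t read_upper(void)
{
	sim_counter += tick;
	return (uint32_t)(sim_counter >> 32);
}

/* Carry-safe 64-bit read: if the high word changed while the low word
 * was being read, a rollover happened in between, so retry. */
static uint64_t read_clock64(void)
{
	uint32_t hi, lo;

	do {
		hi = read_upper();
		lo = read_lower();
	} while (hi != read_upper());

	return ((uint64_t)hi << 32) | lo;
}
```

With a real counter the retry triggers at most once per low-word rollover (every 2^32 ticks), so the loop is effectively bounded; a naive lower-then-upper read taken across the rollover would return a value off by roughly 4 billion ticks.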


RE: [PATCH] drm/amdgpu: Conditionally reset SDMA RAS error counts

2021-06-30 Thread Clements, John
[AMD Official Use Only - Internal Distribution Only]

Reviewed-by: John Clements 

-Original Message-
From: Joshi, Mukul  
Sent: Wednesday, June 30, 2021 8:25 AM
To: amd-gfx@lists.freedesktop.org
Cc: Clements, John ; Zhang, Hawking 
; Joshi, Mukul 
Subject: [PATCH] drm/amdgpu: Conditionally reset SDMA RAS error counts

Reset SDMA RAS error counts during init only if persistent EDC harvesting is 
not supported.

Signed-off-by: Mukul Joshi 
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c 
b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index f6881d99609b..8931000dcd41 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -1896,8 +1896,11 @@ static int sdma_v4_0_late_init(void *handle)
 
sdma_v4_0_setup_ulv(adev);
 
-   if (adev->sdma.funcs && adev->sdma.funcs->reset_ras_error_count)
-   adev->sdma.funcs->reset_ras_error_count(adev);
+   if (!amdgpu_persistent_edc_harvesting_supported(adev)) {
+   if (adev->sdma.funcs &&
+   adev->sdma.funcs->reset_ras_error_count)
+   adev->sdma.funcs->reset_ras_error_count(adev);
+   }
 
if (adev->sdma.funcs && adev->sdma.funcs->ras_late_init)
return adev->sdma.funcs->ras_late_init(adev, _info);
--
2.17.1
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


RE: [PATCH] drm/amdgpu: add new dimgrey cavefish DID

2021-06-30 Thread Chen, Guchun
[Public]

Reviewed-by: Guchun Chen 

Regards,
Guchun

-Original Message-
From: amd-gfx  On Behalf Of Alex Deucher
Sent: Wednesday, June 30, 2021 3:53 AM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander 
Subject: [PATCH] drm/amdgpu: add new dimgrey cavefish DID

Add new PCI device id.

Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 1a110b06cb6e..6419d75f1f18 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -1198,6 +1198,7 @@ static const struct pci_device_id pciidlist[] = {
{0x1002, 0x73E0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
{0x1002, 0x73E1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
{0x1002, 0x73E2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
+   {0x1002, 0x73E3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
{0x1002, 0x73FF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
 
/* Aldebaran */
-- 
2.31.1

___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx