RE: [PATCH 0/7] DC Patches Nov 20 2023

2023-11-24 Thread Wheeler, Daniel
[Public]

Hi all,

This week this patchset was tested on the following systems:
* Lenovo ThinkBook T13s Gen4 with AMD Ryzen 5 6600U
* MSI Gaming X Trio RX 6800
* Gigabyte Gaming OC RX 7900 XTX

These systems were tested on the following display/connection types:
* eDP (1080p 60Hz [5650U], 1920x1200 60Hz [6600U], 2560x1600 120Hz [6600U])
* VGA and DVI (1680x1050 60Hz [DP to VGA/DVI, USB-C to VGA/DVI])
* DP/HDMI/USB-C (1440p 170Hz, 4K 60Hz, 4K 144Hz, 4K 240Hz [includes USB-C to DP/HDMI adapters])
* Thunderbolt (LG UltraFine 5K)
* MST (StarTech MST14DP123DP [DP to 3x DP] and 2x 4K 60Hz displays)
* DSC (with Cable Matters 101075 [DP to 3x DP] with 3x 4K 60Hz displays, and HP Hook G2 with 1x 4K 60Hz display)
* USB4 (Kensington SD5700T and 1x 4K 60Hz display)
* PCON (Club3D CAC-1085 and 1x 4K 144Hz display [at 4K 120Hz, as that is the max the adapter supports])

The testing is a mix of automated and manual tests. Manual testing includes 
(but is not limited to):
* Changing display configurations and settings
* Benchmark testing
* Feature testing (Freesync, etc.)

Automated testing includes (but is not limited to):
* Script testing (scripts to automate some of the manual checks)
* IGT testing

The patchset consists of the amd-staging-drm-next branch (Head commit - 
d1619cb96246076df2a5b4a10055c51836584001  drm/amd/display: 3.2.261) with new 
patches added on top of it.

Tested on Ubuntu 22.04.3, on Wayland and X11, using KDE Plasma and Gnome.


Tested-by: Daniel Wheeler 


Thank you,

Dan Wheeler
Sr. Technologist | AMD
SW Display
--
1 Commerce Valley Dr E, Thornhill, ON L3T 7X6
amd.com

-Original Message-
From: Tom Chung 
Sent: Wednesday, November 22, 2023 1:59 AM
To: amd-gfx@lists.freedesktop.org
Cc: Wentland, Harry ; Li, Sun peng (Leo) 
; Siqueira, Rodrigo ; Pillai, 
Aurabindo ; Li, Roman ; Lin, Wayne 
; Wang, Chao-kai (Stylon) ; Gutierrez, 
Agustin ; Chung, ChiaHsuan (Tom) 
; Wu, Hersen ; Zuo, Jerry 
; Wheeler, Daniel 
Subject: [PATCH 0/7] DC Patches Nov 20 2023

This DC patchset brings improvements in multiple areas. In summary, we have:

- Add DSC granular throughput adjustment
- Allow DTBCLK disable for DCN35
- Update Fixed VS/PE Retimer Sequence
- Block dcn315 dynamic crb allocation when unintended
- Update dcn315 lpddr pstate latency

Cc: Daniel Wheeler 

Anthony Koo (1):
  drm/amd/display: [FW Promotion] Release 0.0.194.0

Aric Cyr (1):
  drm/amd/display: Promote DAL to 3.2.262

Dmytro Laktyushkin (2):
  drm/amd/display: update dcn315 lpddr pstate latency
  drm/amd/display: block dcn315 dynamic crb allocation when unintended

Ilya Bakoulin (1):
  drm/amd/display: Add DSC granular throughput adjustment

Michael Strauss (1):
  drm/amd/display: Update Fixed VS/PE Retimer Sequence

Nicholas Kazlauskas (1):
  drm/amd/display: Allow DTBCLK disable for DCN35

 .../dc/clk_mgr/dcn315/dcn315_clk_mgr.c|  8 +++---
 .../display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c  | 27 +--
 drivers/gpu/drm/amd/display/dc/dc.h   |  2 +-
 drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c   | 10 +--
 .../link_dp_training_fixed_vs_pe_retimer.c| 10 +++
 .../dc/resource/dcn315/dcn315_resource.c  |  6 +++--
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |  4 +++
 7 files changed, 43 insertions(+), 24 deletions(-)

--
2.25.1



Re: [PATCH v3] drm/amdkfd: Run restore_workers on freezable WQs

2023-11-24 Thread Lazar, Lijo




On 11/24/2023 4:25 AM, Felix Kuehling wrote:

Make restore workers freezable so we don't have to explicitly flush them
in suspend and GPU reset code paths, and we don't accidentally try to
restore BOs while the GPU is suspended. Not having to flush restore_work
also helps avoid lock/fence dependencies in the GPU reset case where we're
not allowed to wait for fences.

A side effect of this is that we can now have multiple concurrent threads
trying to signal the same eviction fence. Rework eviction fence signaling
and replacement to account for that.

The GPU reset path can no longer rely on restore_process_worker to resume
queues because evict/restore workers can run independently of it. Instead
call a new restore_process_helper directly.


Not familiar with this code. For clarity, does this mean 
eviction/restore may happen while a reset is in progress?


Thanks,
Lijo



This is an RFC and request for testing.

v2:
- Reworked eviction fence signaling
- Introduced restore_process_helper

v3:
- Handle unsignaled eviction fences in restore_process_bos

Signed-off-by: Felix Kuehling 
Acked-by: Christian König 
---
  .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  | 68 +++
  drivers/gpu/drm/amd/amdkfd/kfd_process.c  | 87 +++
  drivers/gpu/drm/amd/amdkfd/kfd_svm.c  |  4 +-
  3 files changed, 104 insertions(+), 55 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 2e302956a279..bdec88713a09 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1431,7 +1431,6 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void 
**process_info,
  amdgpu_amdkfd_restore_userptr_worker);
  
  		*process_info = info;

-   *ef = dma_fence_get(&info->eviction_fence->base);
}
  
  	vm->process_info = *process_info;

@@ -1462,6 +1461,8 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void 
**process_info,
list_add_tail(&vm->vm_list_node,
&(vm->process_info->vm_list_head));
vm->process_info->n_vms++;
+
+   *ef = dma_fence_get(&vm->process_info->eviction_fence->base);
mutex_unlock(&vm->process_info->lock);
  
  	return 0;

@@ -1473,10 +1474,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void 
**process_info,
  reserve_pd_fail:
vm->process_info = NULL;
if (info) {
-   /* Two fence references: one in info and one in *ef */
dma_fence_put(&info->eviction_fence->base);
-   dma_fence_put(*ef);
-   *ef = NULL;
*process_info = NULL;
put_pid(info->pid);
  create_evict_fence_fail:
@@ -1670,7 +1668,8 @@ int amdgpu_amdkfd_criu_resume(void *p)
goto out_unlock;
}
WRITE_ONCE(pinfo->block_mmu_notifications, false);
-   schedule_delayed_work(&pinfo->restore_userptr_work, 0);
+   queue_delayed_work(system_freezable_wq,
+  &pinfo->restore_userptr_work, 0);
  
  out_unlock:

mutex_unlock(&pinfo->lock);
@@ -2475,7 +2474,8 @@ int amdgpu_amdkfd_evict_userptr(struct 
mmu_interval_notifier *mni,
   KFD_QUEUE_EVICTION_TRIGGER_USERPTR);
if (r)
pr_err("Failed to quiesce KFD\n");
-   schedule_delayed_work(&process_info->restore_userptr_work,
+   queue_delayed_work(system_freezable_wq,
+   &process_info->restore_userptr_work,
msecs_to_jiffies(AMDGPU_USERPTR_RESTORE_DELAY_MS));
}
mutex_unlock(&process_info->notifier_lock);
@@ -2810,7 +2810,8 @@ static void amdgpu_amdkfd_restore_userptr_worker(struct 
work_struct *work)
  
  	/* If validation failed, reschedule another attempt */

if (evicted_bos) {
-   schedule_delayed_work(&process_info->restore_userptr_work,
+   queue_delayed_work(system_freezable_wq,
+   &process_info->restore_userptr_work,
msecs_to_jiffies(AMDGPU_USERPTR_RESTORE_DELAY_MS));
  
  		kfd_smi_event_queue_restore_rescheduled(mm);

@@ -2819,6 +2820,23 @@ static void amdgpu_amdkfd_restore_userptr_worker(struct 
work_struct *work)
put_task_struct(usertask);
  }
  
+static void replace_eviction_fence(struct dma_fence **ef,

+  struct dma_fence *new_ef)
+{
+   struct dma_fence *old_ef = rcu_replace_pointer(*ef, new_ef, true
+   /* protected by process_info->lock */);
+
+   /* If we're replacing an unsignaled eviction fence, that fence will
+* never be signaled, and if anyone is still waiting on that fence,
+* they will hang forever. This should never happen. We should only
+* replace the fence in restore_work that only gets scheduled after
+* eviction work signaled the fence.

RE: [PATCH v3] drm/amdkfd: Run restore_workers on freezable WQs

2023-11-24 Thread Deng, Emily
[AMD Official Use Only - General]

Tested-by: Emily Deng 

>-Original Message-
>From: Kuehling, Felix 
>Sent: Friday, November 24, 2023 6:55 AM
>To: amd-gfx@lists.freedesktop.org
>Cc: Deng, Emily ; Pan, Xinhui
>; Koenig, Christian 
>Subject: [PATCH v3] drm/amdkfd: Run restore_workers on freezable WQs
>
>Make restore workers freezable so we don't have to explicitly flush them in
>suspend and GPU reset code paths, and we don't accidentally try to restore
>BOs while the GPU is suspended. Not having to flush restore_work also helps
>avoid lock/fence dependencies in the GPU reset case where we're not allowed
>to wait for fences.
>
>A side effect of this is that we can now have multiple concurrent threads 
>trying
>to signal the same eviction fence. Rework eviction fence signaling and
>replacement to account for that.
>
>The GPU reset path can no longer rely on restore_process_worker to resume
>queues because evict/restore workers can run independently of it. Instead call
>a new restore_process_helper directly.
>
>This is an RFC and request for testing.
>
>v2:
>- Reworked eviction fence signaling
>- Introduced restore_process_helper
>
>v3:
>- Handle unsignaled eviction fences in restore_process_bos
>
>Signed-off-by: Felix Kuehling 
>Acked-by: Christian König 
>---
> .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  | 68 +++
> drivers/gpu/drm/amd/amdkfd/kfd_process.c  | 87 +++
> drivers/gpu/drm/amd/amdkfd/kfd_svm.c  |  4 +-
> 3 files changed, 104 insertions(+), 55 deletions(-)
>
>diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
>b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
>index 2e302956a279..bdec88713a09 100644
>--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
>+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
>@@ -1431,7 +1431,6 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void
>**process_info,
> amdgpu_amdkfd_restore_userptr_worker);
>
>   *process_info = info;
>-  *ef = dma_fence_get(&info->eviction_fence->base);
>   }
>
>   vm->process_info = *process_info;
>@@ -1462,6 +1461,8 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void
>**process_info,
>   list_add_tail(&vm->vm_list_node,
>   &(vm->process_info->vm_list_head));
>   vm->process_info->n_vms++;
>+
>+  *ef = dma_fence_get(&vm->process_info->eviction_fence->base);
>   mutex_unlock(&vm->process_info->lock);
>
>   return 0;
>@@ -1473,10 +1474,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void
>**process_info,
> reserve_pd_fail:
>   vm->process_info = NULL;
>   if (info) {
>-  /* Two fence references: one in info and one in *ef */
>   dma_fence_put(&info->eviction_fence->base);
>-  dma_fence_put(*ef);
>-  *ef = NULL;
>   *process_info = NULL;
>   put_pid(info->pid);
> create_evict_fence_fail:
>@@ -1670,7 +1668,8 @@ int amdgpu_amdkfd_criu_resume(void *p)
>   goto out_unlock;
>   }
>   WRITE_ONCE(pinfo->block_mmu_notifications, false);
>-  schedule_delayed_work(&pinfo->restore_userptr_work, 0);
>+  queue_delayed_work(system_freezable_wq,
>+ &pinfo->restore_userptr_work, 0);
>
> out_unlock:
>   mutex_unlock(&pinfo->lock);
>@@ -2475,7 +2474,8 @@ int amdgpu_amdkfd_evict_userptr(struct
>mmu_interval_notifier *mni,
>
>KFD_QUEUE_EVICTION_TRIGGER_USERPTR);
>   if (r)
>   pr_err("Failed to quiesce KFD\n");
>-  schedule_delayed_work(&process_info-
>>restore_userptr_work,
>+  queue_delayed_work(system_freezable_wq,
>+  &process_info->restore_userptr_work,
>
>   msecs_to_jiffies(AMDGPU_USERPTR_RESTORE_DELAY_MS));
>   }
>   mutex_unlock(&process_info->notifier_lock);
>@@ -2810,7 +2810,8 @@ static void
>amdgpu_amdkfd_restore_userptr_worker(struct work_struct *work)
>
>   /* If validation failed, reschedule another attempt */
>   if (evicted_bos) {
>-  schedule_delayed_work(&process_info-
>>restore_userptr_work,
>+  queue_delayed_work(system_freezable_wq,
>+  &process_info->restore_userptr_work,
>
>   msecs_to_jiffies(AMDGPU_USERPTR_RESTORE_DELAY_MS));
>
>   kfd_smi_event_queue_restore_rescheduled(mm);
>@@ -2819,6 +2820,23 @@ static void
>amdgpu_amdkfd_restore_userptr_worker(struct work_struct *work)
>   put_task_struct(usertask);
> }
>
>+static void replace_eviction_fence(struct dma_fence **ef,
>+ struct dma_fence *new_ef)
>+{
>+  struct dma_fence *old_ef = rcu_replace_pointer(*ef, new_ef, true
>+  /* protected by process_info->lock */);
>+
>+  /* If we're replacing an unsignaled eviction fence, that fence will
>+   * never be signaled, and if anyone is still waiting on that fence,
>+   * they will hang forever. This should never happen. 

Re: [PATCH] drm/amdgpu: Fix cat debugfs amdgpu_regs_didt causes kernel null pointer

2023-11-24 Thread kernel test robot
Hi Lu,

kernel test robot noticed the following build errors:

[auto build test ERROR on drm-misc/drm-misc-next]
[also build test ERROR on linus/master v6.7-rc2 next-20231124]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:
https://github.com/intel-lab-lkp/linux/commits/Lu-Yao/drm-amdgpu-Fix-cat-debugfs-amdgpu_regs_didt-causes-kernel-null-pointer/20231122-203138
base:   git://anongit.freedesktop.org/drm/drm-misc drm-misc-next
patch link:
https://lore.kernel.org/r/20231122093509.34302-1-yaolu%40kylinos.cn
patch subject: [PATCH] drm/amdgpu: Fix cat debugfs amdgpu_regs_didt causes 
kernel null pointer
config: x86_64-randconfig-001-20231123 
(https://download.01.org/0day-ci/archive/20231124/202311241442.f0s4bazk-...@intel.com/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git 
ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): 
(https://download.01.org/0day-ci/archive/20231124/202311241442.f0s4bazk-...@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot 
| Closes: 
https://lore.kernel.org/oe-kbuild-all/202311241442.f0s4bazk-...@intel.com/

All errors (new ones prefixed by >>):

   warning: unknown warning option '-Wstringop-truncation'; did you mean 
'-Wstring-concatenation'? [-Wunknown-warning-option]
   warning: unknown warning option '-Wpacked-not-aligned'; did you mean 
'-Wpacked-non-pod'? [-Wunknown-warning-option]
>> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:642:55: error: use of undeclared 
>> identifier '__FUNC__'
   dev_err(adev->dev, "%s adev->didt_rreg is null!\n", 
__FUNC__);
   ^
   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:703:55: error: use of undeclared 
identifier '__FUNC__'
   dev_err(adev->dev, "%s adev->didt_wreg is null!\n", 
__FUNC__);
   ^
   2 warnings and 2 errors generated.


vim +/__FUNC__ +642 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c

   618  
   619  /**
   620   * amdgpu_debugfs_regs_didt_read - Read from a DIDT register
   621   *
   622   * @f: open file handle
   623   * @buf: User buffer to store read data in
   624   * @size: Number of bytes to read
   625   * @pos:  Offset to seek to
   626   *
   627   * The lower bits are the BYTE offset of the register to read.  This
   628   * allows reading multiple registers in a single call and having
   629   * the returned size reflect that.
   630   */
   631  static ssize_t amdgpu_debugfs_regs_didt_read(struct file *f, char 
__user *buf,
   632  size_t size, loff_t *pos)
   633  {
   634  struct amdgpu_device *adev = file_inode(f)->i_private;
   635  ssize_t result = 0;
   636  int r;
   637  
   638  if (size & 0x3 || *pos & 0x3)
   639  return -EINVAL;
   640  
   641  if (adev->didt_rreg == NULL) {
 > 642  dev_err(adev->dev, "%s adev->didt_rreg is null!\n", 
 > __FUNC__);
   643  return -EPERM;
   644  }
   645  
   646  r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
   647  if (r < 0) {
   648  pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
   649  return r;
   650  }
   651  
   652  r = amdgpu_virt_enable_access_debugfs(adev);
   653  if (r < 0) {
   654  pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
   655  return r;
   656  }
   657  
   658  while (size) {
   659  uint32_t value;
   660  
   661  value = RREG32_DIDT(*pos >> 2);
   662  r = put_user(value, (uint32_t *)buf);
   663  if (r)
   664  goto out;
   665  
   666  result += 4;
   667  buf += 4;
   668  *pos += 4;
   669  size -= 4;
   670  }
   671  
   672  r = result;
   673  out:
   674  pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
   675  pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
   676  amdgpu_virt_disable_access_debugfs(adev);
   677  return r;
   678  }
   679  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


[PATCH v5 00/20] remove I2C_CLASS_DDC support

2023-11-24 Thread Heiner Kallweit
After removal of the legacy EEPROM driver and I2C_CLASS_DDC support in
olpc_dcon there's no i2c client driver left supporting I2C_CLASS_DDC.
Class-based device auto-detection is a legacy mechanism and shouldn't
be used in new code. So we can remove this class completely now.

Preferably this series should be applied via the i2c tree.

v2:
- change tag in commit subject of patch 03
- add ack tags
v3:
- fix a compile error in patch 5
v4:
- more ack and review tags
v5:
- more acks

Signed-off-by: Heiner Kallweit 

---

 drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c   |1 -
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |1 -
 drivers/gpu/drm/ast/ast_i2c.c |1 -
 drivers/gpu/drm/bridge/synopsys/dw-hdmi.c |1 -
 drivers/gpu/drm/display/drm_dp_helper.c   |1 -
 drivers/gpu/drm/display/drm_dp_mst_topology.c |1 -
 drivers/gpu/drm/gma500/cdv_intel_dp.c |1 -
 drivers/gpu/drm/gma500/intel_gmbus.c  |1 -
 drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c|1 -
 drivers/gpu/drm/gma500/psb_intel_sdvo.c   |1 -
 drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c   |1 -
 drivers/gpu/drm/i915/display/intel_gmbus.c|1 -
 drivers/gpu/drm/i915/display/intel_sdvo.c |1 -
 drivers/gpu/drm/loongson/lsdc_i2c.c   |1 -
 drivers/gpu/drm/mediatek/mtk_hdmi_ddc.c   |1 -
 drivers/gpu/drm/mgag200/mgag200_i2c.c |1 -
 drivers/gpu/drm/msm/hdmi/hdmi_i2c.c   |1 -
 drivers/gpu/drm/radeon/radeon_i2c.c   |1 -
 drivers/gpu/drm/rockchip/inno_hdmi.c  |1 -
 drivers/gpu/drm/rockchip/rk3066_hdmi.c|1 -
 drivers/gpu/drm/sun4i/sun4i_hdmi_i2c.c|1 -
 drivers/video/fbdev/core/fb_ddc.c |1 -
 drivers/video/fbdev/cyber2000fb.c |1 -
 drivers/video/fbdev/i740fb.c  |1 -
 drivers/video/fbdev/intelfb/intelfb_i2c.c |   15 +--
 drivers/video/fbdev/matrox/i2c-matroxfb.c |   12 
 drivers/video/fbdev/s3fb.c|1 -
 drivers/video/fbdev/tdfxfb.c  |1 -
 drivers/video/fbdev/tridentfb.c   |1 -
 drivers/video/fbdev/via/via_i2c.c |1 -
 include/linux/i2c.h   |1 -
 31 files changed, 9 insertions(+), 47 deletions(-)


[PATCH v5 03/20] drm/amd/display: remove I2C_CLASS_DDC support

2023-11-24 Thread Heiner Kallweit
After removal of the legacy EEPROM driver and I2C_CLASS_DDC support in
olpc_dcon there's no i2c client driver left supporting I2C_CLASS_DDC.
Class-based device auto-detection is a legacy mechanism and shouldn't
be used in new code. So we can remove this class completely now.

Preferably this series should be applied via the i2c tree.

Acked-by: Harry Wentland 
Acked-by: Alex Deucher 
Acked-by: Thomas Zimmermann 
Signed-off-by: Heiner Kallweit 

---
v2:
- adjust tag in commit subject
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 6f99f6754..ae1edc6ab 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -7529,7 +7529,6 @@ create_i2c(struct ddc_service *ddc_service,
if (!i2c)
return NULL;
i2c->base.owner = THIS_MODULE;
-   i2c->base.class = I2C_CLASS_DDC;
i2c->base.dev.parent = &adev->pdev->dev;
i2c->base.algo = &amdgpu_dm_i2c_algo;
snprintf(i2c->base.name, sizeof(i2c->base.name), "AMDGPU DM i2c hw bus 
%d", link_index);



Re: [PATCH 3/3] drm/amd/display: Support DRM_AMD_DC_FP on RISC-V

2023-11-24 Thread Conor Dooley
On Tue, Nov 21, 2023 at 07:05:15PM -0800, Samuel Holland wrote:
> RISC-V uses kernel_fpu_begin()/kernel_fpu_end() like several other
> architectures. Enabling hardware FP requires overriding the ISA string
> for the relevant compilation units.

Ah yes, bringing the joy of frame-larger-than warnings to RISC-V:
../drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn32/display_mode_vba_32.c:58:13:
 warning: stack frame size (2416) exceeds limit (2048) in 
'DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation'
 [-Wframe-larger-than]

Nathan, have you given up on these being sorted out?

Also, what on earth is that function name, it exceeds 80 characters
before even considering anything else? Actually, I don't think I want
to know.




[PATCH v5 07/20] drivers/gpu/drm: remove I2C_CLASS_DDC support

2023-11-24 Thread Heiner Kallweit
After removal of the legacy EEPROM driver and I2C_CLASS_DDC support in
olpc_dcon there's no i2c client driver left supporting I2C_CLASS_DDC.
Class-based device auto-detection is a legacy mechanism and shouldn't
be used in new code. So we can remove this class completely now.

Preferably this series should be applied via the i2c tree.

Acked-by: Alex Deucher 
Acked-by: Thomas Zimmermann 
Signed-off-by: Heiner Kallweit 

---
 drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c |1 -
 drivers/gpu/drm/radeon/radeon_i2c.c |1 -
 2 files changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
index 82608df43..d79cb13e1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
@@ -175,7 +175,6 @@ struct amdgpu_i2c_chan *amdgpu_i2c_create(struct drm_device 
*dev,
 
i2c->rec = *rec;
i2c->adapter.owner = THIS_MODULE;
-   i2c->adapter.class = I2C_CLASS_DDC;
i2c->adapter.dev.parent = dev->dev;
i2c->dev = dev;
i2c_set_adapdata(&i2c->adapter, i2c);
diff --git a/drivers/gpu/drm/radeon/radeon_i2c.c 
b/drivers/gpu/drm/radeon/radeon_i2c.c
index 314d066e6..3d174390a 100644
--- a/drivers/gpu/drm/radeon/radeon_i2c.c
+++ b/drivers/gpu/drm/radeon/radeon_i2c.c
@@ -918,7 +918,6 @@ struct radeon_i2c_chan *radeon_i2c_create(struct drm_device 
*dev,
 
i2c->rec = *rec;
i2c->adapter.owner = THIS_MODULE;
-   i2c->adapter.class = I2C_CLASS_DDC;
i2c->adapter.dev.parent = dev->dev;
i2c->dev = dev;
i2c_set_adapdata(&i2c->adapter, i2c);



Re: [PATCH 1/3] Revert "drm/prime: Unexport helpers for fd/handle conversion"

2023-11-24 Thread Thomas Zimmermann

Hi

Am 23.11.23 um 20:36 schrieb Felix Kuehling:

[+Alex]

On 2023-11-17 16:44, Felix Kuehling wrote:


This reverts commit 71a7974ac7019afeec105a54447ae1dc7216cbb3.

These helper functions are needed for KFD to export and import DMABufs
the right way without duplicating the tracking of DMABufs associated with
GEM objects while ensuring that move notifier callbacks are working as
intended.

CC: Christian König 
CC: Thomas Zimmermann 
Signed-off-by: Felix Kuehling 


Re: our discussion about v2 of this patch: If this version is 
acceptable, can I get an R-b or A-b?


Acked-by: Thomas Zimmermann 

for patch 1.



I would like to get this patch into drm-next as a prerequisite for 
patches 2 and 3. I cannot submit it to the current amd-staging-drm-next 
because the patch I'm reverting doesn't exist there yet.


Patch 2 and 3 could go into drm-next as well, or go through Alex's 
amd-staging-drm-next branch once patch 1 is in drm-next. Alex, how do 
you prefer to coordinate this?


Regards,
   Felix



---
  drivers/gpu/drm/drm_prime.c | 33 ++---
  include/drm/drm_prime.h |  7 +++
  2 files changed, 25 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 63b709a67471..834a5e28abbe 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -278,7 +278,7 @@ void drm_gem_dmabuf_release(struct dma_buf *dma_buf)
  }
  EXPORT_SYMBOL(drm_gem_dmabuf_release);
-/*
+/**
   * drm_gem_prime_fd_to_handle - PRIME import function for GEM drivers
   * @dev: drm_device to import into
   * @file_priv: drm file-private structure
@@ -292,9 +292,9 @@ EXPORT_SYMBOL(drm_gem_dmabuf_release);
   *
   * Returns 0 on success or a negative error code on failure.
   */
-static int drm_gem_prime_fd_to_handle(struct drm_device *dev,
-  struct drm_file *file_priv, int prime_fd,
-  uint32_t *handle)
+int drm_gem_prime_fd_to_handle(struct drm_device *dev,
+   struct drm_file *file_priv, int prime_fd,
+   uint32_t *handle)
  {
  struct dma_buf *dma_buf;
  struct drm_gem_object *obj;
@@ -360,6 +360,7 @@ static int drm_gem_prime_fd_to_handle(struct 
drm_device *dev,

  dma_buf_put(dma_buf);
  return ret;
  }
+EXPORT_SYMBOL(drm_gem_prime_fd_to_handle);
  int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
   struct drm_file *file_priv)
@@ -408,7 +409,7 @@ static struct dma_buf 
*export_and_register_object(struct drm_device *dev,

  return dmabuf;
  }
-/*
+/**
   * drm_gem_prime_handle_to_fd - PRIME export function for GEM drivers
   * @dev: dev to export the buffer from
   * @file_priv: drm file-private structure
@@ -421,10 +422,10 @@ static struct dma_buf 
*export_and_register_object(struct drm_device *dev,
   * The actual exporting from GEM object to a dma-buf is done through 
the

   * &drm_gem_object_funcs.export callback.
   */
-static int drm_gem_prime_handle_to_fd(struct drm_device *dev,
-  struct drm_file *file_priv, uint32_t handle,
-  uint32_t flags,
-  int *prime_fd)
+int drm_gem_prime_handle_to_fd(struct drm_device *dev,
+   struct drm_file *file_priv, uint32_t handle,
+   uint32_t flags,
+   int *prime_fd)
  {
  struct drm_gem_object *obj;
  int ret = 0;
@@ -506,6 +507,7 @@ static int drm_gem_prime_handle_to_fd(struct 
drm_device *dev,

  return ret;
  }
+EXPORT_SYMBOL(drm_gem_prime_handle_to_fd);
  int drm_prime_handle_to_fd_ioctl(struct drm_device *dev, void *data,
   struct drm_file *file_priv)
@@ -864,9 +866,9 @@ EXPORT_SYMBOL(drm_prime_get_contiguous_size);
   * @obj: GEM object to export
   * @flags: flags like DRM_CLOEXEC and DRM_RDWR
   *
- * This is the implementation of the &drm_gem_object_funcs.export 
functions
- * for GEM drivers using the PRIME helpers. It is used as the default 
for

- * drivers that do not set their own.
+ * This is the implementation of the &drm_gem_object_funcs.export 
functions for GEM drivers

+ * using the PRIME helpers. It is used as the default in
+ * drm_gem_prime_handle_to_fd().
   */
  struct dma_buf *drm_gem_prime_export(struct drm_gem_object *obj,
   int flags)
@@ -962,9 +964,10 @@ EXPORT_SYMBOL(drm_gem_prime_import_dev);
   * @dev: drm_device to import into
   * @dma_buf: dma-buf object to import
   *
- * This is the implementation of the gem_prime_import functions for GEM
- * drivers using the PRIME helpers. It is the default for drivers 
that do

- * not set their own &drm_driver.gem_prime_import.
+ * This is the implementation of the gem_prime_import functions for 
GEM drivers

+ * using the PRIME helpers. Drivers can use this as their
+ * &drm_driver.gem_prime_import implementation. It is used as the 
default

+ * implementation in drm_gem_prime_fd_to_handle().
   *
   * Drivers must arrange to call dr