Re: [Freedreno] [PATCH 1/2] dt-bindings: display/msm: document DPU on SM7150

2023-08-04 Thread Rob Herring


On Thu, 03 Aug 2023 22:47:23 +0300, Danila Tikhonov wrote:
> Document the DPU hardware found on the Qualcomm SM7150 platform.
> 
> Signed-off-by: Danila Tikhonov 
> ---
>  .../bindings/display/msm/qcom,sm7150-dpu.yaml | 116 ++
>  1 file changed, 116 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/display/msm/qcom,sm7150-dpu.yaml
> 

My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
on your patch (DT_CHECKER_FLAGS is new in v5.13):

yamllint warnings/errors:

dtschema/dtc warnings/errors:
Documentation/devicetree/bindings/display/msm/qcom,sm7150-dpu.example.dts:24:18: fatal error: dt-bindings/clock/qcom,sm7150-dispcc.h: No such file or directory
   24 | #include <dt-bindings/clock/qcom,sm7150-dispcc.h>
      |          ^~~~
compilation terminated.
make[2]: *** [scripts/Makefile.lib:419: Documentation/devicetree/bindings/display/msm/qcom,sm7150-dpu.example.dtb] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [/builds/robherring/dt-review-ci/linux/Makefile:1500: dt_binding_check] Error 2
make: *** [Makefile:234: __sub-make] Error 2

doc reference errors (make refcheckdocs):

See 
https://patchwork.ozlabs.org/project/devicetree-bindings/patch/20230803194724.154591-2-dan...@jiaxyga.com

The base for the series is generally the latest rc1. A different dependency
should be noted in *this* patch.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit after running the above command yourself. Note
that DT_SCHEMA_FILES can be set to your schema file to speed up checking
your schema. However, it must be unset to test all examples with your schema.
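
For example, to check only this binding (illustrative invocation, run from
the top of the kernel tree):

  make DT_SCHEMA_FILES=Documentation/devicetree/bindings/display/msm/qcom,sm7150-dpu.yaml dt_binding_check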



[Freedreno] [RFC] PM / QoS: Decouple request alloc from dev_pm_qos_mtx (alternative solution)

2023-08-04 Thread Rob Clark
From: Rob Clark 

Similar to the previous patch, move the allocation out from under
dev_pm_qos_mtx by speculatively doing the allocation, and then handle
any race after acquiring dev_pm_qos_mtx by freeing the redundant
allocation.

Suggested-by: Rafael J. Wysocki 
Signed-off-by: Rob Clark 
---
This is an alternative to 
https://patchwork.freedesktop.org/patch/551417/?series=115028&rev=4

So, this does _slightly_ change the error paths; for example,
dev_pm_qos_update_user_latency_tolerance() will now allocate
dev->power.qos in some error cases.  But this seems harmless?
A slightly more complicated version of this could preserve the
previous error-path behavior, but I figured I'd try the simpler
thing first.
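
Not part of the patch, but for clarity, the pattern being applied can be
sketched generically as follows (hypothetical names, error handling elided):

	/*
	 * Speculatively allocate outside the lock; free the redundant
	 * copy if another thread won the race to install it.
	 */
	struct foo *speculative_install(struct bar *b)
	{
		struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);
		struct foo *ret;

		mutex_lock(&b->lock);
		if (!b->foo && f) {
			b->foo = f;	/* we won the race, ownership transferred */
			f = NULL;
		}
		ret = b->foo;
		mutex_unlock(&b->lock);

		kfree(f);	/* no-op if ownership was transferred */
		return ret;
	}

The key point is that ownership of the speculative allocation is transferred
under the lock, so the unconditional kfree() at the end only ever frees a
redundant copy.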

 drivers/base/power/qos.c | 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index 1b73a704aac1..c7ba85e89c42 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -920,8 +920,12 @@ s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
 int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
 {
struct dev_pm_qos *qos = dev_pm_qos_constraints_allocate();
+   struct dev_pm_qos_request *req = NULL;
int ret = 0;
 
+   if (!dev->power.qos->latency_tolerance_req)
+   req = kzalloc(sizeof(*req), GFP_KERNEL);
+
mutex_lock(&dev_pm_qos_mtx);
 
dev_pm_qos_constraints_set(dev, qos);
@@ -935,8 +939,6 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
goto out;
 
if (!dev->power.qos->latency_tolerance_req) {
-   struct dev_pm_qos_request *req;
-
if (val < 0) {
if (val == PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT)
ret = 0;
@@ -944,17 +946,15 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
ret = -EINVAL;
goto out;
}
-   req = kzalloc(sizeof(*req), GFP_KERNEL);
if (!req) {
ret = -ENOMEM;
goto out;
}
ret = __dev_pm_qos_add_request(dev, req, 
DEV_PM_QOS_LATENCY_TOLERANCE, val);
-   if (ret < 0) {
-   kfree(req);
+   if (ret < 0)
goto out;
-   }
dev->power.qos->latency_tolerance_req = req;
+   req = NULL;
} else {
if (val < 0) {
__dev_pm_qos_drop_user_request(dev, 
DEV_PM_QOS_LATENCY_TOLERANCE);
@@ -966,6 +966,7 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
 
  out:
mutex_unlock(&dev_pm_qos_mtx);
+   kfree(req);
return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_qos_update_user_latency_tolerance);
-- 
2.41.0



Re: [Freedreno] [PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-04 Thread Rafael J. Wysocki
On Fri, Aug 4, 2023 at 10:38 PM Rob Clark  wrote:
>
> On Fri, Aug 4, 2023 at 12:11 PM Rafael J. Wysocki  wrote:
> >
> > On Fri, Aug 4, 2023 at 8:38 PM Rob Clark  wrote:
> > >
> > > On Fri, Aug 4, 2023 at 10:07 AM Rafael J. Wysocki  
> > > wrote:
> > > >
> > > > On Fri, Aug 4, 2023 at 12:02 AM Rob Clark  wrote:
> > > > >
> > > > > From: Rob Clark 
> > > > >
> > > > > In the process of adding lockdep annotation for drm GPU scheduler's
> > > > > job_run() to detect potential deadlock against shrinker/reclaim, I hit
> > > > > this lockdep splat:
> > > > >
> > > > >==
> > > > >WARNING: possible circular locking dependency detected
> > > > >6.2.0-rc8-debug+ #558 Tainted: GW
> > > > >--
> > > > >ring0/125 is trying to acquire lock:
> > > > >ffd6d6ce0f28 (dev_pm_qos_mtx){+.+.}-{3:3}, at: 
> > > > > dev_pm_qos_update_request+0x38/0x68
> > > > >
> > > > >but task is already holding lock:
> > > > >ff8087239208 (&gpu->active_lock){+.+.}-{3:3}, at: 
> > > > > msm_gpu_submit+0xec/0x178
> > > > >
> > > > >which lock already depends on the new lock.
> > > > >
> > > > >the existing dependency chain (in reverse order) is:
> > > > >
> > > > >-> #4 (&gpu->active_lock){+.+.}-{3:3}:
> > > > >   __mutex_lock+0xcc/0x3c8
> > > > >   mutex_lock_nested+0x30/0x44
> > > > >   msm_gpu_submit+0xec/0x178
> > > > >   msm_job_run+0x78/0x150
> > > > >   drm_sched_main+0x290/0x370
> > > > >   kthread+0xf0/0x100
> > > > >   ret_from_fork+0x10/0x20
> > > > >
> > > > >-> #3 (dma_fence_map){}-{0:0}:
> > > > >   __dma_fence_might_wait+0x74/0xc0
> > > > >   dma_resv_lockdep+0x1f4/0x2f4
> > > > >   do_one_initcall+0x104/0x2bc
> > > > >   kernel_init_freeable+0x344/0x34c
> > > > >   kernel_init+0x30/0x134
> > > > >   ret_from_fork+0x10/0x20
> > > > >
> > > > >-> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
> > > > >   fs_reclaim_acquire+0x80/0xa8
> > > > >   slab_pre_alloc_hook.constprop.0+0x40/0x25c
> > > > >   __kmem_cache_alloc_node+0x60/0x1cc
> > > > >   __kmalloc+0xd8/0x100
> > > > >   topology_parse_cpu_capacity+0x8c/0x178
> > > > >   get_cpu_for_node+0x88/0xc4
> > > > >   parse_cluster+0x1b0/0x28c
> > > > >   parse_cluster+0x8c/0x28c
> > > > >   init_cpu_topology+0x168/0x188
> > > > >   smp_prepare_cpus+0x24/0xf8
> > > > >   kernel_init_freeable+0x18c/0x34c
> > > > >   kernel_init+0x30/0x134
> > > > >   ret_from_fork+0x10/0x20
> > > > >
> > > > >-> #1 (fs_reclaim){+.+.}-{0:0}:
> > > > >   __fs_reclaim_acquire+0x3c/0x48
> > > > >   fs_reclaim_acquire+0x54/0xa8
> > > > >   slab_pre_alloc_hook.constprop.0+0x40/0x25c
> > > > >   __kmem_cache_alloc_node+0x60/0x1cc
> > > > >   kmalloc_trace+0x50/0xa8
> > > > >   dev_pm_qos_constraints_allocate+0x38/0x100
> > > > >   __dev_pm_qos_add_request+0xb0/0x1e8
> > > > >   dev_pm_qos_add_request+0x58/0x80
> > > > >   dev_pm_qos_expose_latency_limit+0x60/0x13c
> > > > >   register_cpu+0x12c/0x130
> > > > >   topology_init+0xac/0xbc
> > > > >   do_one_initcall+0x104/0x2bc
> > > > >   kernel_init_freeable+0x344/0x34c
> > > > >   kernel_init+0x30/0x134
> > > > >   ret_from_fork+0x10/0x20
> > > > >
> > > > >-> #0 (dev_pm_qos_mtx){+.+.}-{3:3}:
> > > > >   __lock_acquire+0xe00/0x1060
> > > > >   lock_acquire+0x1e0/0x2f8
> > > > >   __mutex_lock+0xcc/0x3c8
> > > > >   mutex_lock_nested+0x30/0x44
> > > > >   dev_pm_qos_update_request+0x38/0x68
> > > > >   msm_devfreq_boost+0x40/0x70
> > > > >   msm_devfreq_active+0xc0/0xf0
> > > > >   msm_gpu_submit+0x10c/0x178
> > > > >   msm_job_run+0x78/0x150
> > > > >   drm_sched_main+0x290/0x370
> > > > >   kthread+0xf0/0x100
> > > > >   ret_from_fork+0x10/0x20
> > > > >
> > > > >other info that might help us debug this:
> > > > >
> > > > >Chain exists of:
> > > > >  dev_pm_qos_mtx --> dma_fence_map --> &gpu->active_lock
> > > > >
> > > > > Possible unsafe locking scenario:
> > > > >
> > > > >   CPU0CPU1
> > > > >   
> > > > >  lock(&gpu->active_lock);
> > > > >   lock(dma_fence_map);
> > > > >   lock(&gpu->active_lock);
> > > > >  lock(dev_pm_qos_mtx);
> > > > >
> > > > > *** DEADLOCK ***
> > > > >
> > > > >3 locks held by ring0/123:
> > > > > #0: ff8087251170 (&gpu->lock){+.+.}-{3:3}, at: 
> > > > > msm_job_run+0x64/0x150
> > > > > #1: ffd00b0e57e8 (dma_fence_map){}-{0:0}, at: 
> > > > > msm_job

Re: [Freedreno] [PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-04 Thread Rob Clark
On Fri, Aug 4, 2023 at 12:11 PM Rafael J. Wysocki  wrote:
>
> On Fri, Aug 4, 2023 at 8:38 PM Rob Clark  wrote:
> >
> > On Fri, Aug 4, 2023 at 10:07 AM Rafael J. Wysocki  wrote:
> > >
> > > On Fri, Aug 4, 2023 at 12:02 AM Rob Clark  wrote:
> > > >
> > > > From: Rob Clark 
> > > >
> > > > In the process of adding lockdep annotation for drm GPU scheduler's
> > > > job_run() to detect potential deadlock against shrinker/reclaim, I hit
> > > > this lockdep splat:
> > > >
> > > >==
> > > >WARNING: possible circular locking dependency detected
> > > >6.2.0-rc8-debug+ #558 Tainted: GW
> > > >--
> > > >ring0/125 is trying to acquire lock:
> > > >ffd6d6ce0f28 (dev_pm_qos_mtx){+.+.}-{3:3}, at: 
> > > > dev_pm_qos_update_request+0x38/0x68
> > > >
> > > >but task is already holding lock:
> > > >ff8087239208 (&gpu->active_lock){+.+.}-{3:3}, at: 
> > > > msm_gpu_submit+0xec/0x178
> > > >
> > > >which lock already depends on the new lock.
> > > >
> > > >the existing dependency chain (in reverse order) is:
> > > >
> > > >-> #4 (&gpu->active_lock){+.+.}-{3:3}:
> > > >   __mutex_lock+0xcc/0x3c8
> > > >   mutex_lock_nested+0x30/0x44
> > > >   msm_gpu_submit+0xec/0x178
> > > >   msm_job_run+0x78/0x150
> > > >   drm_sched_main+0x290/0x370
> > > >   kthread+0xf0/0x100
> > > >   ret_from_fork+0x10/0x20
> > > >
> > > >-> #3 (dma_fence_map){}-{0:0}:
> > > >   __dma_fence_might_wait+0x74/0xc0
> > > >   dma_resv_lockdep+0x1f4/0x2f4
> > > >   do_one_initcall+0x104/0x2bc
> > > >   kernel_init_freeable+0x344/0x34c
> > > >   kernel_init+0x30/0x134
> > > >   ret_from_fork+0x10/0x20
> > > >
> > > >-> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
> > > >   fs_reclaim_acquire+0x80/0xa8
> > > >   slab_pre_alloc_hook.constprop.0+0x40/0x25c
> > > >   __kmem_cache_alloc_node+0x60/0x1cc
> > > >   __kmalloc+0xd8/0x100
> > > >   topology_parse_cpu_capacity+0x8c/0x178
> > > >   get_cpu_for_node+0x88/0xc4
> > > >   parse_cluster+0x1b0/0x28c
> > > >   parse_cluster+0x8c/0x28c
> > > >   init_cpu_topology+0x168/0x188
> > > >   smp_prepare_cpus+0x24/0xf8
> > > >   kernel_init_freeable+0x18c/0x34c
> > > >   kernel_init+0x30/0x134
> > > >   ret_from_fork+0x10/0x20
> > > >
> > > >-> #1 (fs_reclaim){+.+.}-{0:0}:
> > > >   __fs_reclaim_acquire+0x3c/0x48
> > > >   fs_reclaim_acquire+0x54/0xa8
> > > >   slab_pre_alloc_hook.constprop.0+0x40/0x25c
> > > >   __kmem_cache_alloc_node+0x60/0x1cc
> > > >   kmalloc_trace+0x50/0xa8
> > > >   dev_pm_qos_constraints_allocate+0x38/0x100
> > > >   __dev_pm_qos_add_request+0xb0/0x1e8
> > > >   dev_pm_qos_add_request+0x58/0x80
> > > >   dev_pm_qos_expose_latency_limit+0x60/0x13c
> > > >   register_cpu+0x12c/0x130
> > > >   topology_init+0xac/0xbc
> > > >   do_one_initcall+0x104/0x2bc
> > > >   kernel_init_freeable+0x344/0x34c
> > > >   kernel_init+0x30/0x134
> > > >   ret_from_fork+0x10/0x20
> > > >
> > > >-> #0 (dev_pm_qos_mtx){+.+.}-{3:3}:
> > > >   __lock_acquire+0xe00/0x1060
> > > >   lock_acquire+0x1e0/0x2f8
> > > >   __mutex_lock+0xcc/0x3c8
> > > >   mutex_lock_nested+0x30/0x44
> > > >   dev_pm_qos_update_request+0x38/0x68
> > > >   msm_devfreq_boost+0x40/0x70
> > > >   msm_devfreq_active+0xc0/0xf0
> > > >   msm_gpu_submit+0x10c/0x178
> > > >   msm_job_run+0x78/0x150
> > > >   drm_sched_main+0x290/0x370
> > > >   kthread+0xf0/0x100
> > > >   ret_from_fork+0x10/0x20
> > > >
> > > >other info that might help us debug this:
> > > >
> > > >Chain exists of:
> > > >  dev_pm_qos_mtx --> dma_fence_map --> &gpu->active_lock
> > > >
> > > > Possible unsafe locking scenario:
> > > >
> > > >   CPU0CPU1
> > > >   
> > > >  lock(&gpu->active_lock);
> > > >   lock(dma_fence_map);
> > > >   lock(&gpu->active_lock);
> > > >  lock(dev_pm_qos_mtx);
> > > >
> > > > *** DEADLOCK ***
> > > >
> > > >3 locks held by ring0/123:
> > > > #0: ff8087251170 (&gpu->lock){+.+.}-{3:3}, at: 
> > > > msm_job_run+0x64/0x150
> > > > #1: ffd00b0e57e8 (dma_fence_map){}-{0:0}, at: 
> > > > msm_job_run+0x68/0x150
> > > > #2: ff8087251208 (&gpu->active_lock){+.+.}-{3:3}, at: 
> > > > msm_gpu_submit+0xec/0x178
> > > >
> > > >stack backtrace:
> > > >CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #559
> > > >Hardware name: Google Lazor (rev1 - 2) with LT

Re: [Freedreno] [PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-04 Thread Rafael J. Wysocki
On Fri, Aug 4, 2023 at 8:38 PM Rob Clark  wrote:
>
> On Fri, Aug 4, 2023 at 10:07 AM Rafael J. Wysocki  wrote:
> >
> > On Fri, Aug 4, 2023 at 12:02 AM Rob Clark  wrote:
> > >
> > > From: Rob Clark 
> > >
> > > In the process of adding lockdep annotation for drm GPU scheduler's
> > > job_run() to detect potential deadlock against shrinker/reclaim, I hit
> > > this lockdep splat:
> > >
> > >==
> > >WARNING: possible circular locking dependency detected
> > >6.2.0-rc8-debug+ #558 Tainted: GW
> > >--
> > >ring0/125 is trying to acquire lock:
> > >ffd6d6ce0f28 (dev_pm_qos_mtx){+.+.}-{3:3}, at: 
> > > dev_pm_qos_update_request+0x38/0x68
> > >
> > >but task is already holding lock:
> > >ff8087239208 (&gpu->active_lock){+.+.}-{3:3}, at: 
> > > msm_gpu_submit+0xec/0x178
> > >
> > >which lock already depends on the new lock.
> > >
> > >the existing dependency chain (in reverse order) is:
> > >
> > >-> #4 (&gpu->active_lock){+.+.}-{3:3}:
> > >   __mutex_lock+0xcc/0x3c8
> > >   mutex_lock_nested+0x30/0x44
> > >   msm_gpu_submit+0xec/0x178
> > >   msm_job_run+0x78/0x150
> > >   drm_sched_main+0x290/0x370
> > >   kthread+0xf0/0x100
> > >   ret_from_fork+0x10/0x20
> > >
> > >-> #3 (dma_fence_map){}-{0:0}:
> > >   __dma_fence_might_wait+0x74/0xc0
> > >   dma_resv_lockdep+0x1f4/0x2f4
> > >   do_one_initcall+0x104/0x2bc
> > >   kernel_init_freeable+0x344/0x34c
> > >   kernel_init+0x30/0x134
> > >   ret_from_fork+0x10/0x20
> > >
> > >-> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
> > >   fs_reclaim_acquire+0x80/0xa8
> > >   slab_pre_alloc_hook.constprop.0+0x40/0x25c
> > >   __kmem_cache_alloc_node+0x60/0x1cc
> > >   __kmalloc+0xd8/0x100
> > >   topology_parse_cpu_capacity+0x8c/0x178
> > >   get_cpu_for_node+0x88/0xc4
> > >   parse_cluster+0x1b0/0x28c
> > >   parse_cluster+0x8c/0x28c
> > >   init_cpu_topology+0x168/0x188
> > >   smp_prepare_cpus+0x24/0xf8
> > >   kernel_init_freeable+0x18c/0x34c
> > >   kernel_init+0x30/0x134
> > >   ret_from_fork+0x10/0x20
> > >
> > >-> #1 (fs_reclaim){+.+.}-{0:0}:
> > >   __fs_reclaim_acquire+0x3c/0x48
> > >   fs_reclaim_acquire+0x54/0xa8
> > >   slab_pre_alloc_hook.constprop.0+0x40/0x25c
> > >   __kmem_cache_alloc_node+0x60/0x1cc
> > >   kmalloc_trace+0x50/0xa8
> > >   dev_pm_qos_constraints_allocate+0x38/0x100
> > >   __dev_pm_qos_add_request+0xb0/0x1e8
> > >   dev_pm_qos_add_request+0x58/0x80
> > >   dev_pm_qos_expose_latency_limit+0x60/0x13c
> > >   register_cpu+0x12c/0x130
> > >   topology_init+0xac/0xbc
> > >   do_one_initcall+0x104/0x2bc
> > >   kernel_init_freeable+0x344/0x34c
> > >   kernel_init+0x30/0x134
> > >   ret_from_fork+0x10/0x20
> > >
> > >-> #0 (dev_pm_qos_mtx){+.+.}-{3:3}:
> > >   __lock_acquire+0xe00/0x1060
> > >   lock_acquire+0x1e0/0x2f8
> > >   __mutex_lock+0xcc/0x3c8
> > >   mutex_lock_nested+0x30/0x44
> > >   dev_pm_qos_update_request+0x38/0x68
> > >   msm_devfreq_boost+0x40/0x70
> > >   msm_devfreq_active+0xc0/0xf0
> > >   msm_gpu_submit+0x10c/0x178
> > >   msm_job_run+0x78/0x150
> > >   drm_sched_main+0x290/0x370
> > >   kthread+0xf0/0x100
> > >   ret_from_fork+0x10/0x20
> > >
> > >other info that might help us debug this:
> > >
> > >Chain exists of:
> > >  dev_pm_qos_mtx --> dma_fence_map --> &gpu->active_lock
> > >
> > > Possible unsafe locking scenario:
> > >
> > >   CPU0CPU1
> > >   
> > >  lock(&gpu->active_lock);
> > >   lock(dma_fence_map);
> > >   lock(&gpu->active_lock);
> > >  lock(dev_pm_qos_mtx);
> > >
> > > *** DEADLOCK ***
> > >
> > >3 locks held by ring0/123:
> > > #0: ff8087251170 (&gpu->lock){+.+.}-{3:3}, at: 
> > > msm_job_run+0x64/0x150
> > > #1: ffd00b0e57e8 (dma_fence_map){}-{0:0}, at: 
> > > msm_job_run+0x68/0x150
> > > #2: ff8087251208 (&gpu->active_lock){+.+.}-{3:3}, at: 
> > > msm_gpu_submit+0xec/0x178
> > >
> > >stack backtrace:
> > >CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #559
> > >Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
> > >Call trace:
> > > dump_backtrace.part.0+0xb4/0xf8
> > > show_stack+0x20/0x38
> > > dump_stack_lvl+0x9c/0xd0
> > > dump_stack+0x18/0x34
> > > print_circular_bug+0x1b4/0x1f0
> > > check_noncircular+0x78/0xac
> > > __lock_acquire+0xe00/0x1060
> > > lo

Re: [Freedreno] [PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-04 Thread Rob Clark
On Fri, Aug 4, 2023 at 10:07 AM Rafael J. Wysocki  wrote:
>
> On Fri, Aug 4, 2023 at 12:02 AM Rob Clark  wrote:
> >
> > From: Rob Clark 
> >
> > In the process of adding lockdep annotation for drm GPU scheduler's
> > job_run() to detect potential deadlock against shrinker/reclaim, I hit
> > this lockdep splat:
> >
> >==
> >WARNING: possible circular locking dependency detected
> >6.2.0-rc8-debug+ #558 Tainted: GW
> >--
> >ring0/125 is trying to acquire lock:
> >ffd6d6ce0f28 (dev_pm_qos_mtx){+.+.}-{3:3}, at: 
> > dev_pm_qos_update_request+0x38/0x68
> >
> >but task is already holding lock:
> >ff8087239208 (&gpu->active_lock){+.+.}-{3:3}, at: 
> > msm_gpu_submit+0xec/0x178
> >
> >which lock already depends on the new lock.
> >
> >the existing dependency chain (in reverse order) is:
> >
> >-> #4 (&gpu->active_lock){+.+.}-{3:3}:
> >   __mutex_lock+0xcc/0x3c8
> >   mutex_lock_nested+0x30/0x44
> >   msm_gpu_submit+0xec/0x178
> >   msm_job_run+0x78/0x150
> >   drm_sched_main+0x290/0x370
> >   kthread+0xf0/0x100
> >   ret_from_fork+0x10/0x20
> >
> >-> #3 (dma_fence_map){}-{0:0}:
> >   __dma_fence_might_wait+0x74/0xc0
> >   dma_resv_lockdep+0x1f4/0x2f4
> >   do_one_initcall+0x104/0x2bc
> >   kernel_init_freeable+0x344/0x34c
> >   kernel_init+0x30/0x134
> >   ret_from_fork+0x10/0x20
> >
> >-> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
> >   fs_reclaim_acquire+0x80/0xa8
> >   slab_pre_alloc_hook.constprop.0+0x40/0x25c
> >   __kmem_cache_alloc_node+0x60/0x1cc
> >   __kmalloc+0xd8/0x100
> >   topology_parse_cpu_capacity+0x8c/0x178
> >   get_cpu_for_node+0x88/0xc4
> >   parse_cluster+0x1b0/0x28c
> >   parse_cluster+0x8c/0x28c
> >   init_cpu_topology+0x168/0x188
> >   smp_prepare_cpus+0x24/0xf8
> >   kernel_init_freeable+0x18c/0x34c
> >   kernel_init+0x30/0x134
> >   ret_from_fork+0x10/0x20
> >
> >-> #1 (fs_reclaim){+.+.}-{0:0}:
> >   __fs_reclaim_acquire+0x3c/0x48
> >   fs_reclaim_acquire+0x54/0xa8
> >   slab_pre_alloc_hook.constprop.0+0x40/0x25c
> >   __kmem_cache_alloc_node+0x60/0x1cc
> >   kmalloc_trace+0x50/0xa8
> >   dev_pm_qos_constraints_allocate+0x38/0x100
> >   __dev_pm_qos_add_request+0xb0/0x1e8
> >   dev_pm_qos_add_request+0x58/0x80
> >   dev_pm_qos_expose_latency_limit+0x60/0x13c
> >   register_cpu+0x12c/0x130
> >   topology_init+0xac/0xbc
> >   do_one_initcall+0x104/0x2bc
> >   kernel_init_freeable+0x344/0x34c
> >   kernel_init+0x30/0x134
> >   ret_from_fork+0x10/0x20
> >
> >-> #0 (dev_pm_qos_mtx){+.+.}-{3:3}:
> >   __lock_acquire+0xe00/0x1060
> >   lock_acquire+0x1e0/0x2f8
> >   __mutex_lock+0xcc/0x3c8
> >   mutex_lock_nested+0x30/0x44
> >   dev_pm_qos_update_request+0x38/0x68
> >   msm_devfreq_boost+0x40/0x70
> >   msm_devfreq_active+0xc0/0xf0
> >   msm_gpu_submit+0x10c/0x178
> >   msm_job_run+0x78/0x150
> >   drm_sched_main+0x290/0x370
> >   kthread+0xf0/0x100
> >   ret_from_fork+0x10/0x20
> >
> >other info that might help us debug this:
> >
> >Chain exists of:
> >  dev_pm_qos_mtx --> dma_fence_map --> &gpu->active_lock
> >
> > Possible unsafe locking scenario:
> >
> >   CPU0CPU1
> >   
> >  lock(&gpu->active_lock);
> >   lock(dma_fence_map);
> >   lock(&gpu->active_lock);
> >  lock(dev_pm_qos_mtx);
> >
> > *** DEADLOCK ***
> >
> >3 locks held by ring0/123:
> > #0: ff8087251170 (&gpu->lock){+.+.}-{3:3}, at: 
> > msm_job_run+0x64/0x150
> > #1: ffd00b0e57e8 (dma_fence_map){}-{0:0}, at: 
> > msm_job_run+0x68/0x150
> > #2: ff8087251208 (&gpu->active_lock){+.+.}-{3:3}, at: 
> > msm_gpu_submit+0xec/0x178
> >
> >stack backtrace:
> >CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #559
> >Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
> >Call trace:
> > dump_backtrace.part.0+0xb4/0xf8
> > show_stack+0x20/0x38
> > dump_stack_lvl+0x9c/0xd0
> > dump_stack+0x18/0x34
> > print_circular_bug+0x1b4/0x1f0
> > check_noncircular+0x78/0xac
> > __lock_acquire+0xe00/0x1060
> > lock_acquire+0x1e0/0x2f8
> > __mutex_lock+0xcc/0x3c8
> > mutex_lock_nested+0x30/0x44
> > dev_pm_qos_update_request+0x38/0x68
> > msm_devfreq_boost+0x40/0x70
> > msm_devfreq_active+0xc0/0xf0
> > msm_gpu_submit+0x10c/0x178
> > msm_job_run+0x78/0x150
> > drm_sched_main+0x29

Re: [Freedreno] [PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-04 Thread Rafael J. Wysocki
On Fri, Aug 4, 2023 at 12:02 AM Rob Clark  wrote:
>
> From: Rob Clark 
>
> In the process of adding lockdep annotation for drm GPU scheduler's
> job_run() to detect potential deadlock against shrinker/reclaim, I hit
> this lockdep splat:
>
>==
>WARNING: possible circular locking dependency detected
>6.2.0-rc8-debug+ #558 Tainted: GW
>--
>ring0/125 is trying to acquire lock:
>ffd6d6ce0f28 (dev_pm_qos_mtx){+.+.}-{3:3}, at: 
> dev_pm_qos_update_request+0x38/0x68
>
>but task is already holding lock:
>ff8087239208 (&gpu->active_lock){+.+.}-{3:3}, at: 
> msm_gpu_submit+0xec/0x178
>
>which lock already depends on the new lock.
>
>the existing dependency chain (in reverse order) is:
>
>-> #4 (&gpu->active_lock){+.+.}-{3:3}:
>   __mutex_lock+0xcc/0x3c8
>   mutex_lock_nested+0x30/0x44
>   msm_gpu_submit+0xec/0x178
>   msm_job_run+0x78/0x150
>   drm_sched_main+0x290/0x370
>   kthread+0xf0/0x100
>   ret_from_fork+0x10/0x20
>
>-> #3 (dma_fence_map){}-{0:0}:
>   __dma_fence_might_wait+0x74/0xc0
>   dma_resv_lockdep+0x1f4/0x2f4
>   do_one_initcall+0x104/0x2bc
>   kernel_init_freeable+0x344/0x34c
>   kernel_init+0x30/0x134
>   ret_from_fork+0x10/0x20
>
>-> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
>   fs_reclaim_acquire+0x80/0xa8
>   slab_pre_alloc_hook.constprop.0+0x40/0x25c
>   __kmem_cache_alloc_node+0x60/0x1cc
>   __kmalloc+0xd8/0x100
>   topology_parse_cpu_capacity+0x8c/0x178
>   get_cpu_for_node+0x88/0xc4
>   parse_cluster+0x1b0/0x28c
>   parse_cluster+0x8c/0x28c
>   init_cpu_topology+0x168/0x188
>   smp_prepare_cpus+0x24/0xf8
>   kernel_init_freeable+0x18c/0x34c
>   kernel_init+0x30/0x134
>   ret_from_fork+0x10/0x20
>
>-> #1 (fs_reclaim){+.+.}-{0:0}:
>   __fs_reclaim_acquire+0x3c/0x48
>   fs_reclaim_acquire+0x54/0xa8
>   slab_pre_alloc_hook.constprop.0+0x40/0x25c
>   __kmem_cache_alloc_node+0x60/0x1cc
>   kmalloc_trace+0x50/0xa8
>   dev_pm_qos_constraints_allocate+0x38/0x100
>   __dev_pm_qos_add_request+0xb0/0x1e8
>   dev_pm_qos_add_request+0x58/0x80
>   dev_pm_qos_expose_latency_limit+0x60/0x13c
>   register_cpu+0x12c/0x130
>   topology_init+0xac/0xbc
>   do_one_initcall+0x104/0x2bc
>   kernel_init_freeable+0x344/0x34c
>   kernel_init+0x30/0x134
>   ret_from_fork+0x10/0x20
>
>-> #0 (dev_pm_qos_mtx){+.+.}-{3:3}:
>   __lock_acquire+0xe00/0x1060
>   lock_acquire+0x1e0/0x2f8
>   __mutex_lock+0xcc/0x3c8
>   mutex_lock_nested+0x30/0x44
>   dev_pm_qos_update_request+0x38/0x68
>   msm_devfreq_boost+0x40/0x70
>   msm_devfreq_active+0xc0/0xf0
>   msm_gpu_submit+0x10c/0x178
>   msm_job_run+0x78/0x150
>   drm_sched_main+0x290/0x370
>   kthread+0xf0/0x100
>   ret_from_fork+0x10/0x20
>
>other info that might help us debug this:
>
>Chain exists of:
>  dev_pm_qos_mtx --> dma_fence_map --> &gpu->active_lock
>
> Possible unsafe locking scenario:
>
>   CPU0CPU1
>   
>  lock(&gpu->active_lock);
>   lock(dma_fence_map);
>   lock(&gpu->active_lock);
>  lock(dev_pm_qos_mtx);
>
> *** DEADLOCK ***
>
>3 locks held by ring0/123:
> #0: ff8087251170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150
> #1: ffd00b0e57e8 (dma_fence_map){}-{0:0}, at: 
> msm_job_run+0x68/0x150
> #2: ff8087251208 (&gpu->active_lock){+.+.}-{3:3}, at: 
> msm_gpu_submit+0xec/0x178
>
>stack backtrace:
>CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #559
>Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
>Call trace:
> dump_backtrace.part.0+0xb4/0xf8
> show_stack+0x20/0x38
> dump_stack_lvl+0x9c/0xd0
> dump_stack+0x18/0x34
> print_circular_bug+0x1b4/0x1f0
> check_noncircular+0x78/0xac
> __lock_acquire+0xe00/0x1060
> lock_acquire+0x1e0/0x2f8
> __mutex_lock+0xcc/0x3c8
> mutex_lock_nested+0x30/0x44
> dev_pm_qos_update_request+0x38/0x68
> msm_devfreq_boost+0x40/0x70
> msm_devfreq_active+0xc0/0xf0
> msm_gpu_submit+0x10c/0x178
> msm_job_run+0x78/0x150
> drm_sched_main+0x290/0x370
> kthread+0xf0/0x100
> ret_from_fork+0x10/0x20
>
> The issue is that dev_pm_qos_mtx is held in the runpm suspend/resume (or
> freq change) path, but it is also held across allocations that could
> recurse into shrinker.
>
> Solve this by changing dev_pm_qos_constraints_allocate() into a function
> that can b
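
The failure mode can be reduced to a small generic pattern (illustrative C
with hypothetical names, not the actual kernel code): any lock held around a
GFP_KERNEL allocation becomes ordered before fs_reclaim, so the same lock
must never also be taken from the reclaim (shrinker) path:

	#include <linux/mutex.h>
	#include <linux/slab.h>
	#include <linux/shrinker.h>

	static DEFINE_MUTEX(lock_a);

	/* Records the dependency lock_a --> fs_reclaim. */
	static void alloc_under_lock(void)
	{
		mutex_lock(&lock_a);
		kfree(kmalloc(64, GFP_KERNEL));	/* may enter direct reclaim */
		mutex_unlock(&lock_a);
	}

	/* A shrinker runs inside reclaim: fs_reclaim --> lock_a. */
	static unsigned long scan_objects(struct shrinker *s,
					  struct shrink_control *sc)
	{
		mutex_lock(&lock_a);	/* inverted ordering, lockdep splats */
		mutex_unlock(&lock_a);
		return SHRINK_STOP;
	}

Moving the allocation outside the lock breaks the lock_a --> fs_reclaim
edge, which is exactly what this series does for dev_pm_qos_mtx.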

Re: [Freedreno] [PATCH] drm/msm/a6xx: Fix GMU lockdep splat

2023-08-04 Thread Doug Anderson
Hi,

On Thu, Aug 3, 2023 at 10:34 AM Rob Clark  wrote:
>
> From: Rob Clark 
>
> For normal GPU devfreq, we need to acquire the GMU lock while already
> holding devfreq locks.  But in the teardown path, we were calling
> dev_pm_domain_detach() while already holding the GMU lock, resulting in
> this lockdep splat:
>
>==
>WARNING: possible circular locking dependency detected
>6.4.3-debug+ #3 Not tainted
>--
>ring0/391 is trying to acquire lock:
>ff80a025c078 (&devfreq->lock){+.+.}-{3:3}, at: 
> qos_notifier_call+0x30/0x74
>
>but task is already holding lock:
>ff809b8c1ce8 (&(c->notifiers)->rwsem){}-{3:3}, at: 
> blocking_notifier_call_chain+0x34/0x78
>
>which lock already depends on the new lock.
>
>the existing dependency chain (in reverse order) is:
>
>-> #4 (&(c->notifiers)->rwsem){}-{3:3}:
>   down_write+0x58/0x74
>   __blocking_notifier_chain_register+0x64/0x84
>   blocking_notifier_chain_register+0x1c/0x28
>   freq_qos_add_notifier+0x5c/0x7c
>   dev_pm_qos_add_notifier+0xd4/0xf0
>   devfreq_add_device+0x42c/0x560
>   devm_devfreq_add_device+0x6c/0xb8
>   msm_devfreq_init+0xa8/0x16c [msm]
>   msm_gpu_init+0x368/0x54c [msm]
>   adreno_gpu_init+0x248/0x2b0 [msm]
>   a6xx_gpu_init+0x2d0/0x384 [msm]
>   adreno_bind+0x264/0x2bc [msm]
>   component_bind_all+0x124/0x1f4
>   msm_drm_bind+0x2d0/0x5f4 [msm]
>   try_to_bring_up_aggregate_device+0x88/0x1a4
>   __component_add+0xd4/0x128
>   component_add+0x1c/0x28
>   dp_display_probe+0x37c/0x3c0 [msm]
>   platform_probe+0x70/0xc0
>   really_probe+0x148/0x280
>   __driver_probe_device+0xfc/0x114
>   driver_probe_device+0x44/0x100
>   __device_attach_driver+0x64/0xdc
>   bus_for_each_drv+0xb0/0xd8
>   __device_attach+0xe4/0x140
>   device_initial_probe+0x1c/0x28
>   bus_probe_device+0x44/0xb0
>   deferred_probe_work_func+0xb0/0xc8
>   process_one_work+0x288/0x3d8
>   worker_thread+0x1f0/0x260
>   kthread+0xf0/0x100
>   ret_from_fork+0x10/0x20
>
>-> #3 (dev_pm_qos_mtx){+.+.}-{3:3}:
>   __mutex_lock+0xc8/0x388
>   mutex_lock_nested+0x2c/0x38
>   dev_pm_qos_remove_notifier+0x3c/0xc8
>   genpd_remove_device+0x40/0x11c
>   genpd_dev_pm_detach+0x88/0x130
>   dev_pm_domain_detach+0x2c/0x3c
>   a6xx_gmu_remove+0x44/0xdc [msm]
>   a6xx_destroy+0x7c/0xa4 [msm]
>   adreno_unbind+0x50/0x64 [msm]
>   component_unbind+0x44/0x64
>   component_unbind_all+0xb4/0xbc
>   msm_drm_uninit.isra.0+0x124/0x17c [msm]
>   msm_drm_bind+0x340/0x5f4 [msm]
>   try_to_bring_up_aggregate_device+0x88/0x1a4
>   __component_add+0xd4/0x128
>   component_add+0x1c/0x28
>   dp_display_probe+0x37c/0x3c0 [msm]
>   platform_probe+0x70/0xc0
>   really_probe+0x148/0x280
>   __driver_probe_device+0xfc/0x114
>   driver_probe_device+0x44/0x100
>   __device_attach_driver+0x64/0xdc
>   bus_for_each_drv+0xb0/0xd8
>   __device_attach+0xe4/0x140
>   device_initial_probe+0x1c/0x28
>   bus_probe_device+0x44/0xb0
>   deferred_probe_work_func+0xb0/0xc8
>   process_one_work+0x288/0x3d8
>   worker_thread+0x1f0/0x260
>   kthread+0xf0/0x100
>   ret_from_fork+0x10/0x20
>
>-> #2 (&a6xx_gpu->gmu.lock){+.+.}-{3:3}:
>   __mutex_lock+0xc8/0x388
>   mutex_lock_nested+0x2c/0x38
>   a6xx_gpu_set_freq+0x38/0x64 [msm]
>   msm_devfreq_target+0x170/0x18c [msm]
>   devfreq_set_target+0x90/0x1e4
>   devfreq_update_target+0xb4/0xf0
>   update_devfreq+0x1c/0x28
>   devfreq_monitor+0x3c/0x10c
>   process_one_work+0x288/0x3d8
>   worker_thread+0x1f0/0x260
>   kthread+0xf0/0x100
>   ret_from_fork+0x10/0x20
>
>-> #1 (&df->lock){+.+.}-{3:3}:
>   __mutex_lock+0xc8/0x388
>   mutex_lock_nested+0x2c/0x38
>   msm_devfreq_get_dev_status+0x4c/0x104 [msm]
>   devfreq_simple_ondemand_func+0x5c/0x128
>   devfreq_update_target+0x68/0xf0
>   update_devfreq+0x1c/0x28
>   devfreq_monitor+0x3c/0x10c
>   process_one_work+0x288/0x3d8
>   worker_thread+0x1f0/0x260
>   kthread+0xf0/0x100
>   ret_from_fork+0x10/0x20
>
>-> #0 (&devfreq->lock){+.+.}-{3:3}:
>   __lock_acquire+0xdf8/0x109c
>   lock_acquire+0x234/0x284
>   __mutex_lock+0xc8/0x388
>   mutex_lock_nested+0x2c/0x38
>   qos_notifier_call+0x30/0x74
>   qos_min_notifier_call+0x1c/0x28
>   notifier_call_cha

Re: [Freedreno] [PATCH v5 0/8] drm/msm/dpu: change interrupts code to make 0 be the no IRQ

2023-08-04 Thread Dmitry Baryshkov


On Wed, 02 Aug 2023 13:04:18 +0300, Dmitry Baryshkov wrote:
> Having to explicitly init interrupt fields to -1 for non-existent IRQs
> makes it easy to forget and/or miss such initialisation, resulting in
> a wrong interrupt definition.
> 
> Instead, shift all IRQ indices so that '0' becomes the non-existent IRQ.
> 
> Dependencies: [1]
> 
> [...]

Applied, thanks!

[1/8] drm/msm/dpu: fix the irq index in dpu_encoder_phys_wb_wait_for_commit_done
  https://gitlab.freedesktop.org/lumag/msm/-/commit/d93cf453f51d

Best regards,
-- 
Dmitry Baryshkov 
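
As an aside, the '0 means no IRQ' convention from the cover letter above is
the usual index-plus-one encoding; a generic sketch (illustrative, not the
actual dpu code):

	/* Store irq_idx + 1 so a zero-initialised field means "no IRQ". */
	#define IRQ_IDX_TO_FIELD(idx)	((idx) + 1)
	#define FIELD_TO_IRQ_IDX(f)	((int)(f) - 1)

	static inline bool irq_field_valid(unsigned int f)
	{
		return f != 0;	/* unset entries naturally read as "none" */
	}

With this encoding, a forgotten initialisation yields 'no IRQ' instead of a
bogus interrupt at index 0.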


Re: [Freedreno] [PATCH v2] drm/msm/dpu: Drop encoder vsync_event

2023-08-04 Thread Dmitry Baryshkov


On Wed, 02 Aug 2023 10:01:13 -0700, Jessica Zhang wrote:
> Drop vsync_event and vsync_event_work handlers as they are unnecessary.
> In addition drop the dpu_enc_ktime_template event class as it will be
> unused after the vsync_event handlers are dropped.
> 
> 

Applied, thanks!

[1/1] drm/msm/dpu: Drop encoder vsync_event
  https://gitlab.freedesktop.org/lumag/msm/-/commit/fdcb8fe0c9f0

Best regards,
-- 
Dmitry Baryshkov 


Re: [Freedreno] [PATCH] drm/msm/mdp5: Don't leak some plane state

2023-08-04 Thread Dmitry Baryshkov


On Thu, 03 Aug 2023 22:45:21 +0200, Daniel Vetter wrote:
> Apparently no one noticed that mdp5 plane states leak like a sieve
> ever since we introduced plane_state->commit refcount a few years ago
> in 21a01abbe32a ("drm/atomic: Fix freeing connector/plane state too
> early by tracking commits, v3.")
> 
> Fix it by using the right helpers.
> 
> [...]

Applied, thanks!

[1/1] drm/msm/mdp5: Don't leak some plane state
  https://gitlab.freedesktop.org/lumag/msm/-/commit/fd0ad3b2365c

Best regards,
-- 
Dmitry Baryshkov 


Re: [Freedreno] [PATCH] drm/msm/dpu: clean up some inconsistent indenting

2023-08-04 Thread Dmitry Baryshkov


On Fri, 04 Aug 2023 15:57:46 +0800, Jiapeng Chong wrote:
> No functional modification involved.
> 
> drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c:183 dpu_core_perf_crtc_check() 
> warn: inconsistent indenting.
> 
> 

Applied, thanks!

[1/1] drm/msm/dpu: clean up some inconsistent indenting
  https://gitlab.freedesktop.org/lumag/msm/-/commit/b0fe70105056

Best regards,
-- 
Dmitry Baryshkov 


Re: [Freedreno] [PATCH] drm/msm/dpu: initialise clk_rate to 0 in _dpu_core_perf_get_core_clk_rate

2023-08-04 Thread Dmitry Baryshkov


On Fri, 04 Aug 2023 12:48:04 +0300, Dmitry Baryshkov wrote:
> When removing the core perf tune overrides, I also accidentally removed the
> initialisation of the clk_rate variable. Initialise it to 0 to let max()
> correctly calculate the maximum of the requested clock rates.
> 
> 

Applied, thanks!

[1/1] drm/msm/dpu: initialise clk_rate to 0 in _dpu_core_perf_get_core_clk_rate
  https://gitlab.freedesktop.org/lumag/msm/-/commit/34202be95237

Best regards,
-- 
Dmitry Baryshkov 


Re: [Freedreno] [PATCH v2 13/13] drm/msm/adreno: Switch to chip-id for identifying GPU

2023-08-04 Thread Dmitry Baryshkov
On Fri, 28 Jul 2023 at 00:23, Rob Clark  wrote:
>
> From: Rob Clark 
>
> Since the revision becomes an opaque identifier with future GPUs, move
> away from treating different ranges of bits as having a given meaning.
> This means that we need to explicitly list different patch revisions in
> the device table.
>
> Signed-off-by: Rob Clark 
> ---
>  drivers/gpu/drm/msm/adreno/a4xx_gpu.c  |   2 +-
>  drivers/gpu/drm/msm/adreno/a5xx_gpu.c  |   2 +-
>  drivers/gpu/drm/msm/adreno/a5xx_power.c|   2 +-
>  drivers/gpu/drm/msm/adreno/a6xx_gmu.c  |  14 ++-
>  drivers/gpu/drm/msm/adreno/adreno_device.c | 137 +++--
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c|  14 +--
>  drivers/gpu/drm/msm/adreno/adreno_gpu.h|  49 
>  7 files changed, 115 insertions(+), 105 deletions(-)

Reviewed-by: Dmitry Baryshkov 


-- 
With best wishes
Dmitry


Re: [Freedreno] [PATCH] drm/msm/dpu: initialise clk_rate to 0 in _dpu_core_perf_get_core_clk_rate

2023-08-04 Thread Abhinav Kumar




On 8/4/2023 2:48 AM, Dmitry Baryshkov wrote:

When removing the core perf tune overrides, I also accidentally removed the
initialisation of the clk_rate variable. Initialise it to 0 to let max()
correctly calculate the maximum of the requested clock rates.

Reported-by: Dan Carpenter 
Fixes: 6a4bc73915af ("drm/msm/dpu: drop separate dpu_core_perf_tune overrides")
Signed-off-by: Dmitry Baryshkov 
---
  drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 1 +
  1 file changed, 1 insertion(+)




Reviewed-by: Abhinav Kumar 


Re: [Freedreno] [PATCH RFC v5 02/10] drm: Introduce solid fill DRM plane property

2023-08-04 Thread Dmitry Baryshkov
On Fri, 4 Aug 2023 at 16:44, Sebastian Wick  wrote:
>
> On Fri, Aug 4, 2023 at 3:27 PM Dmitry Baryshkov
>  wrote:
> >
> > On Fri, 28 Jul 2023 at 20:03, Jessica Zhang  
> > wrote:
> > >
> > > Document and add support for solid_fill property to drm_plane. In
> > > addition, add support for setting and getting the values for solid_fill.
> > >
> > > To enable solid fill planes, userspace must assign a property blob to
> > > the "solid_fill" plane property containing the following information:
> > >
> > > struct drm_mode_solid_fill {
> > > u32 version;
> > > u32 r, g, b;
> > > };
> > >
> > > Signed-off-by: Jessica Zhang 
> > > ---
> > >  drivers/gpu/drm/drm_atomic_state_helper.c |  9 +
> > >  drivers/gpu/drm/drm_atomic_uapi.c | 55 
> > > +++
> > >  drivers/gpu/drm/drm_blend.c   | 30 +
> > >  include/drm/drm_blend.h   |  1 +
> > >  include/drm/drm_plane.h   | 35 
> > >  include/uapi/drm/drm_mode.h   | 24 ++
> > >  6 files changed, 154 insertions(+)
> > >
> >
> > [skipped most of the patch]
> >
> > > diff --git a/include/uapi/drm/drm_mode.h b/include/uapi/drm/drm_mode.h
> > > index 43691058d28f..53c8efa5ad7f 100644
> > > --- a/include/uapi/drm/drm_mode.h
> > > +++ b/include/uapi/drm/drm_mode.h
> > > @@ -259,6 +259,30 @@ struct drm_mode_modeinfo {
> > > char name[DRM_DISPLAY_MODE_LEN];
> > >  };
> > >
> > > +/**
> > > + * struct drm_mode_solid_fill - User info for solid fill planes
> > > + *
> > > + * This is the userspace API solid fill information structure.
> > > + *
> > > + * Userspace can enable solid fill planes by assigning the plane 
> > > "solid_fill"
> > > + * property to a blob containing a single drm_mode_solid_fill struct 
> > > populated with an RGB323232
> > > + * color and setting the pixel source to "SOLID_FILL".
> > > + *
> > > + * For information on the plane property, see 
> > > drm_plane_create_solid_fill_property()
> > > + *
> > > + * @version: Version of the blob. Currently, there is only support for 
> > > version == 1
> > > + * @r: Red color value of single pixel
> > > + * @g: Green color value of single pixel
> > > + * @b: Blue color value of single pixel
> > > + */
> > > +struct drm_mode_solid_fill {
> > > +   __u32 version;
> > > +   __u32 r;
> > > +   __u32 g;
> > > +   __u32 b;
> >
> > Another thought about the drm_mode_solid_fill uABI. I still think we
> > should add alpha here. The reason is the following:
> >
> > It is true that we have  drm_plane_state::alpha and the plane's
> > "alpha" property. However it is documented as "the plane-wide opacity
> > [...] It can be combined with pixel alpha. The pixel values in the
> > framebuffers are expected to not be pre-multiplied by the global alpha
> > associated to the plane.".
> >
> > I can imagine a use case, when a user might want to enable plane-wide
> > opacity, set "pixel blend mode" to "Coverage" and then switch between
> > partially opaque framebuffer and partially opaque solid-fill without
> > touching the plane's alpha value.
>
> The only reason I see against this is that there might be some
> hardware which supports only RGB but not alpha on planes and they
> could then not use this property.

Fair enough.

> Maybe another COLOR_FILL enum value
> with alpha might be better? Maybe just doing the alpha via the alpha
> property is good enough.

One of our customers has a use case for setting an opaque solid fill
while keeping the plane's alpha intact.

-- 
With best wishes
Dmitry


Re: [Freedreno] [PATCH RFC v5 02/10] drm: Introduce solid fill DRM plane property

2023-08-04 Thread Sebastian Wick
On Fri, Aug 4, 2023 at 3:27 PM Dmitry Baryshkov
 wrote:
>
> On Fri, 28 Jul 2023 at 20:03, Jessica Zhang  wrote:
> >
> > Document and add support for solid_fill property to drm_plane. In
> > addition, add support for setting and getting the values for solid_fill.
> >
> > To enable solid fill planes, userspace must assign a property blob to
> > the "solid_fill" plane property containing the following information:
> >
> > struct drm_mode_solid_fill {
> > u32 version;
> > u32 r, g, b;
> > };
> >
> > Signed-off-by: Jessica Zhang 
> > ---
> >  drivers/gpu/drm/drm_atomic_state_helper.c |  9 +
> >  drivers/gpu/drm/drm_atomic_uapi.c | 55 
> > +++
> >  drivers/gpu/drm/drm_blend.c   | 30 +
> >  include/drm/drm_blend.h   |  1 +
> >  include/drm/drm_plane.h   | 35 
> >  include/uapi/drm/drm_mode.h   | 24 ++
> >  6 files changed, 154 insertions(+)
> >
>
> [skipped most of the patch]
>
> > diff --git a/include/uapi/drm/drm_mode.h b/include/uapi/drm/drm_mode.h
> > index 43691058d28f..53c8efa5ad7f 100644
> > --- a/include/uapi/drm/drm_mode.h
> > +++ b/include/uapi/drm/drm_mode.h
> > @@ -259,6 +259,30 @@ struct drm_mode_modeinfo {
> > char name[DRM_DISPLAY_MODE_LEN];
> >  };
> >
> > +/**
> > + * struct drm_mode_solid_fill - User info for solid fill planes
> > + *
> > + * This is the userspace API solid fill information structure.
> > + *
> > + * Userspace can enable solid fill planes by assigning the plane 
> > "solid_fill"
> > + * property to a blob containing a single drm_mode_solid_fill struct 
> > populated with an RGB323232
> > + * color and setting the pixel source to "SOLID_FILL".
> > + *
> > + * For information on the plane property, see 
> > drm_plane_create_solid_fill_property()
> > + *
> > + * @version: Version of the blob. Currently, there is only support for 
> > version == 1
> > + * @r: Red color value of single pixel
> > + * @g: Green color value of single pixel
> > + * @b: Blue color value of single pixel
> > + */
> > +struct drm_mode_solid_fill {
> > +   __u32 version;
> > +   __u32 r;
> > +   __u32 g;
> > +   __u32 b;
>
> Another thought about the drm_mode_solid_fill uABI. I still think we
> should add alpha here. The reason is the following:
>
> It is true that we have  drm_plane_state::alpha and the plane's
> "alpha" property. However it is documented as "the plane-wide opacity
> [...] It can be combined with pixel alpha. The pixel values in the
> framebuffers are expected to not be pre-multiplied by the global alpha
> associated to the plane.".
>
> I can imagine a use case, when a user might want to enable plane-wide
> opacity, set "pixel blend mode" to "Coverage" and then switch between
> partially opaque framebuffer and partially opaque solid-fill without
> touching the plane's alpha value.

The only reason I see against this is that there might be some
hardware which supports only RGB but not alpha on planes and they
could then not use this property. Maybe another COLOR_FILL enum value
with alpha might be better? Maybe just doing the alpha via the alpha
property is good enough.

>
> --
> With best wishes
> Dmitry
>



Re: [Freedreno] [PATCH RFC v5 02/10] drm: Introduce solid fill DRM plane property

2023-08-04 Thread Sebastian Wick
On Mon, Jul 31, 2023 at 6:01 AM Dmitry Baryshkov
 wrote:
>
> On 28/07/2023 20:02, Jessica Zhang wrote:
> > Document and add support for solid_fill property to drm_plane. In
> > addition, add support for setting and getting the values for solid_fill.
> >
> > To enable solid fill planes, userspace must assign a property blob to
> > the "solid_fill" plane property containing the following information:
> >
> > struct drm_mode_solid_fill {
> >   u32 version;
> >   u32 r, g, b;
> > };
> >
> > Signed-off-by: Jessica Zhang 
> > ---
> >   drivers/gpu/drm/drm_atomic_state_helper.c |  9 +
> >   drivers/gpu/drm/drm_atomic_uapi.c | 55 
> > +++
> >   drivers/gpu/drm/drm_blend.c   | 30 +
> >   include/drm/drm_blend.h   |  1 +
> >   include/drm/drm_plane.h   | 35 
> >   include/uapi/drm/drm_mode.h   | 24 ++
> >   6 files changed, 154 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_atomic_state_helper.c 
> > b/drivers/gpu/drm/drm_atomic_state_helper.c
> > index 01638c51ce0a..86fb876efbe6 100644
> > --- a/drivers/gpu/drm/drm_atomic_state_helper.c
> > +++ b/drivers/gpu/drm/drm_atomic_state_helper.c
> > @@ -254,6 +254,11 @@ void __drm_atomic_helper_plane_state_reset(struct 
> > drm_plane_state *plane_state,
> >   plane_state->pixel_blend_mode = DRM_MODE_BLEND_PREMULTI;
> >   plane_state->pixel_source = DRM_PLANE_PIXEL_SOURCE_FB;
> >
> > + if (plane_state->solid_fill_blob) {
> > + drm_property_blob_put(plane_state->solid_fill_blob);
> > + plane_state->solid_fill_blob = NULL;
> > + }
> > +
> >   if (plane->color_encoding_property) {
> >   if (!drm_object_property_get_default_value(&plane->base,
> >  
> > plane->color_encoding_property,
> > @@ -336,6 +341,9 @@ void __drm_atomic_helper_plane_duplicate_state(struct 
> > drm_plane *plane,
> >   if (state->fb)
> >   drm_framebuffer_get(state->fb);
> >
> > + if (state->solid_fill_blob)
> > + drm_property_blob_get(state->solid_fill_blob);
> > +
> >   state->fence = NULL;
> >   state->commit = NULL;
> >   state->fb_damage_clips = NULL;
> > @@ -385,6 +393,7 @@ void __drm_atomic_helper_plane_destroy_state(struct 
> > drm_plane_state *state)
> >   drm_crtc_commit_put(state->commit);
> >
> >   drm_property_blob_put(state->fb_damage_clips);
> > + drm_property_blob_put(state->solid_fill_blob);
> >   }
> >   EXPORT_SYMBOL(__drm_atomic_helper_plane_destroy_state);
> >
> > diff --git a/drivers/gpu/drm/drm_atomic_uapi.c 
> > b/drivers/gpu/drm/drm_atomic_uapi.c
> > index 454f980e16c9..039686c78c2a 100644
> > --- a/drivers/gpu/drm/drm_atomic_uapi.c
> > +++ b/drivers/gpu/drm/drm_atomic_uapi.c
> > @@ -316,6 +316,51 @@ drm_atomic_set_crtc_for_connector(struct 
> > drm_connector_state *conn_state,
> >   }
> >   EXPORT_SYMBOL(drm_atomic_set_crtc_for_connector);
> >
> > +static int drm_atomic_set_solid_fill_prop(struct drm_plane_state *state,
> > + struct drm_property_blob *blob)
> > +{
> > + int blob_version;
> > +
> > + if (blob == state->solid_fill_blob)
> > + return 0;
> > +
> > + if (blob) {
> > + struct drm_mode_solid_fill *user_info = (struct 
> > drm_mode_solid_fill *)blob->data;
> > +
> > + if (blob->length != sizeof(struct drm_mode_solid_fill)) {
> > + drm_dbg_atomic(state->plane->dev,
> > +"[PLANE:%d:%s] bad solid fill blob 
> > length: %zu\n",
> > +state->plane->base.id, 
> > state->plane->name,
> > +blob->length);
> > + return -EINVAL;
> > + }
> > +
> > + blob_version = user_info->version;
>
> I remember that we had versioning for quite some time. However, it struck
> me while reviewing that we do not have a way to negotiate supported
> SOLID_FILL versions. However, as we now have support for different
> pixel_source properties, maybe we can drop the version completely. If at
> some point a driver needs to support a different kind of SOLID_FILL
> property (consider v2), we can add a new pixel source to the enum.

Agreed. Let's drop the versioning.

>
> > +
> > + /* Add more versions if necessary */
> > + if (blob_version == 1) {
> > + state->solid_fill.r = user_info->r;
> > + state->solid_fill.g = user_info->g;
> > + state->solid_fill.b = user_info->b;
> > + } else {
> > + drm_dbg_atomic(state->plane->dev,
> > +"[PLANE:%d:%s] unsupported solid fill 
> > version (version=%d)\n",
> > +state->plane->base.id, 
> > state->plane->name,
> > +  

Re: [Freedreno] [PATCH RFC v5 02/10] drm: Introduce solid fill DRM plane property

2023-08-04 Thread Dmitry Baryshkov
On Fri, 28 Jul 2023 at 20:03, Jessica Zhang  wrote:
>
> Document and add support for solid_fill property to drm_plane. In
> addition, add support for setting and getting the values for solid_fill.
>
> To enable solid fill planes, userspace must assign a property blob to
> the "solid_fill" plane property containing the following information:
>
> struct drm_mode_solid_fill {
> u32 version;
> u32 r, g, b;
> };
>
> Signed-off-by: Jessica Zhang 
> ---
>  drivers/gpu/drm/drm_atomic_state_helper.c |  9 +
>  drivers/gpu/drm/drm_atomic_uapi.c | 55 
> +++
>  drivers/gpu/drm/drm_blend.c   | 30 +
>  include/drm/drm_blend.h   |  1 +
>  include/drm/drm_plane.h   | 35 
>  include/uapi/drm/drm_mode.h   | 24 ++
>  6 files changed, 154 insertions(+)
>

[skipped most of the patch]

> diff --git a/include/uapi/drm/drm_mode.h b/include/uapi/drm/drm_mode.h
> index 43691058d28f..53c8efa5ad7f 100644
> --- a/include/uapi/drm/drm_mode.h
> +++ b/include/uapi/drm/drm_mode.h
> @@ -259,6 +259,30 @@ struct drm_mode_modeinfo {
> char name[DRM_DISPLAY_MODE_LEN];
>  };
>
> +/**
> + * struct drm_mode_solid_fill - User info for solid fill planes
> + *
> + * This is the userspace API solid fill information structure.
> + *
> + * Userspace can enable solid fill planes by assigning the plane "solid_fill"
> + * property to a blob containing a single drm_mode_solid_fill struct 
> populated with an RGB323232
> + * color and setting the pixel source to "SOLID_FILL".
> + *
> + * For information on the plane property, see 
> drm_plane_create_solid_fill_property()
> + *
> + * @version: Version of the blob. Currently, there is only support for 
> version == 1
> + * @r: Red color value of single pixel
> + * @g: Green color value of single pixel
> + * @b: Blue color value of single pixel
> + */
> +struct drm_mode_solid_fill {
> +   __u32 version;
> +   __u32 r;
> +   __u32 g;
> +   __u32 b;

Another thought about the drm_mode_solid_fill uABI. I still think we
should add alpha here. The reason is the following:

It is true that we have  drm_plane_state::alpha and the plane's
"alpha" property. However it is documented as "the plane-wide opacity
[...] It can be combined with pixel alpha. The pixel values in the
framebuffers are expected to not be pre-multiplied by the global alpha
associated to the plane.".

I can imagine a use case, when a user might want to enable plane-wide
opacity, set "pixel blend mode" to "Coverage" and then switch between
partially opaque framebuffer and partially opaque solid-fill without
touching the plane's alpha value.

-- 
With best wishes
Dmitry
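
For reference, a hypothetical userspace sequence for this RFC could look
like the sketch below (the struct and the "solid_fill"/"pixel_source"
property names come from the series; the property IDs, the enum value and
all other names are assumptions that would be discovered via property
enumeration):

	#include <stdint.h>
	#include <xf86drm.h>
	#include <xf86drmMode.h>

	/* From the proposed uAPI (see the quoted patch above). */
	struct drm_mode_solid_fill {
		uint32_t version;
		uint32_t r, g, b;
	};

	static void set_solid_fill(int fd, uint32_t plane_id,
				   uint32_t solid_fill_prop_id,
				   uint32_t pixel_source_prop_id,
				   uint64_t pixel_source_solid_fill)
	{
		struct drm_mode_solid_fill fill = {
			.version = 1,
			.r = 0xffffffff, .g = 0, .b = 0,	/* full-scale red */
		};
		uint32_t blob_id = 0;
		drmModeAtomicReq *req = drmModeAtomicAlloc();

		/* Wrap the fill color in a property blob and attach it. */
		drmModeCreatePropertyBlob(fd, &fill, sizeof(fill), &blob_id);
		drmModeAtomicAddProperty(req, plane_id, solid_fill_prop_id, blob_id);
		drmModeAtomicAddProperty(req, plane_id, pixel_source_prop_id,
					 pixel_source_solid_fill);
		drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
		drmModeAtomicFree(req);
		drmModeDestroyPropertyBlob(fd, blob_id);
	}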


Re: [Freedreno] [PATCH] drm/msm/dpu: clean up some inconsistent indenting

2023-08-04 Thread Dmitry Baryshkov
On Fri, 4 Aug 2023 at 10:57, Jiapeng Chong
 wrote:
>
> No functional modification involved.
>
> drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c:183 dpu_core_perf_crtc_check() 
> warn: inconsistent indenting.
>
> Reported-by: Abaci Robot 
> Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=6096
> Signed-off-by: Jiapeng Chong 
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)

Reviewed-by: Dmitry Baryshkov 




-- 
With best wishes
Dmitry


Re: [Freedreno] [PATCH RFC v5 01/10] drm: Introduce pixel_source DRM plane property

2023-08-04 Thread Sebastian Wick
On Fri, Jul 28, 2023 at 7:03 PM Jessica Zhang  wrote:
>
> Add support for pixel_source property to drm_plane and related
> documentation. In addition, force pixel_source to
> DRM_PLANE_PIXEL_SOURCE_FB in DRM_IOCTL_MODE_SETPLANE as to not break
> legacy userspace.
>
> This enum property will allow user to specify a pixel source for the
> plane. Possible pixel sources will be defined in the
> drm_plane_pixel_source enum.
>
> The current possible pixel sources are DRM_PLANE_PIXEL_SOURCE_NONE and
> DRM_PLANE_PIXEL_SOURCE_FB with *_PIXEL_SOURCE_FB being the default value.
>
> Signed-off-by: Jessica Zhang 
> ---
>  drivers/gpu/drm/drm_atomic_state_helper.c |  1 +
>  drivers/gpu/drm/drm_atomic_uapi.c |  4 ++
>  drivers/gpu/drm/drm_blend.c   | 85 
> +++
>  drivers/gpu/drm/drm_plane.c   |  3 ++
>  include/drm/drm_blend.h   |  2 +
>  include/drm/drm_plane.h   | 21 
>  6 files changed, 116 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_atomic_state_helper.c 
> b/drivers/gpu/drm/drm_atomic_state_helper.c
> index 784e63d70a42..01638c51ce0a 100644
> --- a/drivers/gpu/drm/drm_atomic_state_helper.c
> +++ b/drivers/gpu/drm/drm_atomic_state_helper.c
> @@ -252,6 +252,7 @@ void __drm_atomic_helper_plane_state_reset(struct 
> drm_plane_state *plane_state,
>
> plane_state->alpha = DRM_BLEND_ALPHA_OPAQUE;
> plane_state->pixel_blend_mode = DRM_MODE_BLEND_PREMULTI;
> +   plane_state->pixel_source = DRM_PLANE_PIXEL_SOURCE_FB;
>
> if (plane->color_encoding_property) {
> if (!drm_object_property_get_default_value(&plane->base,
> diff --git a/drivers/gpu/drm/drm_atomic_uapi.c 
> b/drivers/gpu/drm/drm_atomic_uapi.c
> index d867e7f9f2cd..454f980e16c9 100644
> --- a/drivers/gpu/drm/drm_atomic_uapi.c
> +++ b/drivers/gpu/drm/drm_atomic_uapi.c
> @@ -544,6 +544,8 @@ static int drm_atomic_plane_set_property(struct drm_plane 
> *plane,
> state->src_w = val;
> } else if (property == config->prop_src_h) {
> state->src_h = val;
> +   } else if (property == plane->pixel_source_property) {
> +   state->pixel_source = val;
> } else if (property == plane->alpha_property) {
> state->alpha = val;
> } else if (property == plane->blend_mode_property) {
> @@ -616,6 +618,8 @@ drm_atomic_plane_get_property(struct drm_plane *plane,
> *val = state->src_w;
> } else if (property == config->prop_src_h) {
> *val = state->src_h;
> +   } else if (property == plane->pixel_source_property) {
> +   *val = state->pixel_source;
> } else if (property == plane->alpha_property) {
> *val = state->alpha;
> } else if (property == plane->blend_mode_property) {
> diff --git a/drivers/gpu/drm/drm_blend.c b/drivers/gpu/drm/drm_blend.c
> index 6e74de833466..c500310a3d09 100644
> --- a/drivers/gpu/drm/drm_blend.c
> +++ b/drivers/gpu/drm/drm_blend.c
> @@ -185,6 +185,21 @@
>   *  plane does not expose the "alpha" property, then this is
>   *  assumed to be 1.0
>   *
> + * pixel_source:
> + * pixel_source is set up with drm_plane_create_pixel_source_property().
> + * It is used to toggle the active source of pixel data for the plane.
> + * The plane will only display data from the set pixel_source -- any
> + * data from other sources will be ignored.
> + *
> + * Possible values:
> + *
> + * "NONE":
> + * No active pixel source.
> + * Committing with a NONE pixel source will disable the plane.
> + *
> + * "FB":
> + * Framebuffer source set by the "FB_ID" property.
> + *
>   * Note that all the property extensions described here apply either to the
>   * plane or the CRTC (e.g. for the background color, which currently is not
>   * exposed and assumed to be black).
> @@ -615,3 +630,73 @@ int drm_plane_create_blend_mode_property(struct 
> drm_plane *plane,
> return 0;
>  }
>  EXPORT_SYMBOL(drm_plane_create_blend_mode_property);
> +
> +/**
> + * drm_plane_create_pixel_source_property - create a new pixel source 
> property
> + * @plane: DRM plane
> + * @extra_sources: Bitmask of additional supported pixel_sources for the 
> driver.
> + *DRM_PLANE_PIXEL_SOURCE_FB always be enabled as a supported
> + *source.
> + *
> + * This creates a new property describing the current source of pixel data 
> for the
> + * plane. The pixel_source will be initialized as DRM_PLANE_PIXEL_SOURCE_FB 
> by default.
> + *
> + * Drivers can set a custom default source by overriding the pixel_source 
> value in
> + * drm_plane_funcs.reset()
> + *
> + * The property is exposed to userspace as an enumeration property called
> + * "pixel_source" and has the following enumeration values:
> + *
> + * "NONE":
> + *  No active pixel source
> + *
> + * "FB":
> + * Frameb

Re: [Freedreno] [PATCH v2] drm/msm/dpu: Drop encoder vsync_event

2023-08-04 Thread Dmitry Baryshkov
On Wed, 2 Aug 2023 at 20:01, Jessica Zhang  wrote:
>
> Drop vsync_event and vsync_event_work handlers as they are unnecessary.
> In addition drop the dpu_enc_ktime_template event class as it will be
> unused after the vsync_event handlers are dropped.
>
> Signed-off-by: Jessica Zhang 
> ---
> Changes in v2:
> - Dropped dpu_enc_early_kickoff event and dpu_enc_ktime_template event class
> - Link to v1: 
> https://lore.kernel.org/r/20230801-encoder-cleanup-v1-1-f9e37fe27...@quicinc.com
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 65 
> +
>  drivers/gpu/drm/msm/disp/dpu1/dpu_trace.h   | 23 --
>  2 files changed, 1 insertion(+), 87 deletions(-)

Reviewed-by: Dmitry Baryshkov 



-- 
With best wishes
Dmitry


[Freedreno] [PATCH] drm/msm/dpu: initialise clk_rate to 0 in _dpu_core_perf_get_core_clk_rate

2023-08-04 Thread Dmitry Baryshkov
When removing the core perf tune overrides, I also accidentally removed the
initialisation of the clk_rate variable. Initialise it to 0 to let max()
correctly calculate the maximum of the requested clock rates.

Reported-by: Dan Carpenter 
Fixes: 6a4bc73915af ("drm/msm/dpu: drop separate dpu_core_perf_tune overrides")
Signed-off-by: Dmitry Baryshkov 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
index 0b54a659a940..e843e5f3b3dd 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
@@ -289,6 +289,7 @@ static u64 _dpu_core_perf_get_core_clk_rate(struct dpu_kms 
*kms)
if (kms->perf.perf_tune.mode == DPU_PERF_MODE_MINIMUM)
return kms->perf.max_core_clk_rate;
 
+   clk_rate = 0;
drm_for_each_crtc(crtc, kms->dev) {
if (crtc->enabled) {
dpu_cstate = to_dpu_crtc_state(crtc->state);
-- 
2.39.2



Re: [Freedreno] [bug report] drm/msm/dpu: drop separate dpu_core_perf_tune overrides

2023-08-04 Thread Dmitry Baryshkov
Hi Dan,

On Fri, 4 Aug 2023 at 08:37, Dan Carpenter  wrote:
>
> Hello Dmitry Baryshkov,
>
> The patch 6a4bc73915af: "drm/msm/dpu: drop separate
> dpu_core_perf_tune overrides" from Jul 30, 2023 (linux-next), leads
> to the following Smatch static checker warning:
>
> drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c:295 
> _dpu_core_perf_get_core_clk_rate()
> error: uninitialized symbol 'clk_rate'.

Thank you for your report, I will initialise it to 0.

>
> drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
> 280 static u64 _dpu_core_perf_get_core_clk_rate(struct dpu_kms *kms)
> 281 {
> 282 u64 clk_rate;
> 283 struct drm_crtc *crtc;
> 284 struct dpu_crtc_state *dpu_cstate;
> 285
> 286 if (kms->perf.perf_tune.mode == DPU_PERF_MODE_FIXED)
> 287 return kms->perf.fix_core_clk_rate;
> 288
> 289 if (kms->perf.perf_tune.mode == DPU_PERF_MODE_MINIMUM)
> 290 return kms->perf.max_core_clk_rate;
> 291
> 292 drm_for_each_crtc(crtc, kms->dev) {
> 293 if (crtc->enabled) {
> 294 dpu_cstate = to_dpu_crtc_state(crtc->state);
> --> 295 clk_rate = max(dpu_cstate->new_perf.core_clk_rate,
> 296clk_rate);
> 
> Never initialized
>
> 297 }
> 298 }
> 299
> 300 return clk_rate;
> 301 }
>
> regards,
> dan carpenter



-- 
With best wishes
Dmitry


Re: [Freedreno] [PATCH v3 4/9] PM / QoS: Decouple request alloc from dev_pm_qos_mtx

2023-08-04 Thread Dan Carpenter
Hi Rob,

kernel test robot noticed the following build warnings:

[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Rob-Clark/PM-devfreq-Drop-unneed-locking-to-appease-lockdep/20230804-060505
base:   git://anongit.freedesktop.org/drm/drm-misc drm-misc-next
patch link:    https://lore.kernel.org/r/20230803220202.78036-5-robdclark%40gmail.com
patch subject: [PATCH v3 4/9] PM / QoS: Decouple request alloc from dev_pm_qos_mtx
config: i386-randconfig-m021-20230730 (https://download.01.org/0day-ci/archive/20230804/202308041725.npwswh0l-...@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce: (https://download.01.org/0day-ci/archive/20230804/202308041725.npwswh0l-...@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot 
| Reported-by: Dan Carpenter 
| Closes: https://lore.kernel.org/r/202308041725.npwswh0l-...@intel.com/

smatch warnings:
drivers/base/power/qos.c:973 dev_pm_qos_update_user_latency_tolerance() warn: possible memory leak of 'req'

vim +/req +973 drivers/base/power/qos.c

2d984ad132a87c Rafael J. Wysocki 2014-02-11  923  int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
2d984ad132a87c Rafael J. Wysocki 2014-02-11  924  {
b5ac35ff4296f7 Rob Clark         2023-08-03  925  	struct dev_pm_qos_request *req = NULL;
2d984ad132a87c Rafael J. Wysocki 2014-02-11  926  	int ret;
2d984ad132a87c Rafael J. Wysocki 2014-02-11  927  
211fb32e3a0547 Rob Clark         2023-08-03  928  	ret = dev_pm_qos_constraints_ensure_allocated(dev);
211fb32e3a0547 Rob Clark         2023-08-03  929  	if (ret)
211fb32e3a0547 Rob Clark         2023-08-03  930  		return ret;
211fb32e3a0547 Rob Clark         2023-08-03  931  
b5ac35ff4296f7 Rob Clark         2023-08-03  932  	if (!dev->power.qos->latency_tolerance_req)
b5ac35ff4296f7 Rob Clark         2023-08-03  933  		req = kzalloc(sizeof(*req), GFP_KERNEL);
b5ac35ff4296f7 Rob Clark         2023-08-03  934  
2d984ad132a87c Rafael J. Wysocki 2014-02-11  935  	mutex_lock(&dev_pm_qos_mtx);
2d984ad132a87c Rafael J. Wysocki 2014-02-11  936  
211fb32e3a0547 Rob Clark         2023-08-03  937  	if (!dev->power.qos->latency_tolerance_req) {
2d984ad132a87c Rafael J. Wysocki 2014-02-11  938  		if (val < 0) {
80a6f7c79b7822 Andrew Lutomirski 2016-11-29  939  			if (val == PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT)
80a6f7c79b7822 Andrew Lutomirski 2016-11-29  940  				ret = 0;
80a6f7c79b7822 Andrew Lutomirski 2016-11-29  941  			else
2d984ad132a87c Rafael J. Wysocki 2014-02-11  942  				ret = -EINVAL;
2d984ad132a87c Rafael J. Wysocki 2014-02-11  943  			goto out;

kfree(req);?

2d984ad132a87c Rafael J. Wysocki 2014-02-11  944  		}
2d984ad132a87c Rafael J. Wysocki 2014-02-11  945  		if (!req) {
2d984ad132a87c Rafael J. Wysocki 2014-02-11  946  			ret = -ENOMEM;
2d984ad132a87c Rafael J. Wysocki 2014-02-11  947  			goto out;
2d984ad132a87c Rafael J. Wysocki 2014-02-11  948  		}
2d984ad132a87c Rafael J. Wysocki 2014-02-11  949  		ret = __dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY_TOLERANCE, val);
2d984ad132a87c Rafael J. Wysocki 2014-02-11  950  		if (ret < 0) {
2d984ad132a87c Rafael J. Wysocki 2014-02-11  951  			kfree(req);
2d984ad132a87c Rafael J. Wysocki 2014-02-11  952  			goto out;
2d984ad132a87c Rafael J. Wysocki 2014-02-11  953  		}
2d984ad132a87c Rafael J. Wysocki 2014-02-11  954  		dev->power.qos->latency_tolerance_req = req;
2d984ad132a87c Rafael J. Wysocki 2014-02-11  955  	} else {
b5ac35ff4296f7 Rob Clark         2023-08-03  956  		/*
b5ac35ff4296f7 Rob Clark         2023-08-03  957  		 * If we raced with another thread to allocate the request,
b5ac35ff4296f7 Rob Clark         2023-08-03  958  		 * simply free the redundant allocation and move on.
b5ac35ff4296f7 Rob Clark         2023-08-03  959  		 */
b5ac35ff4296f7 Rob Clark         2023-08-03  960  		if (req)
b5ac35ff4296f7 Rob Clark         2023-08-03  961  			kfree(req);
b5ac35ff4296f7 Rob Clark         2023-08-03  962  
2d984ad132a87c Rafael J. Wysocki 2014-02-11  963  		if (val < 0) {
2d984ad132a87c Rafael J. Wysocki 2014-02-11  964  			__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY_TOLERANCE);
2d984ad132a87c Rafael J. Wysocki 2014-02-11  965  			ret = 0;
2d984ad132a87c Rafael J. Wysocki 2014-02-11  966  		} else {
2d984ad

[Freedreno] [PATCH] drm/msm/dpu: clean up some inconsistent indenting

2023-08-04 Thread Jiapeng Chong
No functional modification involved.

drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c:183 dpu_core_perf_crtc_check() 
warn: inconsistent indenting.

Reported-by: Abaci Robot 
Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=6096
Signed-off-by: Jiapeng Chong 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
index 0b54a659a940..1c0e7e5480e4 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
@@ -177,10 +177,10 @@ int dpu_core_perf_crtc_check(struct drm_crtc *crtc,
to_dpu_crtc_state(tmp_crtc->state);
 
DRM_DEBUG_ATOMIC("crtc:%d bw:%llu ctrl:%d\n",
-   tmp_crtc->base.id, tmp_cstate->new_perf.bw_ctl,
-   tmp_cstate->bw_control);
+				 tmp_crtc->base.id, tmp_cstate->new_perf.bw_ctl,
+				 tmp_cstate->bw_control);
 
-   bw_sum_of_intfs += tmp_cstate->new_perf.bw_ctl;
+   bw_sum_of_intfs += tmp_cstate->new_perf.bw_ctl;
}
 
/* convert bandwidth to kb */
-- 
2.20.1.7.g153144c